A negative result on the paper presented at SIMBig 2021
After presenting the paper “AmLDA: A Non-VAE Neural Topic Model” at the SIMBig 2021 conference, the author reexamined the experiments and reached a negative conclusion.
In Table 4, our method achieved evaluation results comparable to those of SVI. However, it has turned out that these results are due not to the MLP-based parameterization but to the setting N_{local} = 2, i.e., to performing the local parameter update of the E-step twice.
Sufficient convergence of the local parameter estimation can be obtained by performing the E-step update only twice, rather than dozens of times. The same observation applies to SVI itself: SVI also achieves good enough convergence with just two local parameter updates.
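To make the two-update claim concrete, the following is a minimal sketch of the standard document-level (local) variational update used in stochastic variational inference for LDA. The function name local_e_step, the uniform initialization of gamma, and the toy dimensions are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.special import digamma

def local_e_step(counts, elog_beta, alpha, n_local=2):
    """Local (document-level) variational update for LDA.

    counts:    (V,) word-count vector of one document
    elog_beta: (K, V) expected log topic-word probabilities, E[log beta]
    alpha:     scalar symmetric Dirichlet prior on theta
    n_local:   number of update sweeps; the note claims n_local=2 suffices
    """
    K, _ = elog_beta.shape
    gamma = np.ones(K)  # uniform initialization (an assumption here)
    for _ in range(n_local):
        # E[log theta_k] under the current variational Dirichlet gamma
        elog_theta = digamma(gamma) - digamma(gamma.sum())
        # phi[k, v] proportional to exp(E[log theta_k] + E[log beta_{k,v}])
        log_phi = elog_theta[:, None] + elog_beta
        log_phi -= log_phi.max(axis=0)  # for numerical stability
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=0)
        # Dirichlet update: gamma_k = alpha + sum_v phi[k, v] * counts[v]
        gamma = alpha + phi @ counts
    return gamma, phi
```

Running this with n_local=2 versus a much larger value is one way to check how quickly gamma stabilizes on a given corpus.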
Therefore, our proposed MLP-based parameterization of the encoder serves merely as an initialization for the local parameter update in Algorithm 3. TokenEncoder in Algorithm 3 produces almost the same output for all tokens, and this output acts only as an initial value for the subsequent update computation.
This reexamination shows that our approach was not successful. Nevertheless, some other neural network-based parameterization of the posterior distribution may still work in variational Bayesian inference for topic modeling.