Convergence problems with ADVI

Hi,

I am using minibatch ADVI to fit models for my use case, but I often get very strange convergence/hist plots like this:

The ELBO seems to stabilize around iteration 30000, stays stable for a while, and then becomes very unstable from around iteration 60000.

Could you point me to generic reasons that can cause such hist plots, and what I could do to tackle this problem? (I am new to the world of Bayesian modeling.)

I would prefer not to show my model, since I get this problem with different models, so I suspect the cause is something more generic.
I am using hierarchical models, and one thing I noticed is that groups with very few samples (1 or 2) tend to be estimated very badly.

My guess is that some RVs develop gradient problems after a while.
A few generic solutions:

  1. trace the parameters at each iteration to see which one is causing the problem; see http://docs.pymc.io/notebooks/variational_api_quickstart.html#Tracking-parameters
  2. use more Monte Carlo samples to approximate the gradient, e.g. setting obj_n_mc=25 or higher (the default is 1)
  3. try a different optimizer with a different learning rate, e.g. obj_optimizer=pm.adagrad(learning_rate=.01)

Using a different obj_optimizer did the trick; specifically, adadelta with the defaults gives very stable convergence results.
Thank you for pointing me there.
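For readers wondering why adadelta works well with no tuning: it has no global learning rate to set, since step sizes are derived from running averages of past squared gradients and past squared steps. Below is a toy sketch of the adadelta update rule (Zeiler, 2012) on a 1-D quadratic, not the PyMC implementation; the loss function and constants are chosen only for illustration (rho=0.95 and eps=1e-6 are typical defaults).

```python
import math

rho, eps = 0.95, 1e-6   # decay rate and numerical-stability constant
x = 1.0                 # start away from the minimum of f(x) = x**2
avg_sq_grad, avg_sq_step = 0.0, 0.0
losses = []

for _ in range(5000):
    grad = 2.0 * x  # gradient of f(x) = x**2
    avg_sq_grad = rho * avg_sq_grad + (1 - rho) * grad**2
    # Step size adapts from the ratio of accumulated step/gradient scales.
    step = -math.sqrt(avg_sq_step + eps) / math.sqrt(avg_sq_grad + eps) * grad
    avg_sq_step = rho * avg_sq_step + (1 - rho) * step**2
    x += step
    losses.append(x**2)

print(losses[0], losses[-1])  # the loss decreases toward the minimum
```

The same self-tuning behavior is what makes it a robust drop-in for obj_optimizer when the default optimizer's learning rate destabilizes the ELBO.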