NaN occurred in optimization with ADVI


I tried to infer the following logistic regression model using ADVI:

    def invlogit(self, x):
        return tt.exp(x) / (1 + tt.exp(x))

    with pm.Model() as self.logistic_model:
        alpha = pm.Normal('alpha', mu=0, sd=20)
        beta = pm.Normal('beta', mu=0, sd=20, shape=X.shape[1])

        mu = alpha +, beta)
        p = pm.Deterministic('p', self.invlogit(mu))

        y_obs = pm.Bernoulli('y', p=p, observed=y)  # renamed to avoid shadowing the observed array y
        apprx =, obj_optimizer=pm.adam(), obj_n_mc=20)

Then I got the following error:

Average Loss = 1.0766e+07:   0%|          | 0/10000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/nadheesh/PycharmProjects/mcmc_vs_variational/mcmc/", line 319, in <module>
    ,y)
  File "/home/nadheesh/PycharmProjects/mcmc_vs_variational/mcmc/", line 193, in fit
    apprx =, obj_optimizer=pm.adam(), obj_n_mc=20)
  File "/home/nadheesh/anaconda3/envs/dev/lib/python3.5/site-packages/pymc3/variational/", line 756, in fit
    return, **kwargs)
  File "/home/nadheesh/anaconda3/envs/dev/lib/python3.5/site-packages/pymc3/variational/", line 135, in fit
    state = self._iterate_with_loss(0, n, step_func, progress, callbacks)
  File "/home/nadheesh/anaconda3/envs/dev/lib/python3.5/site-packages/pymc3/variational/", line 181, in _iterate_with_loss
    raise FloatingPointError('NaN occurred in optimization.')
FloatingPointError: NaN occurred in optimization.

I investigated this a little and found that it is caused by a NaN returned by the step_function() when calculating the loss. Moreover, if I change the Bernoulli distribution to a Normal distribution, the model can be trained without any error. However, I can't understand why this error is observed when using the Bernoulli likelihood.
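(A plausible explanation, sketched by the editor rather than stated in the thread: the Bernoulli log-likelihood contains log(p) and log(1 - p), which become -inf as soon as the sigmoid saturates to exactly 0 or 1 in float64, whereas the Normal log-density stays finite for any finite mu. A small NumPy illustration:)

```python
import numpy as np

def bernoulli_logp(y, p):
    """Bernoulli log-likelihood: y*log(p) + (1-y)*log(1-p)."""
    with np.errstate(divide="ignore"):
        return y * np.log(p) + (1 - y) * np.log(1 - p)

def normal_logp(y, mu, sd=1.0):
    """Normal log-density: finite for any finite y and mu."""
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((y - mu) / sd) ** 2

# With unscaled features mu can be e.g. -800, and the sigmoid underflows:
p_saturated = np.exp(-800.0) / (1 + np.exp(-800.0))  # exactly 0.0

print(bernoulli_logp(1, p_saturated))  # -inf: the loss estimate is ruined
print(normal_logp(1.0, -800.0))        # very negative, but finite
```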

I'd appreciate it if someone could help me resolve this issue.

How to use method

There are a few other discussions on this topic, did you have a look?


Thanks for pointing those out, @junpenglao. I checked whether they are relevant, but I could not find a solution to my question in those topics.

Maybe I'm too dumb to understand the solution from those posts; I'd appreciate it if you could help me understand why this error occurs.

The NaN is returned when calling the step_function during the approximation. PyMC3 then throws this error after checking that the returned loss (`e`) is NaN.

I tried using a smaller learning rate and checked the initialization as well.

Do you have any other suggestions @junpenglao ?


Usually, if you are seeing the NaN problem in the first iteration, there is a problem with the approximation set-up that makes the score function (the target being optimized) invalid.
So, I would:

  1. check that the original model is set up correctly, see e.g. here.
  2. print the test value of the deterministic node to make sure the value is valid, e.g. by doing p.tag.test_value in your case.
  3. make sure the starting value of the approximation is valid:
point = apprx.groups[0].bij.rmap(apprx.params[0].eval())
point  # check that there is no NaN or Inf
for var in logistic_model.free_RVs:
    print(, var.logp(point))
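(Step 3's NaN/Inf check can also be sketched without PyMC3 at all; assuming `point` is a plain dict mapping variable names to values, which is what `bij.rmap` returns, a hypothetical helper `find_bad_values` might look like this:)

```python
import math

def find_bad_values(point):
    """Return the names in `point` whose values contain NaN or Inf.
    `point` maps names to floats or lists of floats (hypothetical helper)."""
    bad = []
    for name, value in point.items():
        values = value if isinstance(value, (list, tuple)) else [value]
        if any(not math.isfinite(float(v)) for v in values):
            bad.append(name)
    return bad

# A starting point with one broken entry:
point = {'alpha': 0.1, 'beta': [0.0, float('inf')]}
print(find_bad_values(point))  # ['beta']
```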

If all the steps above pass, there is a problem with setting up the approximation score function, which is not that easy to diagnose. I would put everything in a Jupyter notebook and trace the error in %debug mode.


I found out what causes the error.

As it turns out, it had nothing to do with the approximation set-up. I had not normalized (scaled between 0 and 1) the datasets I used for training. When given values on such different scales, the step_function produces NaN.
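(For completeness, the rescaling described above can be done in plain NumPy; a sketch with made-up numbers, min-max scaling each feature column to [0, 1]:)

```python
import numpy as np

# Toy design matrix with features on very different scales (made-up data).
X = np.array([[1.0, 5000.0],
              [2.0, 7000.0],
              [3.0, 9000.0]])

# Min-max scale each column to [0, 1] so that mu = alpha +
# stays in a range where the sigmoid does not saturate.
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(X_scaled.min(axis=0))  # [0. 0.]
print(X_scaled.max(axis=0))  # [1. 1.]
```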

Then I investigated a little more. It seems the error occurs only when values are large in magnitude (below -100 or above 100). I also changed the priors and the likelihood, and found that the error persists only when the Bernoulli distribution is used.
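(This is consistent with float64 overflow: exp(x) becomes inf once x exceeds roughly 709, so the hand-rolled exp(x)/(1+exp(x)) evaluates inf/inf = NaN. A numerically stable formulation, sketched by the editor rather than taken from the thread, only ever exponentiates non-positive arguments; in Theano/PyMC3 one could similarly use tt.nnet.sigmoid instead of the hand-rolled invlogit:)

```python
import numpy as np

def invlogit_naive(x):
    # Overflows for large positive x: exp(1000) = inf, giving inf/inf = NaN.
    with np.errstate(over="ignore", invalid="ignore"):
        return np.exp(x) / (1 + np.exp(x))

def invlogit_stable(x):
    # Exponentiate only non-positive arguments, so exp() never overflows.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    out[~pos] = np.exp(x[~pos]) / (1.0 + np.exp(x[~pos]))
    return out

print(invlogit_naive(1000.0))                    # nan
print(invlogit_stable([1000.0, -1000.0, 0.0]))   # 1.0, 0.0, 0.5
```

Note that even the stable version still returns exactly 0 or 1 at extreme inputs, which makes the Bernoulli log-likelihood -inf, so rescaling the inputs (as above) remains the real fix.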