I am trying to build a mixture model that contains TruncatedNormal
components, but I'm having trouble optimizing and sampling it. I evaluated the model logp
at a few test values and traced some oddities back to the fact that the TruncatedNormal
log-density evaluates to -inf (i.e., the density is zero) at every input value I pass in. For example:
import numpy as np
import pymc as pm

with pm.Model() as model:
    mu = pm.Uniform("mu", -5, 2)
    ln_std = pm.Uniform("ln_std", -8, 1)
    std = pm.Deterministic("std", pm.math.exp(ln_std))

    # Same location/scale for both, but dist1 is truncated at zero:
    dist1 = pm.TruncatedNormal.dist(mu=mu, sigma=std, lower=0, upper=None)
    dist2 = pm.Normal.dist(mu=mu, sigma=std)

    # Evaluate both log-densities on a grid of test values:
    x_grid = np.linspace(-5, 10, 1024)
    dist1_logp = pm.Deterministic("dist1", pm.logp(dist1, x_grid))
    dist2_logp = pm.Deterministic("dist2", pm.logp(dist2, x_grid))

    func1 = model.compile_fn(dist1_logp, inputs=[mu, std])
    func2 = model.compile_fn(dist2_logp, inputs=[mu, std])

    init_p = {"mu": 1, "std": 0.1}
    print(func1(init_p))
    print(func2(init_p))
Output:
[-inf -inf -inf ... -inf -inf -inf]
[-1798.61635344 -1789.8294493 -1781.06404481 ... -4022.26639085
-4035.43062232 -4048.61635344]
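For reference, here's what I'd expect the truncated log-density to look like, computed independently with scipy.stats.truncnorm (just a quick sketch using the same parameter values as init_p above, not PyMC):

from scipy import stats

mu, std = 1.0, 0.1
x_grid = np.linspace(-5, 10, 1024)

# scipy's truncnorm takes the truncation bounds in standardized units:
a = (0 - mu) / std   # lower bound at 0
b = np.inf           # no upper bound
expected = stats.truncnorm.logpdf(x_grid, a, b, loc=mu, scale=std)
print(expected)  # -inf below the truncation point, finite above it

That gives -inf only for grid values below the truncation point and finite values above it, so I'd expect the PyMC result for dist1 to look similar rather than being -inf everywhere.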
I think I’m misunderstanding something here - any ideas? Or is this a bug?
Thanks for reading!