Recreating Normal Distribution using logp - sampling diverges

To get a sense of how logp and CustomDist work, I attempted to re-create the normal distribution.

In my mind, both models should produce the same results:


import numpy as np
import pymc as pm
import pytensor.tensor as pt

def ndichte(value, mu, sigma):
    # hand-written log-density of Normal(mu, sigma)
    res = -0.5 * pt.pow((value - mu) / sigma, 2) - pt.log(pt.sqrt(2.0 * np.pi)) - pt.log(sigma)
    return res

# model1: normal distribution re-created via CustomDist
with pm.Model() as model1:
    mu = pm.Normal("mu")
    sigma = pm.HalfNormal("sigma")
    pm.CustomDist("Y", mu, sigma, logp=ndichte)


# model2: the built-in Normal as a reference
with pm.Model() as model2:
    mu = pm.Normal("mu")
    sigma = pm.HalfNormal("sigma")
    pm.Normal("Y", mu, sigma, observed=obs)

I sampled both models on the same data obs, which is simply N(0, 1)-distributed.
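
For reference, obs can be generated along these lines (a minimal sketch; the seed and the sample size of 200 are assumptions, not from the original setup). Note that obs must exist before model2 is built, since it is referenced there:

rng = np.random.default_rng(42)                  # arbitrary seed
obs = rng.normal(loc=0.0, scale=1.0, size=200)   # N(0, 1) data; size is assumed

with model1:
    idata1 = pm.sample()
with model2:
    idata2 = pm.sample()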

Yet model1 produced divergences and a ‘wobbly’ trace, while model2 sampled without problems.

My questions:

1. What happens under the hood when running model1 that is different from model2?

2. What do I need to modify to make both models equivalent?

There’s no observed argument in model1; is that a typo? When I add it, everything runs fine.
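
For completeness, a sketch of model1 with the missing observed argument added, which makes it match model2:

with pm.Model() as model1:
    mu = pm.Normal("mu")
    sigma = pm.HalfNormal("sigma")
    # with observed, Y becomes a likelihood term instead of a free variable to sample
    pm.CustomDist("Y", mu, sigma, logp=ndichte, observed=obs)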

In general, it can be more difficult to sample from priors using MCMC than from posteriors. See this discussion for details.
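
If the goal really is to draw from the prior, forward sampling is usually preferable to MCMC. A minimal sketch using model2 (model1's CustomDist defines only a logp and no forward-sampling rule, so it would need a random or dist argument for this):

with model2:
    prior = pm.sample_prior_predictive()  # forward draws, no MCMC involved

print(prior.prior["mu"].mean())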
