Bounded variable logp

Oh right, I get what you mean now. That’s due to the automatic transformation of bounded variables in PyMC3:

You can play around with it by turning off the transform kwarg (using a Uniform here as a demonstration):

import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm

with pm.Model() as model:
#     alpha = pm.Bound(pm.Normal, lower=0, upper=10)('alpha', mu=0., sd=100., transform=None)
    alpha = pm.Uniform('alpha', 0, 10, transform=None)

# model.logp evaluates the model's joint log-probability at a point dict
logp = model.logp
x_ = np.linspace(0., 10., 1000)
plt.plot(x_, np.exp([logp(dict(alpha=x)) for x in x_]));

[plot: the uniform density over (0, 10)]
So no surprise here.
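As a quick sanity check (a sketch using the model above, not from the original post): a flat Uniform(0, 10) has density 1 / (10 - 0) = 0.1 everywhere inside the bounds, which is exactly the height of the line in the plot.

# sketch: with transform=None, exp(logp) should be constant at 1/10 inside (0, 10)
np.exp(logp(dict(alpha=5.)))  # ~0.1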

But that’s not what the sampler is “seeing”, as we prefer to operate in the unbounded space, which makes sampling and approximation much easier:

with pm.Model() as model:
#     alpha = pm.Bound(pm.Normal, lower=0, upper=10)('alpha', mu=0., sd=100.)
    alpha = pm.Uniform('alpha', 0, 10)

logp = model.logp
x_ = np.linspace(1e-10, 10.-1e-10, 1000)  # avoid -inf and inf at the bounds
# map the bounded values into the unbounded (transformed) space
x_2 = alpha.transformation.forward_val(x_)
# the free variable the sampler actually sees is named 'alpha_interval__'
plt.plot(x_2, np.exp([logp(dict(alpha_interval__=x)) for x in x_2]));

[plot: uniform in transformed space]
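For reference (this is my assumption about the default, not something shown above): the interval transform for a variable bounded on (a, b) is the scaled logit y = log((x - a) / (b - x)), which you can check against forward_val:

# sketch, assuming the default Interval transform is y = log((x - a) / (b - x)) with a=0, b=10
np.allclose(x_2, np.log((x_ - 0.) / (10. - x_)))  # expected: True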

Unfortunately, turning the above figure back into a uniform is not that easy in PyMC3, as we don’t have the forward Jacobian implemented. You can give it a try as an exercise :wink:
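If you want to attempt it, here is a minimal sketch of the idea, writing the forward Jacobian of the interval transform out by hand (the formula log|dy/dx| = log(b - a) - log(x - a) - log(b - x) is derived from the logit transform above, it is not an existing PyMC3 API):

# sketch: undo the change of variables by hand for the Interval transform on (0, 10)
a, b = 0., 10.
logp_y = np.array([logp(dict(alpha_interval__=x)) for x in x_2])  # logp in the unbounded space
log_fwd_jac = np.log(b - a) - np.log(x_ - a) - np.log(b - x_)     # log |dy/dx| at the original x_
plt.plot(x_, np.exp(logp_y + log_fwd_jac));                       # flat again: the Uniform(0, 10) density

Adding the log-Jacobian of the forward map is just the change-of-variables correction applied in the opposite direction to what PyMC3 does internally.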