Bounded variable logp

My model uses several bounded uniform distributions, which I’m representing with bounded normal distributions with large, uninformative standard deviations.

When I was debugging my model, I tried to verify that I was implementing these pseudo-uniform variables correctly by checking their logp. However, the logp of these variables falls off asymptotically as it approaches the variable bounds.

I did some tests with simple models and they work just fine, so I must not understand how the bounded variables are being handled. Why does their logp fall off so drastically near their bounds?

import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt

Y = np.array([0])

model = pm.Model()
with model:
    alpha = pm.Bound(pm.Normal, lower=0, upper=10)('alpha', mu=0., sd=100.)
    logp = pm.Deterministic('logp', model.logpt)  # record the model logp at each sample
    Y_obs = pm.Normal('Y_obs', mu=alpha, sd=1, observed=Y)
    trace = pm.sample()

plt.xlabel('alpha')
plt.ylabel('logp')
plt.scatter([_['alpha'] for _ in trace], [_['logp'] for _ in trace]);

[figure: scatter of logp against alpha samples, dropping off steeply near the bounds at 0 and 10]

What do you mean? Do you mean far fewer samples at the bound?

It seems like the sampler is handling the bounds fine - I get plenty of samples near the bounds - but I don’t understand why the logp falls off close to the variable bounds (shown in the attached image).

I don't quite get what you mean by "falls off".

The logp becomes more negative close to the bounds. I would expect it to be flat through the entire interval.

Oh right, I get what you mean now. That's due to the auto-transformation of bounded variables in PyMC3.
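Concretely, for a variable bounded on (a, b), PyMC3 samples the unbounded y = log((x - a) / (b - x)) instead of x itself, and the logp in that space picks up the log-Jacobian of the transform, which tends to -inf as x approaches either bound. Here is a minimal numpy sketch of just that Jacobian term (the expression is derived by hand from the interval transform, not pulled from PyMC3 internals):

import numpy as np
import matplotlib.pyplot as plt

a, b = 0., 10.
x = np.linspace(1e-6, b - 1e-6, 1000)
# log|dx/dy| for the interval transform y = log((x - a) / (b - x)),
# i.e. the term the transformed logp adds on top of the flat density
log_jac = np.log(x - a) + np.log(b - x) - np.log(b - a)
plt.plot(x, log_jac)
plt.xlabel('alpha')
plt.ylabel('log-Jacobian term');

This term is what makes the recorded logp dive near 0 and 10 even though the density itself is flat.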

You can play around with it by turning off the transform kwarg (using a uniform here as a demonstration):

with pm.Model() as model:
#     alpha = pm.Bound(pm.Normal, lower=0, upper=10)('alpha', mu=0., sd=100., transform=None)
    alpha = pm.Uniform('alpha', 0, 10, transform=None)
logp = model.logp
x_ = np.linspace(0., 10., 1000)
plt.plot(x_, np.exp([logp(dict(alpha=x)) for x in x_]));

[figure: flat density of 0.1 across (0, 10) for the untransformed uniform]

So no surprise here.

But that's not what the sampler is "seeing", as we prefer to operate in the unbounded space, which makes sampling and approximation much easier:

with pm.Model() as model:
#     alpha = pm.Bound(pm.Normal, lower=0, upper=10)('alpha', mu=0., sd=100.)
    alpha = pm.Uniform('alpha', 0, 10)
logp = model.logp
x_ = np.linspace(1e-10, 10.-1e-10, 1000) # avoid -inf and inf at the bound
x_2 = alpha.transformation.forward_val(x_)
plt.plot(x_2, np.exp([logp(dict(alpha_interval__=x)) for x in x_2]));

[figure: the uniform's density plotted in the transformed (unbounded) space]

Unfortunately, turning the above figure back into a uniform is not that easy in PyMC3, as we don't have the forward Jacobian implemented. You can give it a try as an exercise :wink:
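If you want to attempt it, one route is to write the forward log-Jacobian out by hand: for the interval transform, log|dy/dx| = log(b - a) - log(x - a) - log(b - x), and adding it back to the transformed logp recovers the flat density. A rough sketch under that assumption (the Jacobian expression here is hand-derived, not a PyMC3 API):

import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt

a, b = 0., 10.
with pm.Model() as model:
    alpha = pm.Uniform('alpha', a, b)

logp = model.logp
x_ = np.linspace(1e-10, b - 1e-10, 1000)
y_ = np.log(x_ - a) - np.log(b - x_)  # forward interval transform, by hand
log_jac = np.log(b - a) - np.log(x_ - a) - np.log(b - x_)  # log|dy/dx|
# the transformed logp is log p(x) - log|dy/dx|, so adding log|dy/dx|
# back should give the flat density 1/(b - a) across the interval
dens = np.exp(np.array([logp(dict(alpha_interval__=y)) for y in y_]) + log_jac)
plt.plot(x_, dens);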

Thanks for the answer, @junpenglao. This is very helpful.