Is there a difference between a bounded normal and truncated normal?

Was reading through the section on bounded variables and it made me wonder if there is a difference between

x = pm.Bound(pm.Normal, lower=0.0)('x', mu=1.0, sigma=3.0)


x = pm.TruncatedNormal('x', mu=1.0, sigma=3.0, lower=0.0)

There does not seem to be. I guess pm.Bound can be applied more generally, to different distributions. It made me wonder if there is any advantage to using one over the other, such as performance.

I have the same question.

I think they should be roughly the same. Note that the pm.Bound() functionality has changed in v4:

norm = pm.Normal.dist(mu=1.0, sigma=3.0)
x = pm.Bound('x', norm, lower=0.0)

The truncated normal has an additional normalization term so that it is a proper pdf and integrates to 1; the bounded version does not.

This usually does not matter if the variable is unobserved and has no other unobserved ancestors, but it can matter otherwise. That’s why you can use TruncatedNormal as a likelihood but not Bound.
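To make the normalization term concrete, here is a minimal numerical sketch using scipy.stats rather than PyMC itself (the constant is the same either way). It reuses the mu, sigma, and lower bound from the example at the top of the thread: the "bounded" log-density is just the untruncated normal's logpdf, while the truncated normal subtracts log Z, where Z is the normal probability mass above the bound.

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, lower = 1.0, 3.0, 0.0
a = (lower - mu) / sigma      # standardized lower bound, as scipy expects
Z = 1.0 - norm.cdf(a)         # mass of N(mu, sigma) above the bound

x = 2.0
bounded_logp = norm.logpdf(x, mu, sigma)                   # no renormalization
truncated_logp = truncnorm.logpdf(x, a, np.inf, mu, sigma)

# The truncated density is the normal density divided by Z, so on the
# log scale it differs from the bounded one by exactly -log(Z):
#   truncated_logp == bounded_logp - log(Z)
```

Since Z < 1 whenever the bound actually cuts off mass, -log(Z) is positive: the truncated density is everywhere larger than the clipped one inside the support, which is what makes it integrate to 1. That additive constant is irrelevant when sampling the variable itself, but it matters as soon as Z depends on other unknown parameters, which is why only the truncated version works as a likelihood.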


Also for this reason PyMC (v4) does not allow you to do prior or posterior predictive with bounded variables. In general you should use Truncated and we will likely deprecate Bound soon.

There is one tiny advantage to Bound: when you don’t need the extra normalization term, you don’t have to compute it. But most users are not aware of when that is the case, so we prefer not to advertise it going forward.

Note that you can obtain the same effect as Bound by passing an Interval transform to continuous distributions:

from pymc.distributions.transforms import Interval

# These are equivalent
pm.Normal("x", transform=Interval(-1, None))
pm.Bound("y", pm.Normal.dist(), lower=-1)

Thanks for the explanation, very clear.
