Bound variables misbehaving

I am trying to make a callable function where a user can define the parameters of a distribution. I want to prevent any values from going below zero, since there is a tt.log operation inside this function.

I was therefore going to use pm.Bound:

bound_N = pm.Bound(pm.Normal, lower=0)

There are two issues:

  1. The simple bound_N above gives divergences for any parameters, e.g.
    bound_N('N', mu=3, sd=2), unless target_accept is raised (to 0.9, for example).
  2. The result of this bounded normal is completely off if the mean (mu) is set to a large number. For example:

bound_N('N', mu=1e5, sd=20) gives the graph below (mu=1e4 seems to work).
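
For completeness, here is a minimal, self-contained version of what I'm running (the model context and pm.sample call are reconstructed; only the bound_N lines are shown above):

    import pymc3 as pm

    with pm.Model() as model:
        # Truncate the Normal at zero so a downstream tt.log never
        # sees a negative value.
        bound_N = pm.Bound(pm.Normal, lower=0)
        N = bound_N('N', mu=3, sd=2)
        # Diverges at the default target_accept (0.8); raising it,
        # e.g. to 0.9, suppresses the divergences.
        trace = pm.sample(target_accept=0.9)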

Any explanation for this?

Bounding variables works as advertised, but it is a bit of a hack. You’re constructing a normally-distributed variable, “telling” the sampler that the random variable could take on a whole range of values…and then declaring a whole range of those values to be out-of-bounds. Sampling likes smooth surfaces (e.g., no sharp corners), and your bound places a giant wall at zero. That’s likely the source of the divergences (though it would take a bit of digging to confirm this). I might suggest using something other than pm.Normal. Gamma or Weibull might be of use because their support is x \in [0, \infty), which naturally bounds values to be non-negative.
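
If you want to keep the mean/standard-deviation parameterization, here's a minimal sketch of the Gamma route (assuming PyMC3, which accepts mu and sd for pm.Gamma and converts them to alpha/beta internally):

    import pymc3 as pm

    with pm.Model() as model:
        # Gamma's support is (0, inf), so no explicit bound is needed
        # before taking a log; the geometry near zero stays smooth.
        N = pm.Gamma('N', mu=3, sd=2)
        trace = pm.sample()

Because there's no hard wall at zero, the sampler shouldn't need a raised target_accept just to avoid divergences.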
