Bound variables misbehaving

Bounding variables works as advertised, but it's a bit of a hack. You're constructing a normally distributed variable, "telling" the sampler that the random variable could take on a whole range of values…and then declaring a whole range of those values to be out of bounds. Samplers like smooth surfaces (e.g., no sharp corners), and your bounds place a giant wall at zero. That's likely the source of the divergences (though it would take a bit of digging to confirm). I'd suggest using something other than a pm.Normal(). A Gamma or Weibull might be of use because their support is x \in [0, \infty), naturally constraining values to be non-negative.
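As a rough sketch (the shape/rate values here are just placeholders), the swap would be something like `pm.Gamma("x", alpha=2, beta=1)` in place of the bounded normal. To see why this helps the geometry, here's a plain-Python check that the Gamma density is exactly zero on negative values and rises smoothly from zero, rather than being a normal chopped off by a hard wall:

```python
import math

def gamma_pdf(x, alpha, beta):
    # Gamma density with shape alpha and rate beta; support is [0, inf)
    if x < 0:
        return 0.0
    return (beta ** alpha) * (x ** (alpha - 1)) * math.exp(-beta * x) / math.gamma(alpha)

# Zero density on negatives -- no truncation needed
print(gamma_pdf(-1.0, 2.0, 1.0))  # -> 0.0

# For alpha > 1 the density also rises smoothly from 0 at x = 0,
# so there is no discontinuous "wall" for the sampler to run into
print(gamma_pdf(0.0, 2.0, 1.0))   # -> 0.0
print(round(gamma_pdf(1.0, 2.0, 1.0), 4))  # -> 0.3679 (i.e., e^-1)
```

Note that with alpha <= 1 the Gamma density is nonzero (or infinite) at the origin, so if you expect mass near zero you may want to pick the shape parameter accordingly.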
