Is Beta(1,1) better behaved for NUTS than Uniform(0,1)?

I’ve read here that using truncated distributions is generally not good for NUTS, since it can run into divergences at the boundaries. If I have MvNormal mean parameters that I want to fit, and which have to lie between 0 and 1, would using Beta(1,1) as my uniform prior over (0, 1) be better behaved than Uniform(0, 1) directly? In practice the true values of my mean parameters in mock tests are not usually near the boundaries 0 and 1, and both Beta and Uniform converge to the correct truth, but I get some divergences. I have just started runs raising target_accept from the default 0.8 to 0.9 to encourage smaller adapted step sizes and more thorough exploration.
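Roughly what I'm doing, as a minimal mock-up (the dimensions, covariance, and variable names here are placeholders, not my real model):

```python
import numpy as np
import pymc as pm

# Mock data: draws from an MvNormal whose mean lies inside (0, 1)
rng = np.random.default_rng(42)
true_mu = np.array([0.3, 0.7])
cov = np.array([[0.05, 0.01], [0.01, 0.05]])
data = rng.multivariate_normal(true_mu, cov, size=200)

with pm.Model() as model:
    # Either prior defines the same flat density on (0, 1)
    mu = pm.Uniform("mu", lower=0.0, upper=1.0, shape=2)
    # mu = pm.Beta("mu", alpha=1.0, beta=1.0, shape=2)
    obs = pm.MvNormal("obs", mu=mu, cov=cov, observed=data)
    # Raising target_accept above the 0.8 default shrinks the adapted step size
    idata = pm.sample(target_accept=0.9)
```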

Uniform(0, 1) and Beta(1, 1) are exactly the same thing for NUTS. The boundary problem is not an issue, because we automatically transform the variables so that NUTS makes its proposals on an unconstrained space (unless you disabled the transforms manually).
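You can see this on the model itself: both priors get an automatic transform to the real line, and NUTS only ever sees the transformed value variables. A quick check (the exact variable-name suffixes depend on the PyMC version):

```python
import pymc as pm

with pm.Model() as m:
    x = pm.Uniform("x", lower=0.0, upper=1.0)
    y = pm.Beta("y", alpha=1.0, beta=1.0)

# NUTS never sees x or y directly; it works on automatically transformed
# value variables that live on the whole real line. The printed suffix
# (_interval__ vs _logodds__) may vary across PyMC versions.
print(m.value_vars)
```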

Thanks @ricardoV94! Is there a pedagogical reference or explanation for how you transform the variables to an unconstrained space, and what that actually means?

Maybe the Stan manual is a good guide; see the chapter on constraint transforms in the Stan Reference Manual.

Everything they say applies to the PyMC NUTS sampler.
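In a nutshell, for a parameter constrained to (0, 1) the transform is the logit: NUTS proposes y = logit(x) on the whole real line, and the log-density picks up a Jacobian correction log(x) + log(1 - x). A small numerical sketch of that idea (not PyMC's internal code):

```python
import numpy as np
from scipy.special import expit  # sigmoid, the inverse of the logit transform

def unconstrained_logp(y):
    """Log-density of Uniform(0, 1) after mapping x = sigmoid(y) to all of R.

    logp(x) is 0 on (0, 1); the change of variables adds the log-Jacobian
    log|dx/dy| = log(x) + log(1 - x).
    """
    x = expit(y)
    return np.log(x) + np.log1p(-x)

# The transformed density is smooth on the whole real line, so NUTS never
# hits a hard boundary; it can be evaluated far into the tails:
for y in (-10.0, 0.0, 10.0):
    print(y, unconstrained_logp(y))
```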
