Gaining intuitions for why sampling fails

Hi there!
Thanks for this thread, I’ll follow it closely because I’ve had a similar experience :laughing:

Just from my point of view, what has worked best in practice is prior predictive checks and regularizing priors – thinking hard about your priors and seeing their consequences in graphs. This can really be the difference between a model that samples and one that crashes (especially with GLMs, where the link function distorts the relationship between the parameter space and the outcome space).
In your case though, the priors for sigma_alpha and sigma_beta look quite good. I’d maybe try reducing lambda to 1 instead of 5 – but maybe you already tried that? In any case, prior predictive checks could be valuable here; here’s a rough sketch of what I mean:
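This is just a minimal sketch, assuming a logistic GLM – the shapes of the priors, the variable names, and the data are placeholders, not your actual model:

```python
import numpy as np
import matplotlib.pyplot as plt
import pymc as pm

rng = np.random.default_rng(42)
x = rng.normal(size=100)  # stand-in for your (standardized) predictor

with pm.Model():
    # Placeholder priors shaped like the ones in the thread;
    # lam=1 is the tighter alternative worth trying against lam=5.
    sigma_beta = pm.Exponential("sigma_beta", lam=1.0)
    alpha = pm.Normal("alpha", 0.0, 1.0)
    beta = pm.Normal("beta", 0.0, sigma_beta)
    # Record the outcome-scale quantity: the inverse-logit link is what
    # distorts the mapping from parameter space to outcome space.
    p = pm.Deterministic("p", pm.math.invlogit(alpha + beta * x))
    prior = pm.sample_prior_predictive(500, random_seed=42)

# If this histogram piles up at 0 and 1, the priors imply wilder
# outcomes than you intended, before any data is involved.
plt.hist(prior.prior["p"].values.ravel(), bins=50)
plt.xlabel("implied success probability")
plt.show()
```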

Another very good trick is to standardize the predictors (`df['za']` here). This often simplifies the posterior geometry and really helps sampling as a result.
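In case it helps, a quick sketch of the z-scoring I mean – assuming `a` is the raw column behind `df['za']` (the names and values are just illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1.2, 3.4, 2.2, 5.1]})  # toy stand-in for your data
# z-score: mean 0, sd 1 -- intercept and slope priors get much easier to set
df["za"] = (df["a"] - df["a"].mean()) / df["a"].std()
```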
A final note would be to think hard about how your predictors interact (multicollinearity, confounding, etc.). I don’t have much actionable advice here, as I’m still trying to make sense of how this affects sampling myself.
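For what it’s worth, one cheap first look is the pairwise correlation matrix of the predictors – strong correlations hint at the kind of posterior ridges that slow the sampler down. A sketch with made-up column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Fake standardized predictors; replace with your own columns.
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["za", "zb", "zc"])
print(df[["za", "zb", "zc"]].corr().round(2))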

I know what I’m saying here is not revolutionary, but hopefully it helps you or others :slight_smile:

All that being said, I also think some problems can’t be solved quickly: often this stuff is hard and you have to try and fail in order to understand and, in the end, gain some intuition. It can be frustrating and it definitely takes time! But I think it’s worth it – or at least I hope so :stuck_out_tongue_winking_eye:
