Bad r-hat on intercept only

Sure, here is a version that samples 8,000 groups (out of 156,000) and runs the model with PyMC: Google Colab. I use that to test different settings, since the full data set is rather slow to run.
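
The subsampling is roughly along these lines (just a sketch; `df` and the `group` column name are placeholders for the actual data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Draw 8,000 of the ~156,000 group labels and keep only their rows,
# so different sampler settings can be tried quickly before a full run.
all_groups = df["group"].unique()
sampled = rng.choice(all_groups, size=8_000, replace=False)
subset = df[df["group"].isin(sampled)].copy()
```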

I initially got the values for sigma from Bambi; I don't know how Bambi derives suitable priors. One thing I do know is that Bambi uses a non-centered parametrisation, while my PyMC code uses a centered parametrisation (I think centered should be fine since there is rather a lot of data).
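
For reference, this is a minimal sketch of the two parametrisations for a varying-intercept model. The variable names (`group_idx`, `y`, `n_groups`) and the prior scales are placeholders, not the values from my actual model:

```python
import numpy as np
import pymc as pm

# Placeholder data: a group index per observation and the response.
n_groups = 8_000
group_idx = np.random.randint(0, n_groups, size=50_000)
y = np.random.normal(size=50_000)

# Centered parametrisation: group intercepts are drawn directly
# around the population mean.
with pm.Model() as centered:
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma_group = pm.HalfNormal("sigma_group", 1.0)
    intercept = pm.Normal("intercept", mu=mu, sigma=sigma_group, shape=n_groups)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=intercept[group_idx], sigma=sigma, observed=y)

# Non-centered parametrisation (what Bambi uses): standard-normal
# offsets are scaled and shifted, which usually samples better when
# individual groups have little data.
with pm.Model() as non_centered:
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma_group = pm.HalfNormal("sigma_group", 1.0)
    offset = pm.Normal("offset", 0.0, 1.0, shape=n_groups)
    intercept = pm.Deterministic("intercept", mu + offset * sigma_group)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=intercept[group_idx], sigma=sigma, observed=y)
```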