Hey everyone,
I could use a bit of help from the community on a model convergence issue.
I’m working with a simple Negative Binomial (NB) regression. Everything runs smoothly with the pooled model, but when I switch to a hierarchical NB setup, convergence becomes unstable and R-hat values are consistently above 1.00.
I’ve done prior predictive checks and things seem okay, but my gut* tells me something might be off with how I’ve specified the hierarchical priors.
Does anyone see any red flags in my prior setup that could be messing with convergence?
Here’s a simplified model diagram and the relevant code. Happy to provide a full minimal working example with data if needed!
```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

with pm.Model(coords=coords) as model:
    # Set the data
    subdivision_idx = pm.Data("subdivision_idx", data["subdivision_idx"].values, dims="survey")
    obs = pm.Data("obs", data["n_items"].values, dims="survey")

    # Priors
    nu = pm.Normal("nu", np.log(150), 1.5)  # <- I suspect the error is here...
    kappa = pm.HalfNormal("kappa", 0.2)     # <- ...or here
    log_mu = pm.Normal("log_mu", nu, kappa, dims="group")
    mu = pm.Deterministic("mu", pt.exp(log_mu))

    # Likelihood
    observed = pm.NegativeBinomial(
        "litter",
        mu=mu[subdivision_idx],
        alpha=1,
        observed=obs,
    )
```
Looking at the ESS, `kappa` seems to be the problem here…
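Concretely, this is the diagnostics check I'm looking at (the stand-in posterior below is fake, just to show the call; in the real run `idata` is the InferenceData returned by `pm.sample()`):

```python
import numpy as np
import arviz as az

# Fake posterior (2 chains x 500 draws) standing in for pm.sample() output
rng = np.random.default_rng(0)
idata = az.from_dict(posterior={
    "nu": rng.normal(np.log(150), 0.1, size=(2, 500)),
    "kappa": np.abs(rng.normal(0, 0.2, size=(2, 500))),
})

# ess_bulk / ess_tail for kappa are the numbers that look suspiciously low
summary = az.summary(idata, kind="diagnostics")
print(summary[["ess_bulk", "ess_tail", "r_hat"]])
```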
Thanks in advance! Appreciate any ideas or insights.
(*) For context, I ran into a similar issue recently when using a hierarchical prior for the dispersion parameter `alpha`, and switching to modeling `1/alpha` instead helped a lot.