General approach to convergence issues

Hi everyone,

I guess the most abstract way to formulate my question is: are there general ways to track down the causes of convergence issues in models with many parameters?

For a couple of weeks now I’ve been trying to fit a model that consists of multiple Gaussian processes, and I’m becoming a bit desperate. The model is quite complicated, but an even more complicated version has been fitted using Stan, and the results are reported in this study. I’m basically trying to replicate (aspects of) the study with PyMC. I don’t have access to the original code, but I managed to implement a first simplified version in PyMC relatively straightforwardly.

Unfortunately, the model doesn’t converge well. I tried multiple reparameterizations, to the point where every random variable I specify is standard normal. I started with the direct GP implementation but, for performance reasons, switched to the HSGP and finally to linear interpolation. In the end, several hundred parameters still had large R-hat values, and with 2k samples the effective sample size was below ten for many parameters and around 400 for most.
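For readers following along, the “every random variable is standard normal” reparameterization mentioned above is the non-centered parameterization. A minimal NumPy sketch of the idea (all names here are illustrative, not from the original model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Centered: sample x ~ N(mu, sigma) directly. In hierarchical models this
# often produces funnel-shaped geometry that NUTS struggles with.
mu, sigma = 3.0, 0.5

# Non-centered: sample a standard-normal "raw" variable and shift/scale it
# deterministically, so the sampler only ever sees unit-scale geometry.
z = rng.normal(0.0, 1.0, size=100_000)
x_noncentered = mu + sigma * z

# Both parameterizations describe the same distribution of x.
print(x_noncentered.mean(), x_noncentered.std())
```

In PyMC this corresponds to defining the standard-normal variable explicitly and wrapping the shift/scale in a `Deterministic` (or using the built-in non-centered options where available).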

Since nothing helped, I’ve now resorted to blindly changing the NUTS sampler parameters (e.g. raising target_accept, using bigger values for max_tree_depth, etc.), but this became frustrating rather quickly…
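Rather than tuning sampler knobs blindly, it helps to quantify exactly which parameters fail to mix. The R-hat statistic mentioned above compares between-chain and within-chain variance; here is a minimal sketch of split-R-hat in plain NumPy (ArviZ’s `az.rhat` does the rank-normalized version for you, so this is only to make the diagnostic concrete):

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat for one parameter.

    chains: array of shape (n_chains, n_draws). Each chain is split in
    half so that within-chain trends also inflate the statistic.
    """
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # Treat the two halves of each chain as separate chains.
    split = np.concatenate([chains[:, :half], chains[:, half:2 * half]])
    m, n = split.shape
    chain_means = split.mean(axis=1)
    b = n * chain_means.var(ddof=1)          # between-chain variance
    w = split.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * w + b / n
    return np.sqrt(var_plus / w)

rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))             # well-mixed chains
bad = good + np.arange(4)[:, None] * 2.0      # chains stuck at different levels
print(split_rhat(good))  # close to 1
print(split_rhat(bad))   # far above the usual 1.01 threshold
```

Sorting all parameters by R-hat (or by ESS) and looking at which model component the worst offenders belong to is often the fastest way to localize the problem.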

I can share my model code and provide more details if necessary, or even open a separate question asking for advice on this specific model, but I’d rather learn how to approach issues like this at an abstract level, to deepen my understanding of PyMC and Hamiltonian Monte Carlo in general. Over the last few weeks I’ve read many articles on bad geometries, identifiability, etc., but I don’t see how to narrow down the problematic aspects of my model, especially since it has apparently worked quite well in Stan…

I’m thankful for any advice, further articles on this matter etc.

Cheers,
Arthus

One programming-inspired tip is to strip your model down to its most basic building blocks (an intercept-only model will always sample fine) and incrementally add all of your model’s features (predictors, hierarchical structure, multivariate priors, informative priors, etc.), one at a time.

This will allow you to pinpoint when things start to break, and that can sometimes be enough to figure out the problem/solution.


Thanks, I already did that to some extent (using synthetic data and modeling linear observations, excluding input errors, etc.), but unfortunately the complications come in rather big steps… I guess I’ll need to resort to a more complex data-generation process where I can directly access every component of the model and see when things break… I’ll get back to you once I know whether this helps.
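For the “data-generation process where I can access every component” idea, one common pattern is to simulate each GP component separately with known hyperparameters and keep the ground-truth draws around for comparison against the posterior. A minimal sketch (lengthscales, amplitudes, and the component names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def gp_draw(x, lengthscale, amplitude, jitter=1e-6):
    """Draw one function from a zero-mean GP with a squared-exponential kernel."""
    d = x[:, None] - x[None, :]
    cov = amplitude**2 * np.exp(-0.5 * (d / lengthscale) ** 2)
    cov += jitter * np.eye(len(x))  # numerical stabilization for Cholesky
    return np.linalg.cholesky(cov) @ rng.normal(size=len(x))

x = np.linspace(0.0, 10.0, 200)

# Known ground-truth components: keep each one so the fitted posterior
# for that component can be checked directly against it.
trend = gp_draw(x, lengthscale=5.0, amplitude=2.0)
wiggle = gp_draw(x, lengthscale=0.5, amplitude=0.5)
noise_sd = 0.1
y = trend + wiggle + rng.normal(0.0, noise_sd, size=len(x))
```

Fitting the model to `y` and checking which component’s posterior drifts away from its ground truth (or which one breaks mixing when added) tends to isolate the problematic block much faster than diagnosing the full model at once.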