I guess the most abstract way to formulate my question is: are there general ways to track down the causes of convergence issues in models with many parameters?
For a couple of weeks now I’ve been trying to fit a model that consists of multiple Gaussian processes, and I’m becoming a bit desperate. The model is quite complicated, but an even more complicated version has been fitted with Stan, and the results are reported in this study. I’m basically trying to replicate (aspects of) the study with PyMC. I don’t have access to the original code, but I managed to implement a first simplified version in PyMC relatively straightforwardly.
Unfortunately, the model doesn’t converge well. I tried multiple reparametrizations, to the point where every random variable I specify is standard normal. I started with the direct GP implementation but, for performance reasons, switched to the HSGP approximation and finally to linear interpolation. Even so, several hundred parameters still had large R-hat values, and with 2k samples the effective sample size was below ten for many parameters and around 400 for most.
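For context, by “every random variable standard normal” I mean the whitened (non-centered) parameterization, where a GP draw f ~ N(0, K) is expressed as f = L z, with L the Cholesky factor of K and z standard normal. A minimal numpy sketch of the idea (the lengthscale and jitter values are just placeholders, not the ones from my model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Squared-exponential kernel on a 1-D grid (placeholder lengthscale 0.2).
x = np.linspace(0.0, 1.0, 50)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2**2)
K += 1e-6 * np.eye(len(x))  # jitter for numerical stability

# Whitened parameterization: sample standard normal z, transform via Cholesky.
L = np.linalg.cholesky(K)
z = rng.standard_normal(len(x))  # the variable the sampler actually sees
f = L @ z                        # implied GP draw, f ~ N(0, K)
```

The sampler then only ever sees the isotropic z; all the correlation structure lives in the deterministic transform L.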
Since nothing helped, I have now resorted to blindly changing the NUTS sampler parameters (e.g. raising `target_accept`, using bigger values for `max_tree_depth`, etc.), but this became frustrating rather quickly…
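Concretely, the blind tuning looks roughly like this (a sketch with a trivial placeholder model standing in for the GPs; `target_accept` and `max_treedepth` are the actual `pm.NUTS` arguments, the values are just ones I’ve been trying):

```python
import pymc as pm

with pm.Model():
    # Placeholder for the real model; imagine the GP parts here.
    z = pm.Normal("z", 0.0, 1.0)

    step = pm.NUTS(
        target_accept=0.95,  # default 0.8; higher forces smaller step sizes
        max_treedepth=12,    # default 10; allows longer trajectories
    )
    idata = pm.sample(draws=2000, tune=2000, step=step, chains=4)
```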
I can share my model code and provide more details if necessary, or even open a separate question asking for advice on this specific model, but I’d rather learn how to approach issues like this at an abstract level, to deepen my understanding of PyMC and Hamiltonian Monte Carlo in general. Over the last weeks I have read many articles on bad posterior geometries, identifiability, etc., but I don’t see how to narrow down which aspects of my model are problematic, especially since it has apparently been working quite well in Stan…
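For what it’s worth, the diagnostic I keep staring at is split-R̂. To rank parameters by how badly they mix, I use a minimal numpy version of it (my own sketch following the standard split-chain formula, not the ArviZ implementation):

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat for one parameter; chains has shape (n_chains, n_draws)."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # Split each chain in half so slow trends show up as between-chain variance.
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = splits.shape
    within = splits.var(axis=1, ddof=1).mean()     # W: mean within-chain variance
    between = n * splits.mean(axis=1).var(ddof=1)  # B: scaled variance of chain means
    var_hat = (n - 1) / n * within + between / n
    return float(np.sqrt(var_hat / within))

rng = np.random.default_rng(1)

# Well-mixed chains give R-hat close to 1 ...
good = rng.standard_normal((4, 1000))

# ... while chains stuck in different regions give R-hat well above 1.1.
bad = good + np.array([[0.0], [0.0], [5.0], [5.0]])

print(split_rhat(good), split_rhat(bad))
```

Applying this per parameter and sorting at least tells me *which* parameters mix worst, though not yet *why*.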
I’m thankful for any advice, pointers to further articles on this matter, etc.