A few things:
- I posted too soon (sorry!) because when I increased the `tune` and `target_accept` parameters, the sampling did converge without divergences. (I've put a sketch of the sampler call below the list.)
- The mismatch between the number of latent variables in the model and the number of true latent variables is a problem both when the model has too many and when it has too few. I believe this is an active area of research, but I'm happy to have input.
- When the model has too few latent variables, you create multimodality in the posterior, with modes where the model variables cover different subsets of the true latent variables, plus more modes at values that average between true latent variables.
- When the model has too many latent variables, you get a different kind of multimodality, where multiple model latent variables collapse onto a single true latent variable.
- Since all of these issues involve multimodality in the posterior, I wonder whether there has been work on covering different modes of the posterior with different chains (rough sketch of what I mean below).
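For reference, this is roughly the sampler call that got rid of the divergences for me, assuming PyMC; the model here is just a placeholder for my real one, and the exact `tune` / `target_accept` values are simply what happened to work:

```python
import pymc as pm

with pm.Model() as model:
    # placeholder latent-variable model, standing in for the real one
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    y = pm.Normal("y", mu=mu, sigma=1.0, observed=[0.1, -0.3, 0.5])

    # longer warm-up and a higher acceptance target removed the divergences
    idata = pm.sample(tune=4000, target_accept=0.95)
```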
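And to make the last question concrete, this is the kind of thing I mean by assigning chains to different modes: giving each chain its own starting point near a different suspected mode. A rough sketch, assuming a recent PyMC where `pm.sample` accepts an `initvals` list with one dict per chain; the data and starting values are made up:

```python
import pymc as pm

with pm.Model() as model:
    # placeholder model; in the real problem the posterior over mu is multimodal
    mu = pm.Normal("mu", mu=0.0, sigma=5.0)
    y = pm.Normal("y", mu=mu, sigma=1.0, observed=[-2.0, -1.9, 2.0, 2.1])

    # one initvals dict per chain, so each chain starts near a different suspected mode
    idata = pm.sample(chains=2, initvals=[{"mu": -2.0}, {"mu": 2.0}])
```

Of course this only seeds the chains near different modes; it doesn't say how to weight the modes against each other afterwards, which is part of what I'm asking about.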
Thanks in any case!
Opher