Glad I could help!
The PPCs do look quite bad. It looks like your model only predicts type 1 or type 2, and even then it can't really decide between the two. It's hard to be specific without seeing the model, but I think what I'd do is a mix of:
- Prior predictive plots on the outcome scale (what you did above, but with the priors alone, before any data). This should help you see whether something in your parametrization impedes the model from updating when it gets data; see the first sketch after this list.
- Test with simpler models first: intercept-only, one predictor, two predictors, etc., and see whether the predictions change. Maybe you just have too many predictors and this is confounding inference? Model comparison should help you with that (second sketch below).
- Thinking generatively about your model to select predictors: how can the data happen? What is the process at play, and which predictors are potentially relevant to this process? This is related to confounding.
- If you think class imbalance is a problem, balance your dataset and sample from your model with it. My bet is that it's not, since your Multinomial likelihood encodes the number of trials in each category, so the model takes this into account and will be more uncertain for type 3 than for type 1, for instance. Plus, if class imbalance were a big problem, there should be a big bias in favor of type 1 in your last plot.
- After all these checks, try extending to a hierarchical model to pool information across categories and shrink parameters as a result (third sketch below).
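
Here's a minimal sketch of the prior predictive check from the first bullet, assuming a softmax-regression model with a Multinomial likelihood. The names `X`, `counts` and `n_trials` are stand-ins for your own data, and the simulated values are just there to make the snippet self-contained:

```python
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(42)
n_obs, n_pred, n_cat = 50, 3, 4
X = rng.normal(size=(n_obs, n_pred))          # stand-in predictors
n_trials = rng.integers(20, 100, size=n_obs)  # trials per observation
counts = np.vstack([rng.multinomial(n, [0.25] * n_cat) for n in n_trials])

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0.0, 1.5, shape=n_cat)
    beta = pm.Normal("beta", 0.0, 1.0, shape=(n_pred, n_cat))
    p = pm.math.softmax(intercept + X @ beta, axis=-1)
    pm.Multinomial("y", n=n_trials, p=p, observed=counts)

    # Draw from the priors only -- the observed data are never used here
    idata = pm.sample_prior_predictive(random_seed=42)

# Degenerate or wildly concentrated draws here point to a parametrization issue
az.plot_ppc(idata, group="prior")
```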
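And a sketch of the second bullet: fitting increasingly complex models and comparing them with LOO. The `fit` helper is hypothetical and reuses `X`, `counts`, `n_trials` and `n_cat` from the snippet above:

```python
import pymc as pm
import arviz as az

def fit(predictors):
    """Fit a softmax-regression model using only the columns of X
    listed in `predictors` (empty list = intercept-only)."""
    with pm.Model():
        intercept = pm.Normal("intercept", 0.0, 1.5, shape=n_cat)
        logits = intercept
        if predictors:
            beta = pm.Normal("beta", 0.0, 1.0, shape=(len(predictors), n_cat))
            logits = logits + X[:, predictors] @ beta
        p = pm.math.softmax(logits, axis=-1)
        pm.Multinomial("y", n=n_trials, p=p, observed=counts)
        # log_likelihood is needed for LOO-based comparison
        return pm.sample(idata_kwargs={"log_likelihood": True}, random_seed=42)

models = {
    "intercept_only": fit([]),
    "one_predictor": fit([0]),
    "two_predictors": fit([0, 1]),
}
print(az.compare(models))  # ranks models by expected log predictive density
```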
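Finally, a sketch of the hierarchical extension from the last bullet, with partial pooling of the slopes across categories. The hyperprior names (`mu_beta`, `sigma_beta`) are illustrative, not from your model, and I'm using a non-centered parametrization since it usually samples better:

```python
import pymc as pm

with pm.Model() as hierarchical:
    # Hyperpriors shared across outcome categories
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0, shape=n_pred)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0)

    intercept = pm.Normal("intercept", 0.0, 1.5, shape=n_cat)
    # Non-centered: per-category slopes shrink toward mu_beta
    z = pm.Normal("z", 0.0, 1.0, shape=(n_pred, n_cat))
    beta = pm.Deterministic("beta", mu_beta[:, None] + sigma_beta * z)

    p = pm.math.softmax(intercept + X @ beta, axis=-1)
    pm.Multinomial("y", n=n_trials, p=p, observed=counts)

    idata = pm.sample(random_seed=42)
```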
Hope this helps 