Decreasing rhat for Black-Box Likelihood Model

I could try using more informative priors (e.g., a normal distribution with a not-so-small standard deviation, which can be tricky to guess), but I'm hoping to avoid biasing the posterior too much with (very) informative priors. That's a real risk, right?
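For context, by "more informative priors" I just mean something like swapping a flat prior for a normal with a hand-picked scale; a minimal sketch (names and values are placeholders, not my actual model):

```python
import pymc as pm

with pm.Model():
    # Flat / very wide prior: lets the likelihood dominate, but with only
    # ~8 observations the chains can wander and R-hat suffers.
    theta_flat = pm.Uniform("theta_flat", lower=-100.0, upper=100.0)

    # Weakly informative alternative: a normal centered on a rough guess,
    # with a standard deviation wide enough not to dominate the data.
    theta_weak = pm.Normal("theta_weak", mu=0.0, sigma=10.0)
```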

The probabilistic model attempts to "adjust" the parameters of an external model, from which I can't get analytical derivatives, using data observations. Another complicating factor in my case is that I have very few observations (about 8). The PyMC model computes a log-likelihood that measures how well the output of the external model matches the observed value (actually, I have a specified distribution for the observed value, so I essentially use its PDF with some modifications for my purposes). See Sampler recommendation for black-box likelihood and Observed Variable as a Range for some background on why I'm doing what I'm doing. :slight_smile:
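In case it helps to see the structure, here's a minimal sketch of how I wrap the external model in a pytensor Op and feed the resulting log-likelihood to PyMC via `pm.Potential`. The `external_model` and `log_likelihood` functions below are stand-ins for my real code (in reality the Gaussian comparison is replaced by the modified PDF of the observed value's distribution), and the priors, data, and step method are placeholders:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.graph import Op


def external_model(theta):
    # Stand-in for the real external model: takes a parameter vector,
    # returns predictions at the observation points.
    return theta[0] + theta[1] * np.arange(8.0)


def log_likelihood(theta, data, sigma):
    # Stand-in for my custom log-likelihood: here just a Gaussian
    # comparison of predictions against the observed values.
    pred = external_model(theta)
    return -0.5 * np.sum(((data - pred) / sigma) ** 2)


class BlackBoxLogLike(Op):
    """Wraps the external, non-differentiable log-likelihood as a pytensor Op."""

    itypes = [pt.dvector]  # parameter vector theta
    otypes = [pt.dscalar]  # scalar log-likelihood

    def __init__(self, data, sigma):
        self.data = data
        self.sigma = sigma

    def perform(self, node, inputs, output_storage):
        (theta,) = inputs
        output_storage[0][0] = np.array(log_likelihood(theta, self.data, self.sigma))


# Placeholder for my ~8 observations.
data = np.random.default_rng(0).normal(loc=1.0, scale=0.5, size=8)
loglike_op = BlackBoxLogLike(data, sigma=0.5)

with pm.Model() as model:
    a = pm.Normal("a", mu=0.0, sigma=10.0)
    b = pm.Normal("b", mu=0.0, sigma=10.0)
    theta = pt.stack([a, b])

    # No gradients from the external model, so expose the log-likelihood
    # through a Potential and use a gradient-free step method.
    pm.Potential("loglike", loglike_op(theta))
    idata = pm.sample(draws=2000, tune=2000, chains=4, step=pm.Slice())
```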