How to reduce bias between posterior and true values?

Hi

I’m working on a model to calculate the activity of a radioactive source. This topic is related to my previous one here.

As a matter of fact, the model performs well overall. However, the posterior for the activity is biased: the estimate comes out at 112, whereas the true value is 100 (see the plot). As can be seen from the summary, even the lower 3% HDI bound is slightly above the true value.


Are there any options to reduce this bias? How can I “shift” the credible interval (the 3% and 97% HDI bounds)?

Any advice would be highly appreciated.

Is this a systematic bias? If so, there may be an error in your model specification.

I’m defining the “activity” parameter in the model using a Gamma distribution, like so:

...
# Hyperpriors for the Gamma shape (alpha) and rate (beta)
alpha_prior = pm.HalfNormal("alpha_prior", sigma=1)
beta_prior = pm.HalfNormal("beta_prior", sigma=1)
# Activity of the source
act = pm.Gamma("act", alpha=alpha_prior, beta=beta_prior)
...

I’ve also observed that when alpha and beta are set manually so that the prior has a very small spread, the results are generally more accurate. For example, with alpha=10000 and beta=100 the posterior mean of the activity is 100.43, which is very close to the true value of 100. This suggests to me that the underlying physical model I’m working with is correctly specified.
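
For reference, a Gamma(alpha, beta) prior has mean alpha/beta and standard deviation sqrt(alpha)/beta, so alpha=10000 and beta=100 give a prior centred on 100 with a standard deviation of 1. A quick way to check what a candidate prior implies (a minimal sketch using SciPy):

from scipy import stats

# What does a Gamma(alpha, beta) prior imply for the activity?
# PyMC's Gamma uses a rate parameter beta, which matches scipy's scale = 1 / beta.
alpha, beta = 10000, 100
prior = stats.gamma(a=alpha, scale=1 / beta)

print(prior.mean())           # 100.0 = alpha / beta
print(prior.std())            # 1.0   = sqrt(alpha) / beta
print(prior.interval(0.94))   # roughly (98.1, 101.9), a very tight 94% interval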

When you say the true value is 100, how do you know this?

I thought you were doing simulation with fake data.

1 Like

Also, showing just the priors is not enough for us to give advice. In general, you can simulate data with your model and then run inference on it to see whether it recovers the simulated parameters.

That will tell you whether inference works, assuming the model is the correct one (which it is here, because you used it to simulate the data).

Whether the inference works for real data is harder to know, because you don’t know how wrong your model is (it’s always > 0 wrong).
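
For example, a minimal parameter-recovery sketch along these lines (the Poisson counting likelihood, the efficiency, and the counting time below are assumptions standing in for your actual model):

import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(42)

# 1) Simulate fake data from known parameter values (assumed Poisson counting model).
true_activity = 100.0   # the value the fit should recover
efficiency = 0.3        # assumed detector efficiency
live_time = 10.0        # assumed counting time per measurement
counts = rng.poisson(true_activity * efficiency * live_time, size=50)

# 2) Fit the same model to the simulated data.
with pm.Model() as model:
    alpha_prior = pm.HalfNormal("alpha_prior", sigma=1)
    beta_prior = pm.HalfNormal("beta_prior", sigma=1)
    act = pm.Gamma("act", alpha=alpha_prior, beta=beta_prior)
    pm.Poisson("counts", mu=act * efficiency * live_time, observed=counts)
    idata = pm.sample(1000, tune=1000, random_seed=42)

# 3) Check whether the posterior for `act` recovers true_activity = 100.
print(az.summary(idata, var_names=["act"]))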

2 Likes

Speaking from limited experience, and from doing the very thing that Ricardo mentions, I have found that many such issues lie in the model specification and/or the specification of the priors.

Prior and posterior predictive checks have been invaluable to me (as has this forum).
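
For example, a minimal sketch, assuming a fitted model and idata named as in the recovery sketch above:

import arviz as az
import pymc as pm

# Reusing `model` and `idata` from a fit like the recovery sketch above (assumed names).
with model:
    idata.extend(pm.sample_prior_predictive(random_seed=42))
    idata.extend(pm.sample_posterior_predictive(idata, random_seed=42))

# Do draws from the prior alone already cover (or exclude) plausible data values?
az.plot_ppc(idata, group="prior")

# Do posterior predictive draws reproduce the observed data?
az.plot_ppc(idata, group="posterior")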

1 Like

Correct. I’m using synthetic data to verify that the model works, which is why I know what the target parameters should be. I’ve found that changing the priors significantly affects the results. As I understand it, there’s no way to correct the bias other than adjusting the model or the priors, is that right?

If the outputs are highly sensitive to the prior, it means the data are not informative about this parameter. This is likely an identification problem. There are no simple ways to diagnose these types of problems, because they are model-specific. To paraphrase Tolstoy, every mis-specified model is mis-specified in its own way. Here is a blog post by Michael Betancourt on the subject, but it’s not exactly brief.

Looking at pair plots can be informative in these situations. You might also consider mapping out the gradients of the likelihood function over a range of values of this parameter, with the other parameters set to their posterior means. My expectation is that they will be close to zero.
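
A rough sketch of both ideas, reusing the hypothetical model and idata names from the earlier recovery sketch; instead of computing gradients directly, it profiles the log-probability over a grid (a nearly flat profile means near-zero gradients):

import numpy as np
import arviz as az

# 1) Pair plot: ridges or strong correlations between parameters often signal
#    an identification problem.
az.plot_pair(idata, var_names=["act", "alpha_prior", "beta_prior"], kind="kde")

# 2) Profile the model log-probability over a grid of activity values, holding the
#    other parameters near their posterior means (on PyMC's log-transformed scale).
#    A nearly flat profile means the gradient with respect to `act` is close to zero,
#    i.e. the data say little about it.
logp_fn = model.compile_logp()
point = model.initial_point()
point["alpha_prior_log__"] = float(np.log(idata.posterior["alpha_prior"].mean()))
point["beta_prior_log__"] = float(np.log(idata.posterior["beta_prior"].mean()))

for act_value in np.linspace(50.0, 150.0, 11):
    point["act_log__"] = float(np.log(act_value))
    print(act_value, logp_fn(point))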

1 Like

Thanks everybody for your explanations. This gave me some food for thought; I will continue my investigations.

1 Like

You have to be careful here. Even if you simulate with parameters \theta, the posterior mean is not going to be \theta. It will approach \theta as the amount of data goes to infinity, under some mild conditions that include the number of parameters not growing with the data.

This is easy to understand with coins. The true probability of heads is 50%, but if I toss the coin 7 times, there’s no way I can get an estimate of exactly 50%. This isn’t bias, it’s just variance. That is, if you run a bunch of 7-toss experiments and look at their averages, they’ll be bunched around 50% and have a long-run average of 50%.

Or you can take a standard normal. I might draw three random numbers and observe 1.7, 2.9, and -0.3. I don’t get a sample mean of zero even though that was the value of the location parameter used to generate the data.
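
A quick NumPy sketch of the coin example:

import numpy as np

rng = np.random.default_rng(0)

# 10,000 experiments of 7 fair coin tosses each
heads = rng.binomial(n=7, p=0.5, size=10_000)
estimates = heads / 7

# No single experiment can give exactly 0.5 (only multiples of 1/7 are possible),
# but the estimates are centred on 0.5 in the long run.
print(np.unique(estimates))   # 0.0, 0.1428..., 0.2857..., ..., 1.0
print(estimates.mean())       # ~0.50
print(estimates.std())        # ~0.19, the sampling variability at n = 7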

If you want to validate that you’re not getting bias, you can run a bunch of simulations and fits and then use simulation-based calibration (SBC) to measure whether your sampler is calibrated.
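
A bare-bones version of that loop, as a sketch (the Poisson counting model and the constants are the same assumptions as in the earlier recovery sketch, and 100 replications is just a starting point):

import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
efficiency, live_time, n_obs = 0.3, 10.0, 50   # assumed experiment constants
ranks = []

for _ in range(100):   # number of replications; more is better
    # 1) Draw "true" parameters from the prior.
    alpha = abs(rng.normal(0.0, 1.0))
    beta = abs(rng.normal(0.0, 1.0))
    true_act = rng.gamma(shape=alpha, scale=1.0 / beta)
    # 2) Simulate data from those parameters.
    counts = rng.poisson(true_act * efficiency * live_time, size=n_obs)
    # 3) Fit the model to the simulated data.
    with pm.Model():
        a = pm.HalfNormal("alpha_prior", sigma=1)
        b = pm.HalfNormal("beta_prior", sigma=1)
        act = pm.Gamma("act", alpha=a, beta=b)
        pm.Poisson("counts", mu=act * efficiency * live_time, observed=counts)
        idata = pm.sample(500, tune=500, chains=2, progressbar=False)
    # 4) Record the rank of the true value among the posterior draws
    #    (strict SBC would thin the draws to reduce autocorrelation).
    draws = idata.posterior["act"].values.ravel()
    ranks.append(int((draws < true_act).sum()))

# If inference is calibrated, these ranks should be approximately uniform.
print(np.histogram(ranks, bins=10)[0])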

2 Likes