Posterior to prior elicitation

In PreliZ, the prior elicitation package, there is a function for “posterior to prior” elicitation: Predictive Elicitation — PreliZ 0.12.0 documentation (last section of the page). I couldn’t find any reference for this approach and I’m not sure I understand the logic.

Does anyone know where it comes from or can anyone point me to a presentation of the ideas?

Thanks in advance!
Opher

This method fits the posterior distribution derived from the model to align with its prior distribution by maximizing the likelihood of each posterior marginal with respect to the prior.
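Taken literally, that sentence describes projecting each posterior marginal back onto a prior family by maximum likelihood. Here is a minimal sketch of that single step using plain scipy; this is not PreliZ’s actual implementation, and the Gamma family and the fake posterior draws are assumptions just for illustration:

```python
# Sketch: fit a prior family to one posterior marginal by MLE.
# NOT PreliZ's code; the Gamma family and fake draws are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pretend these are posterior draws of a positive parameter (e.g. a scale),
# as you would get from sampling the model with some initial priors.
posterior_samples = rng.gamma(shape=5.0, scale=0.4, size=2000)

# "Project" the posterior marginal onto a Gamma prior family: scipy's .fit()
# returns the parameters that maximize the likelihood of the samples.
alpha, loc, scale = stats.gamma.fit(posterior_samples, floc=0)
print(f"Elicited prior: Gamma(alpha={alpha:.2f}, beta={1 / scale:.2f})")
```

The fitted distribution would then be used as the new prior for that parameter.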

My reading of that sentence is the following. Usually you set some priors, get the posteriors, and from there you can evaluate the likelihood of the data under your model. Generally speaking, the more likely your data are under your model, the better your model is at explaining them. Setting different priors will yield different posteriors and thus different likelihoods, so you could decide to mess around with the priors until you find the ones that yield the best likelihood.
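To make that reading concrete, here is a toy comparison in a conjugate Normal model with known observation noise, where the marginal likelihood of the data under a given prior is available in closed form. The model and the two candidate priors are made up purely for illustration and have nothing to do with PreliZ’s internals:

```python
# Toy example: compare the marginal likelihood of the data under two priors.
# The conjugate Normal model and hyperparameters are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 1.0                                   # known observation noise
y = rng.normal(loc=2.0, scale=sigma, size=30) # observed data
n = len(y)

def log_marginal_likelihood(m0, s0):
    """log p(y) for y_i ~ Normal(mu, sigma) with prior mu ~ Normal(m0, s0)."""
    cov = sigma**2 * np.eye(n) + s0**2 * np.ones((n, n))
    return stats.multivariate_normal.logpdf(y, mean=np.full(n, m0), cov=cov)

# Two candidate priors for mu: a vague one and one centred near the data.
for m0, s0 in [(0.0, 10.0), (2.0, 0.5)]:
    print(f"prior Normal({m0}, {s0}): log p(y) = {log_marginal_likelihood(m0, s0):.2f}")
```

With these made-up numbers the prior centred near the data gets the higher marginal likelihood, which is exactly the kind of “cheating” discussed in the next paragraph.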

Note that since you are using the data (via the likelihood evaluation) to choose your priors, you are essentially working backwards, hence the posterior-to-prior idea: the priors are no longer really priors but are based on the observations! This is a bit circular and prone to overfitting, which I think is why it’s presented as an “experimental method”: it’s not meant for final model design, but for experimenting and answering questions like “what’s the best possible fit I could get if I decided to ‘cheat’?”