How to use the posterior distribution of one model as a prior distribution for another model

Hi @Simon, yes, I think you can do that. Just don't forget to transform the trace before passing it to prior_from_idata. Note that this is only necessary (and useful in the first place) if your data has long tails; otherwise you can use prior_from_idata without any modification.

import numpy as np

x = idatas[i].posterior['mu'].values  # assuming the posterior of mu is (shifted) log-normally distributed
x[:] = (np.log(x - mu_loc) - mu_m)/mu_s  # shift, take logs and standardize, in place

The rest looks fine to me (provided the prior is then transformed back to the log-normal scale inside the new model; see the sketch below).
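To make the "transform back" step concrete, here is a minimal sketch. It assumes prior_from_idata (from pymc-experimental) returns a dict mapping variable names to tensors, reuses the mu_loc, mu_m, mu_s constants from the snippet above, and the import path and variable names are placeholders:

import pymc as pm
from pymc_experimental.utils.prior import prior_from_idata  # assumed import path

with pm.Model() as new_model:
    priors = prior_from_idata(idatas[i], var_names=['mu'])  # assumption: returns {name: tensor}
    mu_std = priors['mu']  # lives on the standardized (normal) scale
    # invert z = (log(x - mu_loc) - mu_m)/mu_s  =>  x = exp(z*mu_s + mu_m) + mu_loc
    mu_natural = pm.Deterministic('mu_natural', pm.math.exp(mu_std*mu_s + mu_m) + mu_loc)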

If your data is already normally distributed though (as seems to be the case in the example you shared), you won't gain any advantage from doing that, since I think prior_from_idata already samples in a way that is normalized (related to the next paragraph). The function I wrote only adds value for posteriors with long tails. And it paves the way to extending the approach to any distribution, provided the marginal can be appropriately transformed (and, more critically, appropriately transformed back with pymc tensors!).

Something else: one change I would make to prior_from_idata is to sample from an IID Normal distribution and dot-multiply the samples with the Cholesky factor, which might be better behaved than using MvNormal. That's also what I am currently testing, as I am having convergence issues (the issues I am experiencing may be unrelated, but I read elsewhere that sampling from IID variables can help avoid "funnels"). I am talking about replacing the line:

mv = pm.MvNormal(label+'_mv', mu=np.zeros(len(names)), chol=chol, dims=dim)

with:

coparams = pm.Normal(label+'_iid', dims=dim)
mv = chol @ coparams

(I’m still testing it)
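For anyone wanting to try the same swap outside of prior_from_idata, here is a self-contained sketch of the two parameterizations. The covariance matrix and variable names are made up for illustration; they are not the actual prior_from_idata code:

import numpy as np
import pymc as pm

# made-up covariance, standing in for the one estimated from the transformed trace
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])
chol = np.linalg.cholesky(cov)

with pm.Model():
    # centered version: draw the correlated vector directly
    mv_centered = pm.MvNormal("mv_centered", mu=np.zeros(2), chol=chol)

with pm.Model():
    # non-centered version: IID standard normals rotated by the Cholesky factor,
    # so chol @ z has the same covariance as mv_centered
    z = pm.Normal("z_iid", mu=0.0, sigma=1.0, shape=2)
    mv = pm.Deterministic("mv", pm.math.dot(chol, z))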

PS: sorry for the many edits, to whoever read the reply immediately.
