How to use the posterior distribution of one model as a prior distribution for another model

These approaches have proved quite useful for a federated use case I'm working on (restrictions are very strict, so no data sharing and no open ports are allowed, which rules out anything like federated learning, sharing likelihoods [reconstruction risk], etc.). @perrette's points regarding correlations are quite important, though in cases where we're interested only in estimates based on one or two parameters that parametrise the sampling distribution (e.g. `mu` and `sigma` of `y = pm.Normal('y', mu, sigma, observed=obs)`), I imagine dropping the correlations is reasonable as long as the model/question is simple. What would be the most detrimental problem for inference in such cases?
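To make the correlation point concrete, here is a toy numpy sketch (all numbers made up, purely illustrative): if alpha and beta are strongly negatively correlated in the posterior and the quantity of interest is their sum, carrying forward only independent marginal priors badly inflates its variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint "posterior" for (alpha, beta) with strong negative correlation
# (made-up numbers, purely illustrative).
cov = np.array([[1.0, -0.9],
                [-0.9, 1.0]])
joint = rng.multivariate_normal([0.0, 0.0], cov, size=20_000)

# Carrying only the marginals forward (independent normal priors) keeps
# each parameter's mean and sd but drops the correlation.
marginal = np.column_stack([
    rng.normal(joint[:, 0].mean(), joint[:, 0].std(), 20_000),
    rng.normal(joint[:, 1].mean(), joint[:, 1].std(), 20_000),
])

# For a derived quantity like mu = alpha + beta, the joint posterior gives
# Var = 1 + 1 - 2*0.9 = 0.2, while independent marginals give about 2.
var_joint = joint.sum(axis=1).var()
var_indep = marginal.sum(axis=1).var()
print(var_indep / var_joint)  # about 10x inflated uncertainty
```

So even in a "simple" model, any inference that combines the parameters (predictions, derived quantities) inherits the distortion, which is arguably the most detrimental effect of dropping correlations.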

The `prior_from_idata` function from pymc-experimental comes in quite handy in this regard as well, though I'm a bit confused about how to report it. Would this be appropriate notation? (Based on what is here: pymc_experimental.utils.prior — pymc_experimental 0.0.18 documentation):

π_{p}^{(s)} = θ_{p}^{(s-1)} + L_{p}^{(s-1)}B_{p}
(α^{(s)}, β^{(s)}, σ^{(s)}) = π_{p}^{(s)}
μ^{(s)} = α^{(s)} + β^{(s)}
y_{i}^{(s)} ~ N(μ^{(s)}, σ^{(s)})

Where π_{p}^{(s)} are the new priors, θ_{p}^{(s-1)} is the joint posterior mean (p = 1…3, over the parameters α^{(s-1)}, β^{(s-1)}, σ^{(s-1)}) obtained from the previously sampled model (s-1), L_{p}^{(s-1)} is the Cholesky factor of the covariance matrix of the same joint posterior, and B_{p} is a base normal distribution with standard deviation equal to one and mean equal to a vector of zeros of the same size as the joint posterior mean.
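Putting that notation into a quick numpy sketch (made-up draws standing in for `idata.posterior`; this mirrors the transform as I understand it from the docs, not the library's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the model (s-1) joint posterior over (alpha, beta, sigma);
# in practice these draws would come from idata (made-up numbers here).
post_mean = np.array([1.0, 2.0, 0.5])
post_cov = np.array([[0.04, 0.01, 0.00],
                     [0.01, 0.09, 0.00],
                     [0.00, 0.00, 0.01]])
posterior = rng.multivariate_normal(post_mean, post_cov, size=4_000)

theta = posterior.mean(axis=0)               # joint posterior mean theta^{(s-1)}
L = np.linalg.cholesky(np.cov(posterior.T))  # Cholesky factor L^{(s-1)}

# Base distribution B: standard normals, zero mean, one per parameter.
B = rng.standard_normal(size=(4_000, 3))

# New priors pi^{(s)} = theta + L B: a correlated MvNormal that
# reproduces the old posterior's mean and covariance.
pi = theta + B @ L.T

# First two moments of pi match those of the original joint posterior.
assert np.allclose(pi.mean(axis=0), theta, atol=0.02)
assert np.allclose(np.cov(pi.T), np.cov(posterior.T), atol=0.05)
```

In other words, π^{(s)} is just a multivariate normal approximation of the (s-1) posterior, reparametrised through the base normals B, which seems consistent with the notation above.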

A bit of a silly example, but it's just to illustrate the question. Also, how can I cite pymc-experimental? Should I just cite the general PyMC reference? Many thanks.