How can I effectively propagate parameter uncertainties from one hierarchical level to the next?

Wow, thanks a lot for your very insightful reply @iavicenna. I understand that the former approach you mentioned is the classical hierarchical model, which is also what most of the available documentation covers. But the latter is EXACTLY what I want to do:

Building on your suggestion, the issue that bothers me is that the posterior of m1 after the first level of inference need not look like a normal distribution, so I would prefer not to assume that. Additionally, I feel that plugging the posterior mean and sd into an assumed normal distribution would just be smuggling in an informed prior in place of the actual distribution of m1. Instead, I would like to somehow use the entire posterior of m1 as it is in the second level, which would better propagate the uncertainty in m1 (my data is quite chaotic!).

What I can think of at this stage is the following:

  1. Using pm.Deterministic() to pass on the sampled values of m1 (from level 1) and using them in the second level.
  2. Using pm.ConstantData() to pass on the sampled values of m1 and using them in the second level (a rough sketch follows this list).
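
For concreteness, here is a minimal sketch of roughly what I have in mind, using toy data and made-up names (`level1`, `level2`, `y1`, `y2`, `m1_draws`). `pm.ConstantData()` just stores the raw level-1 draws of m1 in the second model, while `pm.Interpolated` (the approach from PyMC's "Updating Priors" example) turns those draws into an empirical prior, so the second level samples m1 from the shape of its level-1 posterior instead of a fitted normal:

```python
import numpy as np
import pymc as pm
from scipy import stats

rng = np.random.default_rng(0)

# --- Level 1: infer m1 from the first dataset (toy data for illustration) ---
y1 = rng.normal(loc=2.0, scale=1.0, size=50)

with pm.Model() as level1:
    m1 = pm.Normal("m1", mu=0.0, sigma=10.0)
    sigma1 = pm.HalfNormal("sigma1", sigma=5.0)
    pm.Normal("y1_obs", mu=m1, sigma=sigma1, observed=y1)
    idata1 = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Flatten the posterior draws of m1: (chain, draw) -> 1-D array
m1_draws = idata1.posterior["m1"].values.ravel()

# Turn the draws into an empirical density on a grid (KDE) for pm.Interpolated
kde = stats.gaussian_kde(m1_draws)
x_points = np.linspace(m1_draws.min(), m1_draws.max(), 200)
pdf_points = kde(x_points)

# --- Level 2: use the full (marginal) posterior of m1 as the prior ---
y2 = rng.normal(loc=2.0 * 3.0, scale=1.5, size=40)  # toy second-level data

with pm.Model() as level2:
    # Idea 2: the raw draws can be attached as data, but this only stores them
    pm.ConstantData("m1_draws", m1_draws)

    # Empirical prior built from the level-1 posterior of m1
    m1_2 = pm.Interpolated("m1", x_points, pdf_points)

    slope = pm.Normal("slope", mu=0.0, sigma=10.0)
    sigma2 = pm.HalfNormal("sigma2", sigma=5.0)
    pm.Normal("y2_obs", mu=slope * m1_2, sigma=sigma2, observed=y2)
    idata2 = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```

(I realise this only carries the marginal posterior of m1, not its correlations with other level-1 parameters, which is part of what I am unsure about.)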

It would be very helpful if you could let me know what you think. Once again, thanks a lot for your reply :slight_smile:

PS: In case you would like to take a closer look at the model, I posted an earlier question (link below), albeit with an older version of PyMC and Theano.