How to mix known and unknown information for a covariance matrix?

Hey Chartl,

thank you very much for your very instructive post! I will keep in mind to use simulations in the future. If you don’t mind, I have five follow-up questions regarding your post, for my better understanding of PyMC3:

regarding your post:

1.) Why do you use the log of the standard error? Is it numerically more stable?
2.) I don’t understand what you did with z_treat and z_group. I think I lack the theoretical knowledge here. Any hint as to which direction I should look in?
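For context, here is my rough guess at what the log and z constructions do, as a small numpy sketch (just my own understanding, not your actual model): sampling on the log scale keeps the standard deviation positive no matter what the sampler proposes, and z * sd shifts and scales a standard normal instead of sampling the effect directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# The sampler can propose log_sd anywhere on the real line ...
log_sd = rng.normal(loc=0.0, scale=1.0, size=10_000)
sd = np.exp(log_sd)  # ... yet sd itself is always positive
assert np.all(sd > 0)

# My guess at the z_treat / z_group trick: draw z ~ N(0, 1),
# then scale and shift it, rather than sampling the effect directly.
mu, sigma = 2.0, 0.5
z = rng.normal(size=10_000)
effect = mu + sigma * z  # distributed as N(mu, sigma)
```

Is that roughly the idea?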

regarding PyMC3:

3.) Just for completeness: in my code example I used design matrices to construct my mu vector. I did it this way because I didn’t know whether it would be possible with loops. Can I fill an array (tensor) with stochastic variables, maybe even from different distributions, e.g. mu = [pm.Uniform(0, 2), pm.Normal(0, 10), …]? Do I need theano.scan for this?
4.) When should I use pm.Deterministic? I still can’t quite get my head around it. What would happen if you had omitted it?
5.) Can I use LKJCholeskyCov to estimate the off-diagonal elements of the covariance matrix (an (NK, NK) tensor)? LKJCholeskyCov takes a distribution for the standard deviations. Can I define a custom distribution where sd is a 1-D array with each entry defined by the model, i.e. vec(e_{ij}^2 + d_j^2 + g_i^2)?
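To make the design-matrix part of question 3 concrete, here is roughly what I am doing now, as a self-contained numpy sketch (shapes and values are made up; in the real model beta would be random variables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: n observations, p predictors
n, p = 8, 3
X = rng.normal(size=(n, p))        # design matrix
beta = np.array([0.5, -1.0, 2.0])  # stand-in for the model's coefficients

# One matrix product builds the whole mu vector at once,
# instead of filling it entry by entry in a loop.
mu = X @ beta
assert mu.shape == (n,)
```

My question is whether the loop-free alternative (stacking heterogeneous stochastic variables into one tensor) is possible at all.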
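And to make question 5 concrete, the sd array I have in mind would be built from the model’s variance components roughly like this (pure numpy, with hypothetical sizes N and K; I take the square root at the end since, as I read it, the sum e_{ij}^2 + d_j^2 + g_i^2 is a variance):

```python
import numpy as np

rng = np.random.default_rng(2)

N, K = 4, 3  # hypothetical group / treatment counts

# Variance components; in the model these would be random variables
e = rng.uniform(0.1, 1.0, size=(N, K))  # per-cell sd
d = rng.uniform(0.1, 1.0, size=K)       # per-treatment sd
g = rng.uniform(0.1, 1.0, size=N)       # per-group sd

# vec(e_ij^2 + d_j^2 + g_i^2): one variance per (i, j) cell,
# flattened to a 1-D array of length N*K
var = e**2 + d[None, :]**2 + g[:, None]**2
sd = np.sqrt(var).ravel()
assert sd.shape == (N * K,)
```

Is it possible to hand something like this sd vector to LKJCholeskyCov instead of a fixed distribution?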

Cheers,
Mister-Knister