Dear all,
After years of working with and teaching Gibbs samplers in the context of discrete choice (i.e. multinomial logit/probit), I've decided to explore the world of PyMC, and from what I've seen so far I'm very impressed by its versatility.
I have a mixed logit (hierarchical Bayes) model running with two random parameters, using the following structure:
import pymc as pm
import pytensor.tensor as pt

# RP 1:
mu_asc = pm.Normal("mu_asc", 0, 10)
sigma_asc = pm.InverseGamma("sigma_asc", alpha=3, beta=0.5)  # prior on the variance
asc_alt1 = pm.Normal("asc_alt1", mu=mu_asc, sigma=pt.sqrt(sigma_asc), dims="individuals")

# RP 2:
mu_tc = pm.Normal("mu_tc", 0, 10)
sigma_tc = pm.InverseGamma("sigma_tc", alpha=3, beta=0.5)  # prior on the variance
beta_tc = pm.Normal("beta_tc", mu=mu_tc, sigma=pt.sqrt(sigma_tc), dims="individuals")
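(For context, "individuals" and "random_vars" are coords on the model, set up roughly as in the sketch below; `ids` is a placeholder for my respondent identifiers:)

coords = {
    "individuals": ids,  # one entry per respondent (placeholder)
    "random_vars": ["asc_alt1", "beta_tc"],  # the two random parameters
}
with pm.Model(coords=coords) as model:
    ...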
My intended extension is to estimate the correlation matrix between these two random parameters. Normally I'd use an inverse-Wishart prior, but that isn't supported with NUTS samplers. The LKJ setup from the examples (correlation across alternatives) looks suitable, but I'm lost on two aspects of the implementation:
- dimensions: I've got dims "individuals" and "random_vars", so two random parameters for each individual. What are the relevant dims to include in pm.MvNormal(), and subsequently when referring to it inside pm.Model?
rps = pm.MvNormal("rp", mu=mu_rp, chol=chol_rp, dims="random_vars")
u1 = rps[person_indx][0] + beta_tt * database["tt1"] - pt.exp(rps[person_indx][1]) * database["tc1"]
- I find the labelling of pm.LKJCholeskyCov() slightly confusing. How do I properly set up the prior for the 2x2 Cholesky matrix chol_rp referred to above in pm.MvNormal?
chol_rp, corr, stds = pm.LKJCholeskyCov(
    "chol_rp", n=2, eta=2.0, sd_dist=pm.HalfNormal.dist(10)
)
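Putting these together, my current best guess is the sketch below (inside the model context above; `mu_rp` is an assumed Normal hyperprior on the two means, `person_indx` maps each observation to its individual, and `beta_tt` is a fixed travel-time coefficient):

mu_rp = pm.Normal("mu_rp", 0, 10, dims="random_vars")  # assumed hyperprior on the two means
rps = pm.MvNormal("rps", mu=mu_rp, chol=chol_rp, dims=("individuals", "random_vars"))
u1 = rps[person_indx, 0] + beta_tt * database["tt1"] - pt.exp(rps[person_indx, 1]) * database["tc1"]

Is dims=("individuals", "random_vars") the right choice here, and is rps[person_indx, 0] the intended way to pick out each random parameter per observation?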
Many thanks for your feedback,
Thijs