# Hyperpriors for a MvNormal prior

I’ve had a look around but haven’t found a solution for this. I’m using an MvNormal as a prior for other parameters in a model. I’ve got this working when manually specifying the mu and cov, using `pm.MvNormal`:

```python
mu = np.array([-1, 0.5])
cov = np.array([[1.0, -0.5], [-0.5, 2]])
VALS = pm.MvNormal("VALS", mu=mu, cov=cov, shape=(participants, 2))
```

But I am not sure how to go about defining priors over mu and cov. One thread I read recommended the LKJ prior, and I managed to get that working with something along these lines:

```python
m = pm.Normal("m", -1, 0.5)
c = pm.Normal("c", 0, 5)
# LKJ prior for the covariance matrix
packed_L = pm.LKJCholeskyCov("packed_L", n=2, eta=2.0, sd_dist=pm.HalfCauchy.dist(2.5))
# expand the packed lower triangle into a (2, 2) matrix
L = pm.expand_packed_triangular(2, packed_L)
VALS = pm.MvNormal("VALS", mu=[m, c], chol=L, shape=(participants, 2))
```
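As an aside on interpreting that Cholesky parameterization: it is just a change of variables, and the standard deviations and correlation can be recovered from it. A plain-NumPy sketch (not PyMC code) of the relationship:

```python
import numpy as np

# The Cholesky factor L satisfies cov = L @ L.T; the sds and the
# correlation are recoverable from the covariance it encodes.
cov = np.array([[1.0, -0.5],
                [-0.5, 2.0]])
L = np.linalg.cholesky(cov)        # lower triangular

sd = np.sqrt(np.diag(cov))         # standard deviations
corr = cov / np.outer(sd, sd)      # correlation matrix
rho = corr[0, 1]                   # the single correlation in the 2-d case

assert np.allclose(L @ L.T, cov)
```

If I recall correctly, newer PyMC versions can also return the correlations and standard deviations directly from `LKJCholeskyCov` via `compute_corr=True`, which saves doing this by hand.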

Question 1: If you have a prior over the correlation coefficient (rho), is there a way to pass it to `LKJCholeskyCov`?

Question 2: If not, is there a way forward using the following approach?

```python
m = pm.Normal("m", -1, 0.5)
c = pm.Normal("c", 0, 5)
# set up priors for the covariance
σm = 2  # could be a stochastic also
σc = 2  # could be a stochastic also
rho = pm.Beta("rho", 2, 5)  # prior over (0, 1)
ρ = pm.Deterministic("ρ", (rho * 2) - 1)  # transform to (-1, 1)
cov = np.array([[σm**2, ρ * σm * σc],
                [ρ * σm * σc, σc**2]])
# bivariate normal
VALS = pm.MvNormal("VALS", mu=[m, c], cov=cov, shape=(participants, 2))
```

This code does not work, presumably because the stochastic nodes get mangled when placed inside a NumPy covariance array? This latter approach is my preferred one, as it’s easier to interpret, but is it fixable? I’ve tried omitting the `np.array()` around the covariance matrix, but that doesn’t seem to work either.

Any ideas?

You can definitely create a covariance matrix like that, but you have to use Theano constructors (don’t ask me which, though), and it will probably not sample very well.
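For what it’s worth, the construction itself is fine once the matrix is assembled with tensor ops rather than `np.array` (in Theano-era PyMC, something like `tt.stacklists` — from memory, so treat that name as an assumption). A plain-NumPy sketch of the construction and the constraint the result must satisfy:

```python
import numpy as np

def make_cov(sigma_m, sigma_c, rho):
    # Assemble a 2x2 covariance from two standard deviations and a correlation.
    # In a PyMC model the rows would be stacked with symbolic tensor ops
    # (e.g. Theano's stacklists) instead of np.array.
    return np.array([[sigma_m**2, rho * sigma_m * sigma_c],
                     [rho * sigma_m * sigma_c, sigma_c**2]])

cov = make_cov(2.0, 2.0, -0.4)
# A valid covariance matrix is symmetric and positive definite whenever |rho| < 1.
assert np.allclose(cov, cov.T)
assert np.all(np.linalg.eigvalsh(cov) > 0)
```

The sampling difficulty comes from the parameterization, not the assembly: the Cholesky/LKJ route exists precisely because it keeps the matrix valid by construction.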

You can get rough (zero-centered) prior control over the correlations with `LKJCholeskyCov` by adjusting the `eta` parameter, but I guess you are not happy with that?
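To make `eta` concrete in the 2×2 case: the LKJ prior implies a marginal density on the single correlation of p(ρ) ∝ (1 − ρ²)^(η−1), so η = 1 is uniform over (−1, 1) and larger η concentrates mass near ρ = 0. A small sketch (my own illustration, not from the thread):

```python
import numpy as np

def lkj_rho_density(rho, eta):
    # Marginal LKJ density on the correlation of a 2x2 matrix,
    # normalized numerically on a grid over (-1, 1).
    grid = np.linspace(-0.999, 0.999, 2001)
    unnorm_grid = (1 - grid**2) ** (eta - 1)
    norm = unnorm_grid.sum() * (grid[1] - grid[0])  # Riemann sum
    return (1 - rho**2) ** (eta - 1) / norm

# eta = 1: flat over correlations; eta = 4: shrinks toward rho = 0.
flat = lkj_rho_density(np.array([-0.5, 0.0, 0.5]), eta=1.0)
peaked = lkj_rho_density(np.array([0.0, 0.8]), eta=4.0)
```

So a strong prior belief that the correlation is near zero maps to large `eta`; there is no built-in way to center the prior on a nonzero ρ.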

It’s not that I’m unhappy; I’m just completely new to the LKJ, so I don’t have a good feel for how to change the parameters to reflect different prior beliefs.

Priors for a covariance matrix are a bit tricky because it has to obey several constraints, especially in larger dimensions. The Stan manual might be helpful here (as always): 1.13 Multivariate priors for hierarchical models | Stan User’s Guide
