Assuming you know the exact covariance matrix \Sigma a priori, you would just need to plug it into a model that uses a multivariate Student-t distribution; in PyMC3 this is the `pm.MvStudentT` distribution. I may not have the shapes quite right, but your model would look something like:
```python
import pymc3 as pm

covariance_matrix = ...  # your fixed covariance matrix

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sigma=1, shape=(n_channels,))
    nu = pm.Exponential('nu', 1 / (nu_mean - nu_min)) + nu_min
    likelihood = pm.MvStudentT('modeled', mu=mu, nu=nu, cov=covariance_matrix,
                               observed=data)  # data: your observed channels
```
Notice that `std` has disappeared from the model. This is because that information is embedded in the (fixed) covariance matrix.
If you are looking to infer the variances, you could relax this model a bit: instead of fixing a covariance matrix, fix a prescribed correlation matrix \Pi between the channels, define `std` as before, and then weight the correlation matrix by `std` to recover the covariance matrix. If v = [v_1, \ldots, v_n] is your vector of standard deviations, this looks like

\Sigma = \text{diag}(v) \, \Pi \, \text{diag}(v)

where \text{diag}(\cdot) denotes the diagonal matrix whose diagonal entries are the input vector.
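As a quick sanity check of this identity, here is a pure-NumPy sketch with a made-up v and \Pi (the specific numbers are arbitrary):

```python
import numpy as np

v = np.array([0.5, 1.0, 2.0])        # standard deviations
Pi = np.array([[1.0, 0.3, 0.1],
               [0.3, 1.0, 0.2],
               [0.1, 0.2, 1.0]])     # a fixed correlation matrix

Sigma = np.diag(v) @ Pi @ np.diag(v)

# Entrywise, Sigma[i, j] == v[i] * v[j] * Pi[i, j] ...
assert np.allclose(Sigma, np.outer(v, v) * Pi)
# ... and the diagonal recovers the variances.
assert np.allclose(np.diag(Sigma), v ** 2)
```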
In code this is something like:

```python
import pymc3 as pm
import theano.tensor as tt

correlation = ...  # your fixed correlation matrix

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sigma=1, shape=(n_channels,))
    nu = pm.Exponential('nu', 1 / (nu_mean - nu_min)) + nu_min
    std = pm.Uniform('std', lower=1e-1, upper=1e1, shape=(n_channels,))
    std_mtx = tt.nlinalg.diag(std)
    covariance_matrix = tt.nlinalg.matrix_dot(std_mtx, correlation, std_mtx)
    likelihood = pm.MvStudentT('modeled', mu=mu, nu=nu, cov=covariance_matrix,
                               observed=data)  # data: your observed channels
```
If you really wanted to, you could take things one step further and infer the entire covariance matrix; see this example for how to do that. But your problem may become too large: as you mentioned, as soon as you add in the covariance you can no longer parallelize across channels.
Hope this helps!