Model multivariate normal with separate means, dimension mismatch error


I want to model a multivariate (bivariate in the example) normal with separate means and a covariance matrix, but somehow I just can’t get the dimensions to match. The model is relatively straightforward; here is the data-generating process:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

N = 200
# x1 and x2 are independent variables
x1 = np.random.randn(N)
x2 = np.random.randn(N)
α1, β1, α2, β2 = np.random.randn(4)

# the means of the joint distribution
μ1 = α1 + β1 * x1
μ2 = α2 + β2 * x2

# the covariance of the joint distribution
σ = 1, 2
Ω = np.array([[1, 0.5], [0.5, 1]])
Σ = np.diag(σ) @ Ω @ np.diag(σ)

# the error term 
ϵ = np.random.multivariate_normal(mean=(0, 0), cov=Σ, size=N)

# the dependent variable as bivariate normal
y = np.stack((μ1, μ2)).T + ϵ
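(As a side note, you can check the simulated covariance empirically; this is just a minimal NumPy sanity check, using a larger sample than N above so the estimate is tight.)

```python
import numpy as np

np.random.seed(42)
sigma = (1, 2)
Omega = np.array([[1, 0.5], [0.5, 1]])
Sigma = np.diag(sigma) @ Omega @ np.diag(sigma)

# Draw many samples so the sample covariance is a tight estimate of Sigma.
eps = np.random.multivariate_normal(mean=(0, 0), cov=Sigma, size=100_000)
print(np.cov(eps.T).round(2))  # close to [[1, 1], [1, 4]]
```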

Here is my model:

with pm.Model() as joint_model:
    a1 = pm.Normal('a1', 0, 1)
    b1 = pm.Normal('b1', 0, 1)
    a2 = pm.Normal('a2', 0, 1)
    b2 = pm.Normal('b2', 0, 1)

    mu1 = a1 + b1 * x1
    mu2 = a2 + b2 * x2

    sigma = pm.HalfCauchy('sigma', 1, shape=2)

    # build the covariance matrix
    C_triu = pm.LKJCorr('omega', n=2, p=2)
    C = tt.fill_diagonal(C_triu[np.zeros((2, 2), dtype=np.int64)], 1)
    sigma_diag = tt.nlinalg.diag(sigma)
    cov = tt.nlinalg.matrix_dot(sigma_diag, C, sigma_diag)

    joint_obs = pm.MvNormal('joint', mu=(mu1, mu2), cov=cov, observed=y)

The error message is ValueError: Input dimension mis-match. (input[0].shape[0] = 200, input[1].shape[0] = 2). I’m trying to mimic the data-generating process in my model, but I can’t really grasp how pm.MvNormal works, or more generally how the posterior is updated through this statement. I read a bit of the Theano tutorial but can’t relate it to what I’m doing here, especially how to align mu, cov, and the observed y. What are their expected shapes? Am I allowed to simply write mu=(mu1, mu2) here?

Here is the (very long) error message; the code above should be able to replicate it:

Traceback (most recent call last):

  File "<ipython-input-4-b3658e712eaf>", line 18, in <module>
    joint_obs = pm.MvNormal('joint', mu=(mu1, mu2), cov=cov, observed=y)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/pymc3/distributions/", line 39, in __new__
    return model.Var(name, dist, data, total_size)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/pymc3/", line 545, in Var
    total_size=total_size, model=self)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/pymc3/", line 970, in __init__
    self.logp_elemwiset = distribution.logp(data)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/pymc3/distributions/", line 200, in logp
    delta = value - mu

  File "/Users/oma/anaconda/lib/python3.6/site-packages/theano/tensor/", line 147, in __sub__
    return theano.tensor.basic.sub(self, other)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/theano/gof/", line 674, in __call__
    required = thunk()

  File "/Users/oma/anaconda/lib/python3.6/site-packages/theano/gof/", line 843, in rval

  File "/Users/oma/anaconda/lib/python3.6/site-packages/theano/gof/", line 1698, in __call__
    reraise(exc_type, exc_value, exc_trace)

  File "/Users/oma/anaconda/lib/python3.6/site-packages/", line 686, in reraise
    raise value

ValueError: Input dimension mis-match. (input[0].shape[0] = 200, input[1].shape[0] = 2)

This should do the trick; mu needs the same (N, 2) shape as the observed y:

mu = tt.stack([mu1, mu2]).T
joint_obs = pm.MvNormal('joint', mu=mu, cov=cov, observed=y)
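For anyone else puzzled by the shapes: observed y is (N, 2), and MvNormal’s logp subtracts mu from the observed values elementwise (that’s the delta = value - mu line in your traceback), so mu has to broadcast against (N, 2). A quick NumPy-only illustration, with random stand-ins for the model quantities:

```python
import numpy as np

N = 200
mu1 = np.random.randn(N)   # stand-in for a1 + b1 * x1
mu2 = np.random.randn(N)   # stand-in for a2 + b2 * x2
y = np.random.randn(N, 2)  # stand-in for the observed data

stacked = np.stack([mu1, mu2])  # shape (2, N): one row per component
mu = stacked.T                  # shape (N, 2): one row per observation

# This mirrors the `delta = value - mu` step inside MvNormal.logp;
# it only works once mu shares y's (N, 2) layout.
delta = y - mu
print(stacked.shape, mu.shape, delta.shape)  # (2, 200) (200, 2) (200, 2)
```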

You might also want to use pm.LKJCholeskyCov instead of pm.LKJCorr; it is usually way faster.


Thanks, it did!

I tried tt.stack and tt.concatenate but to no avail… The shape is so confusing to me!

I will look into pm.LKJCholeskyCov.

Edit: looking back now, I see it’s exactly the same expression as in the data-generating process, y = np.stack((μ1, μ2)).T + ϵ. No idea why it never occurred to me to try it in the model…
