Linear model where the predictor is also being sampled?

I’m trying to implement what seems like a fairly simple model, and am having some issues.

I’m pretty new to pymc3 – any help would be really appreciated!



The data are split into groups, indexed by i:

  • a_i: shape (n_i, 2)
  • b_i: shape (m_i, 2)
  • y_i


Linear model with:

y_i \sim N(\alpha + \beta x_i, \sigma)

Where x_i is the Euclidean distance between samples from two multivariate normal distributions:

x_i = ||a_i - b_i||

a_i \sim MN(\mu_{ai}, cov_{ai})

b_i \sim MN(\mu_{bi}, cov_{bi})

Using uninformative priors.


I think what needs to happen at every sampling step is:

  • values are drawn from the two multivariate normal distributions, which have a_i and b_i as observed data, respectively.
  • x_i is computed from the values that are drawn.
  • N(y_i | \alpha + \beta x_i, \sigma) is computed.
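As a sanity check, the generative process those three steps describe can be forward-simulated with plain numpy (all parameter values below are made up purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)

# Made-up parameter values, for illustration only.
alpha, beta, sigma = 1.0, 2.0, 0.5
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.ones(2), np.eye(2)

# One "sampling step" for a single group i:
a = rng.multivariate_normal(mu_a, cov_a)  # draw from MN(mu_ai, cov_ai)
b = rng.multivariate_normal(mu_b, cov_b)  # draw from MN(mu_bi, cov_bi)
x = np.linalg.norm(a - b)                 # x_i = ||a_i - b_i||
y = rng.normal(alpha + beta * x, sigma)   # y_i ~ N(alpha + beta * x_i, sigma)
```

The difficulty described below is getting pymc3 to repeat this draw-then-compute step inside MCMC when a_i and b_i are observed.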

What I have tried to implement won’t do this. For one thing, the function mvNormal does not return a sample from the pm.MvNormal instance it creates. And these pm.MvNormal instances do not have a .random() method, because they have observed data.

It seems like this would be possible if \mu_{ai}, cov_{ai}, \mu_{bi}, and cov_{bi} were estimated in a separate step, but there should be a way of combining it all into a single model.

Does this model make sense? And if so, how should it be implemented?

Attempted implementation

import pymc3 as pm
import numpy as np

def mvNormal(observed, pre):
    """Model observed data with a multivariate normal distribution.

        observed (np.array): Observed data. Shape [n, 2].
        pre (str): Prefix for variable names.
    """
    mu = pm.Normal(pre + 'mu', mu=np.zeros(2), sd=np.ones(2) * 10, shape=(2,))
    sd_dist = pm.HalfCauchy.dist(beta=2.5, shape=(2,))
    chol_packed = pm.LKJCholeskyCov(pre + 'cp', n=2, eta=2, sd_dist=sd_dist)
    chol = pm.expand_packed_triangular(2, chol_packed)
    return pm.MvNormal(pre + 'vals', mu=mu, chol=chol, observed=observed)

def distsMvNormals(a, b, pre):
    """Distribution of distances between samples from 2 multivariate normals.

        a (np.array): Observed data. Shape [n, 2].
        b (np.array): Observed data. Shape [m, 2].
        pre (str): Prefix for variable names.

    Returns the Euclidean distance between samples of a and b.
    """
    var_a = mvNormal(a, pre + 'mvn-a-')
    var_b = mvNormal(b, pre + 'mvn-b-')
    return pm.Deterministic(pre + 'dist', (var_a - var_b).norm(L=2))

with pm.Model() as model:
    n, m = 50, 50
    k = 3
    x = pm.math.stack([
        distsMvNormals(np.random.randn(n, 2), np.random.randn(m, 2), "a-"),
        distsMvNormals(np.random.randn(n, 2), np.random.randn(m, 2), "b-"),
        distsMvNormals(np.random.randn(n, 2), np.random.randn(m, 2), "c-"),
    ])
    alpha = pm.Normal('alpha', mu=0, sd=10)
    beta = pm.Normal('beta', mu=0, sd=10)
    sigma = pm.HalfNormal('sigma', sd=10)
    y = pm.Normal('y', mu=alpha + beta * x, sd=sigma, observed=np.random.randn(k))
    pm.sample(100)  # Low n. draws for testing

I don’t think this can be done straightforwardly, as you are trying to generate a random sample from one of the nodes in the Bayesian model and use that sample as observed data. A similar discussion comes to mind which you can have a look at: Call MvNormal.random() method inside model

In your case, since a_i and b_i are observed, x_i is observed as well (not stochastic), which means the model becomes 3 separate ones: y_i ~ N( , ), a_i ~ MN( ), and b_i ~ MN( ).
What I think you should do is x_i = |\mu_{ai} - \mu_{bi}|, or x_i = |a_i' - b_i'| with a_i \sim N(a_i', sd) (where sd could be a small value). Once all the unknowns are latent, you can do whatever computation with them within the model, assign the observations, and just let the sampler take care of the sampling part. For a similar idea, see the first part of
