Latent orthogonal factors

I have observed variables x and y, but I also want to estimate 2 latent factors that are orthogonal. Even though I pass a diagonal (scaled identity) covariance matrix, the fhats I get back are correlated.

This might be an identifiability problem, but I am still surprised by the 0.99 posterior correlation I get.

import numpy as np
import pymc as pm

# x and y are the observed data arrays; n is the number of observations
S = np.eye(2) * 2  # diagonal covariance, so the factors are uncorrelated a priori

with pm.Model() as model:
    a = pm.Normal('a', 0, 1)
    b = pm.Normal('b', 0, 1)
    sigma = pm.Gamma('sigma', 1, 1)
    fhat = pm.MvNormal('fhat', np.zeros(2), cov=S, shape=(n, 2))
    mu = a + b * x + fhat[:, 0] + fhat[:, 1]  # i.e. pm.math.sum(fhat, axis=1)
    pred = pm.Normal('pred', mu, sigma, observed=y)
    trace = pm.sample()

Hi,

I’m not sure I can help you achieve exactly what you want, but I can maybe shed a bit of light on why the current approach isn’t working.

When you specify

  fhat = pm.MvNormal('fhat', np.zeros(2), cov=S, shape=(n, 2))

you are only specifying a prior distribution. The two factors are uncorrelated in the prior, but that doesn’t mean they will be uncorrelated in the posterior. As you already hinted, the two latent variables you chose are completely non-identified: they have the same prior distribution (uncorrelated normals with the same variance), and they enter mu in exactly the same way, only through the sum fhat[:, 0] + fhat[:, 1]. Adding any constant to one component and subtracting it from the other leaves mu, and therefore the likelihood, completely unchanged. So it’s no surprise, I think, that they come out extremely highly correlated. I would actually have expected the correlation to be negative (when one is big, the other is small, so that their sum stays the same), but I may be mistaken about that.
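
If you want to see this in your own fit, you can read the posterior correlation straight out of the trace. A minimal sketch, assuming PyMC >= 4 (where pm.sample() returns an ArviZ InferenceData object) and reusing the trace from your model above:

import numpy as np

f = trace.posterior['fhat'].values           # shape: (chains, draws, n, 2)
f = f.reshape(-1, f.shape[-2], f.shape[-1])  # pool chains and draws
# correlation between the two factors for the first observation
corr = np.corrcoef(f[:, 0, 0], f[:, 0, 1])[0, 1]
print(f'posterior corr(fhat[0, 0], fhat[0, 1]) = {corr:.2f}')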

I hope that gives you a bit of a clue about why they’re correlated. Unfortunately I’m not entirely sure how to enforce the constraint you’re looking for; maybe someone else can chime in. One thing you could try, if your latent variables are one-dimensional, is to enforce an ordering on them (there’s a rough sketch of what I mean below). That won’t stop them from being correlated, but they’ll at least be identified.
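
Here’s the ordering idea as a minimal sketch, not a tested recipe. I’m assuming a recent PyMC (v4/v5), where the ordered transform lives at pm.distributions.transforms.ordered and constrains values along the last axis, and I’ve swapped the MvNormal for independent Normals with the same variance; the mu values of -1 and 1 are arbitrary starting points that satisfy the ordering:

import numpy as np
import pymc as pm

with pm.Model() as ordered_model:
    a = pm.Normal('a', 0, 1)
    b = pm.Normal('b', 0, 1)
    sigma = pm.Gamma('sigma', 1, 1)
    # same marginal variance of 2 as before, but the two components
    # are constrained so that fhat[i, 0] < fhat[i, 1] in every row i
    fhat = pm.Normal(
        'fhat', mu=np.array([-1.0, 1.0]), sigma=np.sqrt(2), shape=(n, 2),
        transform=pm.distributions.transforms.ordered,
    )
    mu = a + b * x + fhat[:, 0] + fhat[:, 1]
    pred = pm.Normal('pred', mu, sigma, observed=y)
    trace = pm.sample()

Note that the ordering only pins down which factor is which; the two components will still trade off against each other through their sum.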