Hi everyone!
I am trying to set up a model that gives me independent GP priors on the columns of a matrix. The model I have so far is the following, where `m` is the number of columns and `nan_mask` is a boolean mask marking the missing data:
```python
with self.model as pmf:
    # One lengthscale per active dimension (ARD)
    lengthscale = list(2.0 * np.ones((m,)))
    cov = pm.gp.cov.ExpQuad(input_dim=m, ls=lengthscale, active_dims=list(range(0, m)))
    # Add white noise to stabilise
    cov += pm.gp.cov.WhiteNoise(1e-6)
    # GP latent variable model, assume zero mean
    gp = pm.gp.Latent(cov_func=cov)
    # Inputs: the same grid on [0, 1] repeated for each of the m columns
    x = np.tile(np.linspace(0, 1, m)[:, None], (1, m))
    F = gp.prior("F", X=x, shape=(m, m))
    # Gaussian likelihood on the observed (non-missing) entries
    R = pm.Normal("R", mu=F[~nan_mask], tau=self.alpha,
                  observed=self.data[~nan_mask])
```
According to my understanding, using an individual lengthscale for each active dimension of the covariance function should give me an individual prior for each of the m input dimensions. These priors are collected in F: the way I modelled it, F should contain m priors, one for each of the m inputs of length m. (I set this up following this homepage…)
However, is there a way to verify that these really are independent GP priors on each input x? (The reason I ask is that the results I get with this model are worse than expected, so I am wondering whether the model, despite converging, is not quite the one I want to use…)
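To make precise what I am after, here is a plain-NumPy sketch of what I intend (a hand-rolled ExpQuad kernel instead of `pm.gp.cov.ExpQuad`; all names here are illustrative, not PyMC API): m column GPs that are independent by construction, plus an empirical check that correlations within a column are strong while cross-column correlations vanish.

```python
import numpy as np

def expquad(x, ls):
    # 1-D squared-exponential (ExpQuad) kernel matrix with lengthscale ls
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

m = 10
rng = np.random.default_rng(0)
x = np.linspace(0, 1, m)
K = expquad(x, ls=2.0) + 1e-6 * np.eye(m)  # same jitter as WhiteNoise(1e-6)
L = np.linalg.cholesky(K)

# m independent column GPs: column j of each draw is L @ z_j with z_j ~ N(0, I)
n_draws = 2000
Z = rng.standard_normal((n_draws, m, m))
F = L @ Z  # shape (n_draws, m, m); columns independent by construction

# Empirical check over prior draws:
# entries in the same column should be strongly correlated,
# entries in different columns should be (near) uncorrelated
same_col = np.corrcoef(F[:, 0, 0], F[:, 1, 0])[0, 1]
cross_col = np.corrcoef(F[:, 0, 0], F[:, 0, 1])[0, 1]
print(same_col, cross_col)
```

The same Monte Carlo check could be run on prior draws of F from the PyMC model itself: if the columns are truly independent, the empirical cross-column correlations should shrink toward zero as the number of draws grows.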
Thanks in advance for any comments/suggestions!