Adding two covariance matrices of different sizes

I am constructing a Gaussian Process regression model. However, the covariance function in my model is the sum of two covariance matrices of different sizes. For example:

l_eta = pm.Uniform('l_eta', lower=0, upper=10, shape=7)
l_delta = pm.Uniform('l_delta', lower=0, upper=10, shape=4)
sigma_eta = (1/lambda_eta) * pm.gp.cov.ExpQuad(input_dim=(p+q), lengthscales=l_eta)
sigma_delta = (1/lambda_delta) * pm.gp.cov.ExpQuad(input_dim=p, lengthscales=l_delta)


where sigma_eta is 387x387 and sigma_delta is 27x27. I would like to add sigma_delta to the top-left corner of sigma_eta to create the covariance matrix

cov = sigma_eta
cov[0:27, 0:27] = cov[0:27, 0:27] + sigma_delta


However, I get TypeError: 'TensorVariable' object does not support item assignment

Therefore I tried padding sigma_delta with zeros (using theano.tensor.zeros and theano.tensor.concatenate) so that it has the same shape as sigma_eta before the summation.
However, I get the error theano.tensor.var.AsTensorError: ('Cannot convert <pymc3.gp.cov.Prod object at 0x11cb69550> to TensorType', <class 'pymc3.gp.cov.Prod'>).

Therefore I would like to ask if there is a way to add two covariance matrices of different sizes in PyMC3 while inside the pm.Model() block.

You need to use the set_subtensor function here. I think this is because Theano tensors do not support in-place item assignment, so you need to add a new node to the computational graph that performs this replacement.
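For reference, here is a minimal NumPy sketch of what that set_subtensor call computes (set_subtensor returns a new tensor rather than mutating its input in place; the function name below is made up for illustration):

```python
import numpy as np

def add_to_corner(big, small):
    """Return a copy of `big` with `small` added to its top-left block,
    mirroring tt.set_subtensor(cov[0:n, 0:n], cov[0:n, 0:n] + small)."""
    n = small.shape[0]
    out = big.copy()  # like set_subtensor, leave the original untouched
    out[:n, :n] += small
    return out
```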

Out of curiosity, what is the use case? Are you trying to transition the covariance function from one part of the domain to another?

Thank you for the recommendation. I will try that out and get back on the result.
I am actually trying to replicate the Gaussian Process model in this paper in PyMC3

If you look at page 9 of the paper, it requires summing covariance functions that produce matrices of different sizes. Since we are on this, I have a follow-up question. PyMC3 provides the function ExpQuad(input_dim, lengthscales), which makes it easy to model the exponentiated quadratic covariance function. My question is: when lengthscales has more than one dimension, how is the covariance computed? Is it
cov(x, x') = exp(-\sum_{k=1}^p \frac{(x_k - x'_k)^2}{2\ell[k]^2})
where p is the number of dimensions of lengthscales?

Oh ok, thanks for sharing! Always looking for more covariance functions to add.

Regarding your 2nd question, yes that’s right. This is true for all the stationary covariance functions. For example you can do:

with model:
    el = pm.HalfCauchy("el", beta=[5, 5], shape=(2,))
    cov = pm.gp.cov.ExpQuad(2, el)
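As a sanity check, the ARD form above can be sketched in plain NumPy (the function name is made up; this is not the PyMC3 implementation, just the same formula):

```python
import numpy as np

def expquad_ard(X, Xs, ls):
    """Exponentiated quadratic with one lengthscale per input dimension:
    k(x, x') = exp(-sum_k (x_k - x'_k)^2 / (2 * ls[k]**2))."""
    d = (X[:, None, :] - Xs[None, :, :]) / ls  # per-dimension scaled differences
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))
```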

Thank you. The covariance functions definitely made life easier. I especially appreciate the flexibility to use higher-dimensional lengthscales. As I have just started using theano, could you help check whether I am using the set_subtensor function correctly and that it is doing what I expect?

import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    l_eta = pm.Uniform('l_eta', lower=0, upper=10, shape=7)
    l_delta = pm.Uniform('l_delta', lower=0, upper=10, shape=4)
    sigma_eta = (1/lambda_eta) * pm.gp.cov.ExpQuad(input_dim=(p+q), lengthscales=l_eta)
    sigma_delta = (1/lambda_delta) * pm.gp.cov.ExpQuad(input_dim=p, lengthscales=l_delta)

    # Here x1 is an m x 7 matrix and x2 is an n x 7 matrix, where m > n
    cov = sigma_eta(x1)
    new_cov = tt.set_subtensor(cov[0:n, 0:n], cov[0:n, 0:n] + sigma_delta(x2))

    y_obs = pm.MvNormal('y_obs', mu=0, cov=new_cov, observed=y)


Thank you in advance

That looks like it should do what you want. When I try it, it looks like it does the right thing. Is it working for you? You can verify pretty easily by plotting cov and new_cov to see whether the corner has changed.

K = cov(x).eval();  plt.imshow(K);

Thanks! It's finally working.
I actually tried running the same code using pm.gp.GP. However, I got the following error:
ValueError: cov_func must be a subclass of Covariance
The only change was using pm.gp.GP instead of pm.MvNormal. I am trying to use pm.gp.GP because it would make it much easier to use pm.gp.sample_gp to draw realizations of the GP from the predictive distribution. Does anyone have any recommendations or workarounds?

Great! It’s not working because the GP class takes the covariance function, not the covariance matrix, which is what you have after evaluating it at x. The reason is that sample_gp needs to re-evaluate the covariance function at the x values you want to predict at.

The best solution would be to write your own subclass of Covariance for this. See Linear for a good example. You would need to implement __init__ and full. If you have any questions or issues doing so, don’t hesitate to ask. In the meantime, I’ll see if we can add something to the library that would do the trick here.
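For what it's worth, the full method of such a subclass would essentially compute the block sum below. This is only a NumPy sketch of that logic under assumed names (corner_sum_full, kernels passed as plain callables), not a working Covariance subclass:

```python
import numpy as np

def corner_sum_full(k_eta, k_delta, X, X_sub):
    """Evaluate the large kernel on all inputs, then add the small kernel
    to the top-left n x n block (the same operation done above with
    set_subtensor). k_eta and k_delta are callables returning matrices."""
    K = k_eta(X, X)
    n = X_sub.shape[0]
    K[:n, :n] += k_delta(X_sub, X_sub)
    return K
```

In a real subclass, this body would go in full(self, X, Xs), with the two component covariance functions stored in __init__.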

Thank you Bill for your prompt response and for all the information. It has helped me greatly. I’ll work on implementing a new covariance class and post something here once I get it to work. Cheers