How to use a TensorVariable in GP predict method

Hi Ominusliticus, is there any particular reason you want x_rv to be a PyMC random variable? A bit more description of your goals would help me understand the problem.

If you just want to draw a new vector of x’s from the uniform distribution and push them through to generate predictions, I think the simplest approach is:

import numpy as np
import pymc as pm

with pm.Model() as m0:
    # Matern 3/2 covariance over a 1-D input, lengthscale 10
    cov_func = pm.gp.cov.Matern32(1, ls=[10])
    gp = pm.gp.Marginal(cov_func=cov_func)
    # condition the GP on the observed (x, y) data
    f = gp.marginal_likelihood('f', X=x.reshape(-1, 1), y=y, sigma=1)

# draw the new inputs outside the model and reshape to a column matrix
x_rv = np.random.uniform(low=-10, high=10, size=100)
x_new = x_rv.reshape(-1, 1)
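The reshape matters here: the covariance functions in pm.gp expect a 2-D input of shape (n, input_dim), so the flat (100,) sample vector has to become a (100, 1) column matrix. A quick shape check in plain NumPy:

```python
import numpy as np

x_rv = np.random.uniform(low=-10, high=10, size=100)
x_new = x_rv.reshape(-1, 1)  # one row per point, one input dimension

print(x_rv.shape)   # (100,)
print(x_new.shape)  # (100, 1)

# indexing with [:, None] adds the same trailing axis
assert np.array_equal(x_new, x_rv[:, None])
```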

with m0:
    pred = gp.predict(Xnew=x_new, diag=True)
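As a side note, with diag=True the call returns the predictive mean and the pointwise predictive variances rather than the full covariance matrix, so you can unpack them as mu, var = pred. A minimal sketch of turning those into rough 95% bands (dummy arrays stand in for the real GP output here):

```python
import numpy as np

# stand-ins for: mu, var = gp.predict(Xnew=x_new, diag=True)
mu = np.zeros(100)   # predictive mean at the 100 new inputs
var = np.ones(100)   # pointwise predictive variances

# rough 95% intervals from mean and standard deviation
sd = np.sqrt(var)
lower, upper = mu - 1.96 * sd, mu + 1.96 * sd
```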

If you really need to avoid stepping out of the model context to generate your new samples, you could try the following. It’s a bit awkward and I suspect it won’t be very fast, but it works.

with pm.Model() as m0:
    cov_func = pm.gp.cov.Matern32(1, ls=[10])
    gp = pm.gp.Marginal(cov_func=cov_func)
    f = gp.marginal_likelihood('f', X=x.reshape(-1, 1), y=y, sigma=1)

    # define the new inputs as a PyMC random variable, then materialize
    # 100 draws and add a column axis so Xnew has shape (100, 1)
    x_rv = pm.Uniform('x_rv', -10, 10)
    x_new = pm.draw(x_rv, draws=100)[:, None]
    pred = gp.predict(Xnew=x_new, diag=True)

pm.draw() grabs samples from the uniform distribution, which seems to be what you’d want to pass on to predict.
