`conditional` and Deep Gaussian Processes

Saw the talk a couple of days ago; I'll take another look. On a related note, the GP API looks a bit strange to me. It seems to almost make predicting on new points an afterthought. From my understanding, `gp.conditional` needs to be called after `pm.sample`, which means that if we try to do anything involved with GPs (say, apply a response function to them, or split the output), we'd need to redefine those variables. For example:

import pymc as pm
import pytensor.tensor as pt

gps = []
latents = []
with pm.Model() as model:
    for i in range(...):
        gp = pm.gp.Latent(mean_func=..., cov_func=...)
        gps.append(gp)
        f = gp.prior(f'f_{i}', ...)
        latents.append(f)
    f = pt.stack(latents).T                # combine the latent GPs
    another_f = pm.math.exp(f)             # response function
    y = pm.Dirichlet('y_obs', a=another_f, observed=...)
    idata = pm.sample()
...

To predict on new points, we'd need quite a bit of duplication:

with model:
    more_fs = []
    for i, gp in enumerate(gps):
        ff = gp.conditional(f'f_star_{i}', Xnew)
        more_fs.append(ff)
    f_star = pt.stack(more_fs).T
    f_star_again = pm.math.exp(f_star)     # response function applied a second time
...
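
And for completeness, the actual prediction draw would then be something like this (assuming the trace from `pm.sample` above is stored as `idata`, and wrapping the transformed output in a `pm.Deterministic` so it has a name to sample; `f_star_pred` is just an illustrative name):

with model:
    # give the transformed output a name so sample_posterior_predictive can find it
    pm.Deterministic('f_star_pred', f_star_again)
    preds = pm.sample_posterior_predictive(idata, var_names=['f_star_pred'])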

I know it's a bit convoluted, but that's the best I can express it. For anything other than simple GPs there will be duplication. It'd be nice if we could just swap out the training inputs to make predictions, the way it works for other models.
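
For comparison, here's the kind of "swap the inputs" workflow I mean, with a plain linear regression standing in for the GP (a minimal sketch; the data and variable names are just illustrative):

import numpy as np
import pymc as pm

X_train = np.random.normal(size=100)
y_train = 2.0 * X_train + np.random.normal(size=100)

with pm.Model() as lm:
    X = pm.Data('X', X_train)              # mutable input container
    beta = pm.Normal('beta', 0.0, 1.0)
    sigma = pm.HalfNormal('sigma', 1.0)
    pm.Normal('y', mu=beta * X, sigma=sigma, observed=y_train, shape=X.shape)
    idata = pm.sample()

with lm:
    # predicting on new points is just a data swap, no model redefinition
    pm.set_data({'X': np.linspace(-3.0, 3.0, 50)})
    preds = pm.sample_posterior_predictive(idata, var_names=['y'])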