Partially Observable Latent Variables

That’s a cool idea @jessegrabowski to use `conditional` there, I think that makes sense. Hopefully it’s not too slow; how much data are you dealing with? You can then plug your alpha into the likelihood.

Whether a GP is stationary or not just depends on the covariance function you choose, and there’s a covariance function that’s equivalent to a random walk (below). It’s definitely more efficient, though, not to use the covariance function / GP parameterization. You could try using it with `conditional`, but I think it might actually be equivalent to your original solution @mgilbert?

Here’s an example:

```python
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import pytensor.tensor as pt


class GRWCov(pm.gp.cov.Covariance):
    """Gaussian random walk (Brownian motion) covariance, k(s, t) = min(s, t)."""

    def __init__(self, input_dim=1, active_dims=None):
        super().__init__(input_dim=input_dim, active_dims=active_dims)

    def full(self, X, Xs=None):
        if Xs is None:
            Xs = X
        return pt.minimum(X, Xs.T)


x = np.arange(100)
with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", sigma=1)
    cov_func = sigma**2 * GRWCov(input_dim=1)

    gp = pm.gp.Latent(cov_func=cov_func)
    f = gp.prior("f", X=x[:, None])

    prior = pm.sample_prior_predictive()

# Plot a few prior draws; each should look like a random walk
f_draws = az.extract(prior, group="prior", var_names="f").data
plt.plot(x, f_draws[:, [1, 3, 5]]);
```
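As a quick sanity check of the equivalence claim (NumPy only, not part of the model above): a random walk is a cumulative sum of iid noise, and the cumulative-sum operator is a lower-triangular matrix of ones, so its covariance comes out to exactly min(s, t).

```python
import numpy as np

# A random walk f_t = eps_1 + ... + eps_t (unit-variance iid noise)
# can be written f = L @ eps, with L the lower-triangular ones matrix,
# so Cov(f) = L @ L.T. Elementwise this is min(s, t) (1-indexed steps).
n = 10
L = np.tril(np.ones((n, n)))
cov_rw = L @ L.T

t = np.arange(1, n + 1)
cov_min = np.minimum(t[:, None], t[None, :])

print(np.allclose(cov_rw, cov_min))  # True
```

This is also why the direct random-walk parameterization is cheaper: sampling via a cumulative sum is O(n), while the GP goes through a Cholesky factorization of the full covariance matrix.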

Another option is maybe using a Bernstein polynomial basis? The coefficients for those are (approximately) the y-values at the knots, so you could set a strong prior on the coefficients where you have the blue x’s.
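A minimal sketch of what that could look like (NumPy only; the grid, degree, and coefficient values are made up for illustration). The degree-n Bernstein basis on [0, 1] sums to one everywhere, the curve passes exactly through the first and last coefficients, and the interior coefficients act roughly as y-values at the knots i/n:

```python
import numpy as np
from math import comb


def bernstein_basis(x, n):
    # B[j, i] = C(n, i) * x_j**i * (1 - x_j)**(n - i), for i = 0..n
    i = np.arange(n + 1)
    binom = np.array([comb(n, k) for k in i])
    return binom * x[:, None] ** i * (1 - x[:, None]) ** (n - i)


x = np.linspace(0, 1, 101)
n = 8
B = bernstein_basis(x, n)

# The basis is a partition of unity, so coefficients behave like y-values
assert np.allclose(B.sum(axis=1), 1.0)

# Curve from hypothetical coefficients c; f(0) == c[0] and f(1) == c[-1]
c = np.linspace(0.0, 1.0, n + 1) ** 2
f = B @ c
assert np.isclose(f[0], c[0]) and np.isclose(f[-1], c[-1])
```

In a PyMC model you’d put a prior on `c` (tight Normals centered on the observed values where you have the blue x’s, vague elsewhere) and model the function as `pm.math.dot(B, c)`.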
