BLUF: is there a reason that gp.prior() should not take a mutable data tensor as input for X?

There is not much documentation on using gp.prior(), but it seems like you could use a mutable data object as input for the X values. However, I was getting a NotConstantValueError from constant_fold in pytensorf.py. I changed _slice in cov.py to disable this check:

def _slice(self, X, Xs=None):
    xdims = X.shape[-1]
    if isinstance(xdims, Variable):
        [xdims] = constant_fold([xdims], raise_not_constant=False)
    if self.input_dim != xdims:
        warnings.warn(
            f"Only {self.input_dim} column(s) out of {xdims} are"
            " being used to compute the covariance function. If this"
            " is not intended, increase 'input_dim' parameter to"
            " the number of columns to use. Ignore otherwise.",
            UserWarning,
        )
    X = pt.as_tensor_variable(X[:, self.active_dims])
    if Xs is not None:
        Xs = pt.as_tensor_variable(Xs[:, self.active_dims])
    return X, Xs

With HSGP it sure looks like I am able to update the X values with model.set_dim() and pm.set_data() and get a good update (after accounting for aligning the means of the X).

I think the check is there just to provide that helper warning message. If you're not changing the last dimension of X you could pass that info with x = pm.MutableData(..., shape=(None, 12)), for example, where with 12 you restrict yourself to always having that size for the last dimension and None allows the other dim to vary.

Then the constant_fold would work?

CC @bwengals, not sure if there's any reason to require a static shape?

For HSGP you can use an X = pm.MutableData, as described in the gp.prior_linearized docstring. For gp.prior and the other GP implementations, I guess you maybe "could" mutate the input data (I can't think of a reason to), but if you're doing it to get predictions from the GP it won't give you the right answer.

Is the wrong answer just because of the mean alignment?
Admittedly, I have only worked with your cherry blossom example, but when I input new year values extended out and just far enough back in time to maintain the mean year… I get (by eyeball) very similar results.
From your GitHub, using phi, sqrt_psd = gp.prior_linearized(X).

It could very well be a quirk that this happened to work out… just curious about what would lead to your intuition about getting the wrong answer from a projection using pm.set_data on an HSGP.
What I did instead:

x = df["year"].values

with pm.Model() as model:
    model.add_coord("year", df.year.tolist(), mutable=True)
    eta = pm.Exponential("eta", lam=0.25, shape=1)
    ell_params = pm.find_constrained_prior(
        pm.InverseGamma,
        lower=20, upper=200,
        init_guess={"alpha": 2, "beta": 20},
        mass=0.95,
    )
    ell = pm.InverseGamma("ell", **ell_params)
    # Specify covariance function and GP
    cov = eta[0]**2 * pm.gp.cov.Matern52(1, ls=ell)
    intercept = pm.Normal("intercept", mu=df["temp"].mean(), sigma=2 * df["temp"].std())
    gp = pm.gp.HSGP(m=[400], L=[1565.0], cov_func=cov, mean_func=pm.gp.mean.Constant(intercept))
    x_mu = pm.ConstantData("x_mu", np.mean(x))
    X = pm.MutableData(f"X_mu_{x_mu}", x[:, None])
    f = gp.prior("f", X=X, dims="year")
    mu = pm.Deterministic("mu", f, dims="year")
    sigma = pm.HalfNormal("sigma", sigma=2 * df["temp"].std())
    pm.Normal("y", mu=f, sigma=sigma, observed=df["temp"].values, shape=X.shape[0])

Yeah, it looks like what you're doing gave the right answer! The reason it's not recommended is that you definitely can't do this with any other, non-HSGP gp.prior implementation. In the code, HSGP.prior calls HSGP.prior_linearized, so as long as you handle that scaling of X correctly it'll work out. Nice!
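To spell out why the HSGP case works: the linearized prior is a fixed basis evaluated at X, times weights that don't depend on X. A minimal NumPy sketch of that construction (following the Hilbert-space approximation; matern52_psd and hsgp_basis are illustrative names, not PyMC API):

```python
import numpy as np

def matern52_psd(omega, eta, ell):
    # 1-D spectral density of eta**2 * Matern52(ls=ell)
    return eta**2 * (16.0 / 3.0) * 5.0**2.5 / ell**5 * (5.0 / ell**2 + omega**2) ** -3.0

def hsgp_basis(x_centered, m, L):
    # Laplacian eigenfunctions on [-L, L]; x must already be mean-centered
    j = np.arange(1, m + 1)
    sqrt_lam = j * np.pi / (2.0 * L)
    phi = np.sqrt(1.0 / L) * np.sin(sqrt_lam * (x_centered[:, None] + L))
    return phi, sqrt_lam

rng = np.random.default_rng(0)
x_train = np.linspace(800.0, 2015.0, 50)
x_mu = x_train.mean()                 # center used at model-build time
m, L = 200, 1565.0

phi, sqrt_lam = hsgp_basis(x_train - x_mu, m, L)
sqrt_psd = np.sqrt(matern52_psd(sqrt_lam, eta=1.0, ell=100.0))
beta = rng.standard_normal(m)         # stand-in for the fitted weights
f_train = phi @ (sqrt_psd * beta)

# "pm.set_data" amounts to re-evaluating the same basis at new inputs,
# provided they are centered with the ORIGINAL x_mu
x_new = np.linspace(700.0, 2100.0, 80)
phi_new, _ = hsgp_basis(x_new - x_mu, m, L)
f_new = phi_new @ (sqrt_psd * beta)   # same weights, new locations
```

Roughly: only phi depends on X, so swapping X gives consistent projections for HSGP, whereas the exact GP implementations build their covariance from X directly, so posterior draws conditioned on the old X don't transfer.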