Hi Bowen, have you been able to test your covariance kernel on its own? I tried to test the full method with:
import numpy as np
import pymc as pm
from scipy import stats

# sample data: time steps (Y) and random Bernoulli events (X), stacked into a (10, 2) matrix
X = stats.binom(n=1, p=0.1).rvs(size=10)
Y = np.arange(0, 10)
X = np.stack((Y, X))
X = X.T
# Priors
sig_g = pm.HalfCauchy.dist(beta=2.5)
l_rs = pm.Exponential.dist(lam=1/5)
# Covariance functions
cov_rs = RS_KERNEL(sig_g, l_rs, 2, active_dims=[0, 1])
pm.draw(cov_rs.full(X))
and ran into an error compiling the graph (the message is hard to interpret). These errors usually come from incompatible shapes or invalid operations somewhere in the graph. Similarly, I tried to just sample from the prior of the whole model and hit a similar compilation error (one idea for narrowing it down is sketched after the model code):
with pm.Model() as model:
    # Priors
    sig_g = pm.HalfCauchy('sig_g', beta=2.5)
    ALPHA = pm.Beta('ALPHA', alpha=1, beta=1)
    l_se = pm.Exponential('l_se', 1/20)
    l_rs = pm.Exponential('l_rs', 1/5)
    sigma_private = pm.HalfCauchy('sigma_private', beta=2.5)
    # Covariance functions
    cov_se = sig_g**2 * pm.gp.cov.ExpQuad(2, l_se, active_dims=[0])
    cov_rs = RS_KERNEL(sig_g, l_rs, 2, active_dims=[0, 1])
    cov_private_noise = pm.gp.cov.WhiteNoise(sigma_private)
    # Combined covariance
    K = ALPHA * cov_se + (1 - ALPHA) * cov_rs + cov_private_noise
    # Gaussian Process
    gp = pm.gp.Latent(cov_func=K)
    y_ = gp.prior('y', X=X)
    # Sampling
    # trace = pm.sample(1000, chains=2, tune=1000, target_accept=0.9)
    pm.draw(y_)
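One way I might isolate the error (just a sketch, assuming RS_KERNEL is defined in the session and keeps the constructor signature from your snippet): evaluate each covariance piece with fixed numbers in place of the .dist()/model priors, so the kernel itself is the only symbolic part left. .eval() then compiles just that subgraph and usually gives a shorter, more readable traceback:

import numpy as np
import pymc as pm
from scipy import stats

# same toy input: time steps in column 0, Bernoulli events in column 1
X = np.column_stack((np.arange(10.0), stats.binom(n=1, p=0.1).rvs(size=10)))

# known-good kernel as a baseline, with a fixed lengthscale instead of a prior
cov_se = pm.gp.cov.ExpQuad(2, ls=20.0, active_dims=[0])
print(cov_se.full(X).eval().shape)  # expect (10, 10)

# custom kernel with fixed sig_g and l_rs; if the mask loop builds a bad
# shape or op, the error should surface right here
cov_rs = RS_KERNEL(1.0, 5.0, 2, active_dims=[0, 1])
print(cov_rs.full(X).eval().shape)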
My goal was to figure out whether the for loop with the masks is behaving the way it is supposed to, but no luck until I can reproduce the kernel locally. Once it compiles, I was thinking of something like the checks below.
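This is only a rough sketch with made-up fixed hyperparameters and an illustrative tolerance, but any valid covariance matrix has to be symmetric and positive semi-definite no matter which entries the masks pick out, so a scrambled loop usually shows up in one of these:

import numpy as np

# tiny input where the event pattern is easy to reason about by hand
X_small = np.array([[0.0, 0.0],
                    [1.0, 1.0],
                    [2.0, 0.0],
                    [3.0, 1.0]])

K = RS_KERNEL(1.0, 5.0, 2, active_dims=[0, 1]).full(X_small).eval()

assert K.shape == (4, 4)
assert np.allclose(K, K.T)                     # covariance must be symmetric
assert np.all(np.linalg.eigvalsh(K) >= -1e-8)  # and positive semi-definite
print(K)  # eyeball the blocks the masks are supposed to produce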