Behaviour of Gibbs kernel seems off

I’m not exactly sure what the problem is, but it looks like it may be coming from the warp function rather than from the implementation of the Gibbs kernel itself?

One thing I tried was taking the inverse logistic of your samples instead of exponentiating them, but that gave me the same problem. I have a sneaking suspicion it might be caused by the exponentials amplifying numerical errors somewhere.

While PyMC3’s pm.MvNormal will fail quite loudly if you try to pass a non-positive-semi-definite matrix as a covariance, I’ve found that NumPy’s np.random.multivariate_normal will still produce samples and only warn you. Strictly speaking, a covariance that isn’t positive semi-definite does break the mathematics, but if I add a tiny amount of jitter to the diagonal and replace your last block of code with the following, I’m able to draw samples:

```python
import numpy as np
import matplotlib.pyplot as plt

# X and K are as defined earlier in the thread; the np.eye term adds a
# small diagonal "jitter" so the covariance stays positive semi-definite
plt.figure(figsize=(14, 4))
plt.plot(X, np.random.multivariate_normal(
    mean=np.zeros(K.shape[0]), cov=K + np.eye(1000) * 1e-9, size=3).T)
plt.title("Samples from the GP prior")
plt.ylabel("y")
plt.xlabel("X");
```
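In case it’s useful, here’s a minimal sketch of why the jitter helps. I’m using a plain squared-exponential kernel as a stand-in for your Gibbs kernel (so the exact numbers are an assumption, not your setup): a dense Gram matrix over many points has eigenvalues that decay toward zero, and rounding can push the smallest ones slightly negative, which makes a Cholesky factorization (the usual positive-definiteness check) fail until you add the jitter:

```python
import numpy as np

x = np.linspace(0, 10, 1000)[:, None]
# Squared-exponential Gram matrix, standing in for the Gibbs kernel
K = np.exp(-0.5 * (x - x.T) ** 2)

def has_cholesky(M):
    """True if M factorizes, i.e. is numerically positive definite."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

# The raw Gram matrix typically fails due to tiny negative eigenvalues
# introduced by floating-point rounding; jitter on the diagonal fixes it.
print(has_cholesky(K))
print(has_cholesky(K + np.eye(1000) * 1e-9))
```

The 1e-9 is far larger than the rounding error (on the order of machine epsilon times the largest eigenvalue) but far smaller than the signal variance, so it doesn’t visibly change the samples.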

Hope this helps!
