Improve NUTS sampling rate in Gaussian Process when using custom Covariance

I think this looks like a perfect situation to try out the new HSGP approximation. I made a gist here implementing it for a similar data set.

There are a few differences between this and the GPy implementation, but the biggest one is that there's no need to form a big blocked covariance matrix; we can implement the hierarchical GP directly. Sampling should be much, much faster too. If you have a lot of GP replicates (f), there's a better implementation now that a sparse matrix dot product is available (credit @ricardoV94 and @tcapretto). I'll add that to the gist soon.
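In case a concrete sketch helps, here's roughly what that looks like with `pm.gp.HSGP` and `prior_linearized`: the basis is built once and shared across replicates, and each replicate only needs its own coefficient vector, so no blocked covariance matrix ever gets formed. This is not the gist's code; the data shapes, kernel, hyperpriors, and the `m`/`c` settings below are placeholder assumptions you'd tune for your own problem.

```python
import numpy as np
import pymc as pm

# Placeholder sizes/data: n_reps replicate GPs, each observed at the same n inputs
n, n_reps = 100, 5
X = np.linspace(0, 10, n)[:, None]
y = np.random.normal(size=(n_reps, n))  # stand-in for the real observations

with pm.Model() as model:
    # Hyperparameters shared across replicates -- this is the hierarchical part
    ell = pm.InverseGamma("ell", mu=2.0, sigma=2.0)
    eta = pm.HalfNormal("eta", sigma=1.0)
    cov_func = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell)

    # HSGP approximation; m (number of basis functions) and c (boundary factor)
    # are assumptions here and should be chosen from the lengthscale and domain
    gp = pm.gp.HSGP(m=[30], c=1.5, cov_func=cov_func)

    # Basis phi and spectral densities are computed once and shared,
    # so there is no big blocked covariance matrix anywhere
    phi, sqrt_psd = gp.prior_linearized(X - X.mean(axis=0))

    # Each replicate gets its own vector of basis coefficients (30 matches m above)
    beta = pm.Normal("beta", size=(n_reps, 30))
    f = pm.Deterministic("f", (beta * sqrt_psd) @ phi.T)  # shape (n_reps, n)

    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y_obs", mu=f, sigma=sigma, observed=y)

    idata = pm.sample()
```

With this parameterization the per-replicate GPs only add `n_reps * m` normal coefficients to the model, which is usually what makes NUTS so much faster than sampling the full covariance version.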

Just to make sure, how many GP replicates do you have, and how big is each GP? And are your GPs 1 or 2 dimensional?
