Improve NUTS sampling speed in a Gaussian Process when using a custom covariance

I am trying to implement a Hierarchical Gaussian Process based on the GPy implementation here. The implementation works, but fitting to the data (NUTS sampling) is hit-or-miss, i.e., some datasets sample within a few minutes, but others take hours, even days, to finish. How can I improve the performance?

I think this looks like a perfect situation to try out the new HSGP approximation. I made a gist here implementing it for a similar data set.
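For context, since HSGP may be new to some readers: the core idea of the approximation (following Solin & Särkkä, and Riutort-Mayol et al., which the PyMC implementation is based on) is to replace the exact GP prior with a finite basis expansion, roughly:

$$
f(x) \approx \sum_{j=1}^{m} \sqrt{S_\theta\!\left(\sqrt{\lambda_j}\right)}\,\phi_j(x)\,\beta_j, \qquad \beta_j \sim \mathcal{N}(0, 1),
$$

where the $\phi_j$ are fixed Laplacian eigenfunctions on a bounded domain, the $\lambda_j$ are the matching eigenvalues, and $S_\theta$ is the spectral density of the covariance function. Because the basis does not depend on the kernel hyperparameters, each NUTS step costs roughly $\mathcal{O}(nm + m)$ instead of the $\mathcal{O}(n^3)$ Cholesky of an exact GP.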

There are a few differences between this and the GPy implementation, but the biggest one is that there’s no need to form a big blocked covariance matrix; we can implement the hierarchical GP directly. Sampling should be much, much faster too. If you have a lot of GP replicates (f), there’s a better implementation now that there is a sparse matrix dot product (credit @ricardoV94 and @tcapretto). I’ll add that to the gist soon.
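To make "implement the hierarchical GP directly" concrete, here is a minimal sketch (not the gist itself) of a shared parent GP plus per-replicate offset GPs using `pm.gp.HSGP`. The data grid, priors, basis size `m=[30]`, and boundary factor `c=1.5` are placeholder assumptions:

```python
import numpy as np
import pymc as pm

# Placeholder data: n_reps replicates observed on a common 1-D grid.
n_reps, n_points = 5, 150
X = np.linspace(0, 10, n_points)[:, None]
y_obs = np.random.normal(size=(n_reps, n_points))  # stand-in for real data

with pm.Model() as model:
    ell = pm.InverseGamma("ell", alpha=5, beta=5)  # shared lengthscale
    eta_g = pm.HalfNormal("eta_g", sigma=1.0)      # amplitude of the shared GP
    eta_f = pm.HalfNormal("eta_f", sigma=0.5)      # amplitude of the offsets

    # Shared "parent" GP, approximated with 30 basis functions.
    cov_g = eta_g**2 * pm.gp.cov.ExpQuad(1, ls=ell)
    gp_g = pm.gp.HSGP(m=[30], c=1.5, cov_func=cov_g)
    g = gp_g.prior("g", X=X)

    # Each replicate is the shared GP plus its own HSGP offset;
    # no blocked covariance matrix is ever formed.
    cov_f = eta_f**2 * pm.gp.cov.ExpQuad(1, ls=ell)
    mu_rows = []
    for i in range(n_reps):
        gp_i = pm.gp.HSGP(m=[30], c=1.5, cov_func=cov_f)
        mu_rows.append(g + gp_i.prior(f"f_{i}", X=X))

    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y", mu=pm.math.stack(mu_rows), sigma=sigma, observed=y_obs)

    idata = pm.sample()
```

With only a handful of replicates the Python loop is fine; the sparse dot-product trick mentioned above mainly pays off when the number of replicates gets large.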

Just to make sure: how many GP replicates do you have, and how big is each GP? And are your GPs 1- or 2-dimensional?


Thanks for the reply!
I’m not familiar with HSGPs, and I think I have to read up on them first. The data I am trying to model don’t require too many replicates (from 3 to 10), and each one will have about 100 to 200 data points.