GP regularization effect

Thank you for the detailed answer Martin!

I started with this approach, but I had to change it pretty quickly because I was getting negative estimates due to the flexibility of the GPs when fitting the data. I can’t really have that, since my data are counts.
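For context, the usual workaround for this is to put the GP on a log scale and exponentiate, so the back-transformed estimates can never go negative. Here is a minimal numpy sketch of that idea (the data, kernel, and hyperparameters are all toy placeholders, not my actual model):

```python
import numpy as np

def rbf(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel (hypothetical choice for illustration)."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
counts = rng.poisson(lam=np.exp(np.sin(x) + 1.0))  # toy count data

# Fit the GP to log(counts + 1) instead of the raw counts.
y = np.log(counts + 1.0)
K = rbf(x, x) + 1e-2 * np.eye(len(x))   # add noise jitter on the diagonal
alpha = np.linalg.solve(K, y)

# Posterior mean on a prediction grid, computed in log space.
x_new = np.linspace(0, 10, 100)
mu_log = rbf(x_new, x) @ alpha

# Back-transform: exp(.) is strictly positive, so no negative estimates.
# (This estimates counts + 1; subtracting 1 and flooring at zero recovers
# an estimate of the counts themselves.)
mu = np.maximum(np.exp(mu_log) - 1.0, 0.0)
```

This is just the point-estimate version of the trick; a Poisson likelihood with a log link over a latent GP is the more principled equivalent, but it runs into exactly the latent-GP sparse-approximation limitation mentioned below.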

This was a copy-paste error, I am indeed always using counts. Thank you for noticing.

Do you think that this is a better alternative? I would say that it isn’t, because I am actually fitting my model to some subset of the data to recover hyperparameters, fitting it again to a different dataset (which includes the subset but is nonetheless significantly different) to recover parameters, and then conditioning on new data. It seems a bit tricky.

I also did some digging here, but there is currently no way to do sparse approximations with latent GPs.

I’ll check out this reference.

Thank you!
