Is the problem that the model isn’t respecting the positivity constraint of the data? GPs are built from normal distributions, so by default they have no way to respect that constraint. You could:
- Use a likelihood function that’s strictly positive, such as pm.LogNormal, and use the (exponential of the) GP as the prior for that likelihood (see the first sketch below).
- Apply a transformation to your data that puts it on $\mathbb{R}$, model the transformed data, then apply the inverse transform to your model’s outputs. For example, if you’re working with prices, I’d model log returns instead of modeling prices directly (second sketch below).
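Here’s a minimal sketch of the first option using pm.gp.Latent; the data, priors, and variable names are placeholders, so treat it as a starting point rather than a recommended model:

```python
import numpy as np
import pymc as pm

# Placeholder data: 1-D inputs and strictly positive observations
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)[:, None]
y = np.exp(np.sin(X).ravel() + 0.1 * rng.standard_normal(50))

with pm.Model() as model:
    ell = pm.Gamma("ell", alpha=2, beta=1)
    eta = pm.HalfNormal("eta", sigma=1)
    cov = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell)

    # Unconstrained GP on the log scale
    gp = pm.gp.Latent(cov_func=cov)
    f = gp.prior("f", X=X)

    sigma = pm.HalfNormal("sigma", sigma=0.5)
    # LogNormal likelihood: exp(f) is the (strictly positive) median of y
    pm.LogNormal("y_obs", mu=f, sigma=sigma, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```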
I’d probably lean towards the second option, but that might be my frequentist training speaking.
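And a sketch of the second option with prices as the running example: model the log returns (which live on the real line) with an ordinary GP, then map predictions back to the price scale. The price series is made up, and I’m assuming a recent PyMC version where `marginal_likelihood` takes a `sigma` argument:

```python
import numpy as np
import pymc as pm

# Made-up positive price series; log returns are unconstrained
prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0, 102.1, 103.7, 104.2])
log_returns = np.diff(np.log(prices))
t = np.arange(len(log_returns), dtype=float)[:, None]

with pm.Model() as model:
    ell = pm.Gamma("ell", alpha=2, beta=1)
    eta = pm.HalfNormal("eta", sigma=0.1)
    cov = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell)

    # Plain GP regression with Gaussian noise is fine on this scale
    gp = pm.gp.Marginal(cov_func=cov)
    sigma = pm.HalfNormal("sigma", sigma=0.05)
    gp.marginal_likelihood("r_obs", X=t, y=log_returns, sigma=sigma)

    idata = pm.sample(1000, tune=1000)

# Inverse transform: any modelled/predicted return path r maps back to prices via
# price_t = price_0 * exp(cumsum(r))
```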