Gaussian process using noisy data training points

Great! One thing worth mentioning: collapsing the posterior predictions into their means discards the uncertainty reflected in your posterior. You can instead sum within each draw and then summarize, and both the result and the interpretation will differ. If the sum of the \hat{y} is the outcome of interest, you might be more interested in summing each trajectory:

np.sum(pred_samples['f_pred'], axis=1)

Then you can compute the mean and standard deviation of the predicted sums:

np.mean(np.sum(pred_samples['f_pred'], axis=1))
np.std(np.sum(pred_samples['f_pred'], axis=1))

Or you can grab percentiles or whatever else you might be interested in. But at that point you will at least have some reflection of the uncertainty.
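Putting it together, here is a minimal self-contained sketch. The synthetic `pred_samples` dict is a stand-in for your actual posterior predictive draws (assumed shape `(n_draws, n_points)`, one row per trajectory):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for posterior predictive draws:
# 1000 draws of f evaluated at 50 prediction points.
pred_samples = {"f_pred": rng.normal(loc=1.0, scale=0.5, size=(1000, 50))}

# Sum within each trajectory: one predicted sum per posterior draw.
trajectory_sums = np.sum(pred_samples["f_pred"], axis=1)

# Posterior summaries of the predicted sum.
print(np.mean(trajectory_sums))                     # posterior mean of the sum
print(np.std(trajectory_sums))                      # posterior sd of the sum
print(np.percentile(trajectory_sums, [2.5, 97.5]))  # 95% interval
```

Because `trajectory_sums` is itself a set of posterior draws, any summary you take of it (mean, sd, quantiles) carries the posterior uncertainty through to the quantity you actually care about.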