Are my posterior predictive samples biased if I observe Y?

I’m trying to wrap my head around what you mean by “unbiased,” because that term has a specific meaning in statistics. If you’re asking whether LOO will tend to overestimate fit, it won’t. I suspect you’re confusing LOO with the jackknife and looking for jackknife-debiased estimates. If that’s the case: taking the mean of the posterior predictive does not return an estimate of the population mean that is unbiased in the formal statistical sense. This is a good thing, though. There’s a bias-variance tradeoff, and the mean of the posterior predictive automatically optimizes it to give you the prediction with the minimum mean squared error (MSE).* Outside of intro stats classes, where unbiased estimators are the norm, no conversation has ever gone like this:
“I have a problem – my estimates are all wildly wrong in different directions.”
“Ok, but do you know which direction they’re wrong in?”
“Nope.”
“Then it’s variance and not bias, so it’s OK.”
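To make the tradeoff concrete, here’s a minimal numpy sketch of the simplest case where it can be worked out exactly: a conjugate normal-normal model (data N(μ, σ²) with known σ, prior μ ~ N(0, τ²)). All the numeric values are made up for illustration. The posterior mean is a shrinkage estimator — biased, but with lower variance than the sample mean — and when the prior is reasonable its MSE is lower:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate normal-normal model: y ~ N(mu, sigma^2), prior mu ~ N(0, tau^2).
# sigma, tau, n, and true_mu are illustrative assumptions, not from the post.
sigma, tau, n = 1.0, 1.0, 5
true_mu = 0.5  # close to the prior mean, i.e. a "reasonable" prior

n_sims = 20_000
y = rng.normal(true_mu, sigma, size=(n_sims, n))
ybar = y.mean(axis=1)

# Posterior mean shrinks the sample mean toward the prior mean (0 here).
w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
post_mean = w * ybar

mse_flat = np.mean((ybar - true_mu) ** 2)       # unbiased (flat-prior) estimate
mse_post = np.mean((post_mean - true_mu) ** 2)  # biased, but lower variance

print(f"MSE of sample mean:    {mse_flat:.4f}")
print(f"MSE of posterior mean: {mse_post:.4f}")
```

The sample mean is unbiased with MSE ≈ σ²/n = 0.2 here, while the posterior mean is pulled slightly toward 0 (bias) but has smaller variance, and comes out ahead on MSE.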

If you really want unbiased estimates, you can get them by setting flat priors on everything, but then you give up the variance reduction that comes from being able to set reasonable priors.

*Note that it minimizes the mean squared error assuming your priors are in some sense “correct.” If you put in a really dumb prior, you’ll get really dumb point estimates. There’s no free lunch here – any prior that reduces the variance has to risk introducing bias. A flat prior minimizes the worst-case scenario, but it will have an MSE higher than any prior that concentrates probability in a region even remotely close to the true parameter value.
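The “really dumb prior” failure mode can be shown with the same conjugate normal-normal sketch as above, with assumed values chosen so the prior is confidently wrong: a tight prior centered at 0 when the true mean is 5. The shrinkage now drags the estimate far from the truth, and the flat-prior (sample mean) estimate wins on MSE:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same conjugate normal-normal model, but the prior mu ~ N(0, tau^2) is
# tight and centered far from the truth. Values are illustrative assumptions.
sigma, tau, n = 1.0, 0.5, 5
true_mu = 5.0  # nowhere near the prior mean of 0

n_sims = 20_000
y = rng.normal(true_mu, sigma, size=(n_sims, n))
ybar = y.mean(axis=1)

w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)  # shrinkage weight
post_mean = w * ybar                              # shrinks hard toward 0

mse_flat = np.mean((ybar - true_mu) ** 2)   # still ~ sigma^2 / n
mse_post = np.mean((post_mean - true_mu) ** 2)

print(f"MSE of sample mean (flat prior):   {mse_flat:.3f}")
print(f"MSE of posterior mean (bad prior): {mse_post:.3f}")
```

Here the squared bias from shrinking toward the wrong place swamps any variance savings, which is exactly the risk the footnote describes.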