Can MCMC sampling be used in lieu of cross-validation for model evaluation?

Currently I’m building a marketing mix model using the PyMC module. My supervisor has raised a technical PyMC-related question, and I’m looking for concrete answers. The question is: “The usual practice in ML is to split data into train/test/validation sets, but in the PyMC Bayesian setting, MCMC sampling is part of model fitting and in some sense plays the role of validation. In this setting, how important is it to do k-fold cross-validation separately for model evaluation? Is MCMC sampling carried out in lieu of cross-validation?”

Welcome!

My understanding is that Vehtari, Gelman, and Gabry (2017), “Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC”, is currently the standard reference on how to do something like cross-validation with Bayesian models. This functionality is implemented in ArviZ, in particular in the model-check plots as well as in methods like waic() and loo().
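
For concreteness, here is a minimal sketch of what that looks like in practice, with a toy regression standing in for a real marketing mix model (assumes PyMC >= 4 and ArviZ; the data and variable names are illustrative). The key point is that a single MCMC fit already yields the pointwise log-likelihood, from which `az.loo()` computes PSIS-LOO, an approximation to leave-one-out cross-validation, without refitting the model k times:

```python
import numpy as np
import pymc as pm
import arviz as az

# Toy data standing in for real MMM inputs
rng = np.random.default_rng(42)
X = rng.normal(size=100)
y = 2.0 * X + rng.normal(scale=0.5, size=100)

with pm.Model() as model:
    beta = pm.Normal("beta", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("obs", mu=beta * X, sigma=sigma, observed=y)
    # Request the pointwise log-likelihood that loo()/waic() need
    idata = pm.sample(idata_kwargs={"log_likelihood": True})

# PSIS-LOO: approximate leave-one-out CV from the single MCMC fit
print(az.loo(idata))
# WAIC: widely applicable information criterion, an alternative estimate
print(az.waic(idata))
```

If the LOO output warns about high Pareto k values, the importance-sampling approximation is unreliable for those observations, and that is exactly the case where falling back to an explicit k-fold refit is still worthwhile.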