Ok so traditionally in ML you would use a validation set (or cross-validation) plus a final test set. The validation set is used to optimize the hyper-parameters of the algorithm. In the Bayesian setting, essentially the only hyper-parameters we might want to tune are those of the priors. But what you are essentially saying is that we don't really need a validation set here, because the model won't really overfit the data?
Can the stats we compute from our traces, like pymc3.stats.dic, pymc3.stats.bpic, pymc3.stats.waic and pymc3.stats.loo, be used to evaluate a model's performance on unseen data? I know that in the frequentist setting we can't really rely on performance statistics calculated on the training set. Is this also the case for Bayesian models?
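To make it concrete, this is roughly the workflow I have in mind (a made-up toy regression on PyMC3 3.x, just to illustrate; the data, priors, and variable names are placeholders, and I'm only showing waic and loo):

```python
import numpy as np
import pymc3 as pm

# hypothetical toy data, just a stand-in for a real training set
np.random.seed(42)
X = np.random.randn(100)
y = 2.0 * X + 0.5 * np.random.randn(100)

with pm.Model() as model:
    # the prior scales (sd=10, sd=1) are the kind of
    # "prior hyper-parameters" I'm referring to above
    alpha = pm.Normal("alpha", mu=0.0, sd=10.0)
    beta = pm.Normal("beta", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=1.0)
    mu = alpha + beta * X
    pm.Normal("obs", mu=mu, sd=sigma, observed=y)
    trace = pm.sample(1000, tune=1000)

# information criteria computed from the trace alone,
# i.e. using only the training data, no held-out set
print(pm.stats.waic(trace, model))
print(pm.stats.loo(trace, model))
```

So the question is whether numbers like these, computed only from the training data and the trace, can be trusted as estimates of out-of-sample predictive performance.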
Thanks,