Hi,
I would like to achieve something akin to what one does in a frequentist analysis when exploring many models along a ridge or lasso solution path: each model is similar enough to the next one that the optimizer can be warm-started with the previous solution, and out-of-sample losses/scores for each model are computed by resampling, typically CV.
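Here is roughly what I mean on the frequentist side (a minimal sketch using scikit-learn's Lasso with warm_start over a decreasing penalty grid and plain k-fold CV; the data are just simulated for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta = np.zeros(50); beta[:5] = 2.0
y = X @ beta + rng.normal(size=200)

alphas = np.logspace(0, -3, 30)           # solution path, strongest penalty first
cv = KFold(n_splits=5, shuffle=True, random_state=0)

cv_mse = np.zeros(len(alphas))
for train, test in cv.split(X):
    model = Lasso(alpha=alphas[0], warm_start=True, max_iter=5000)
    for j, a in enumerate(alphas):
        model.set_params(alpha=a)
        # coordinate descent restarts from the previous model's coefficients
        model.fit(X[train], y[train])
        cv_mse[j] += np.mean((y[test] - model.predict(X[test])) ** 2) / cv.get_n_splits()

best_alpha = alphas[np.argmin(cv_mse)]
```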
So, in a Bayesian world:
- Is there any way to slightly perturb the priors (e.g. slightly change their variances) so that I can assume the MCMC sampler has almost converged and will get there in fewer iterations than usual? (See the Metropolis sketch below.)
- k-fold CV would be too expensive, so I guess I could keep some kind of running LOO that is updated as the sampler converges to each new posterior, perhaps changing smoothly in between. (See the importance-sampling LOO sketch below.)
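For the first point, here is a toy sketch of the warm-start idea with a hand-rolled random-walk Metropolis sampler on a normal mean (the prior scales, step size, and iteration counts are all made up for illustration; with Stan or PyMC the analogue would be passing the previous draws as initial values):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=50)         # toy data, known sd = 1

def log_post(theta, prior_sd):
    # N(0, prior_sd^2) prior on the mean, N(theta, 1) likelihood
    return -0.5 * (theta / prior_sd) ** 2 - 0.5 * np.sum((y - theta) ** 2)

def metropolis(prior_sd, n_iter, warmup, init, step=0.3):
    theta = init
    lp = log_post(theta, prior_sd)
    draws = []
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop, prior_sd)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= warmup:
            draws.append(theta)
    return np.array(draws)

# Full run for the first prior, then warm-started shorter runs
# for slightly perturbed prior scales.
prior_sds = [1.0, 1.1, 1.2, 1.3]
draws = metropolis(prior_sds[0], n_iter=5000, warmup=2000, init=0.0)
for sd in prior_sds[1:]:
    # start the chain at the last state under the previous prior
    # and use a much shorter warmup
    draws = metropolis(sd, n_iter=1500, warmup=200, init=draws[-1])
```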
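And for the second point, a sketch of what I have in mind for a "running" LOO: reuse the draws obtained under the previous prior, reweight them by the prior ratio to move them towards the perturbed posterior, and fold that into plain importance-sampling LOO (again a toy normal-mean model; in practice the log-weights would presumably go through PSIS, e.g. arviz's psislw, rather than being used raw):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.normal(2.0, 1.0, size=50)          # same toy setup: N(theta, 1) likelihood

# Pretend these are the draws we already have under the old N(0, sd_old^2) prior
# (drawn here from the exact conjugate posterior just to keep the sketch self-contained).
sd_old, sd_new = 1.0, 1.2
post_prec = len(y) + 1.0 / sd_old ** 2
draws = rng.normal(y.sum() / post_prec, np.sqrt(1.0 / post_prec), size=4000)

def elpd_loo_is(draws, y, sd_old, sd_new):
    # pointwise log-likelihoods: one row per draw, one column per observation
    loglik = norm.logpdf(y[None, :], loc=draws[:, None], scale=1.0)
    # prior ratio moves the draws from the old posterior towards the perturbed one
    log_prior_ratio = norm.logpdf(draws, 0.0, sd_new) - norm.logpdf(draws, 0.0, sd_old)
    # IS-LOO weights: prior ratio divided by the likelihood of the held-out point
    log_w = log_prior_ratio[:, None] - loglik
    log_w -= log_w.max(axis=0)              # stabilise before exponentiating
    w = np.exp(log_w)
    loo_i = np.log(np.sum(w * np.exp(loglik), axis=0) / np.sum(w, axis=0))
    return loo_i.sum()                      # estimated elpd_loo under the perturbed prior

print(elpd_loo_is(draws, y, sd_old, sd_new))
```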
Thanks!