@jessegrabowski thanks a lot for the quick reply and the link to the learning rate scheduler PR.
My thoughts on your two comments are below:
- Yes, you're absolutely right. This is something that has always bothered me a bit; I was hoping that for ADVI there would be a silver bullet I just wasn't aware of. Obviously not. Regarding the holdout data set: how would one evaluate on a holdout set during an ADVI iteration? Is that even technically possible with the current API? (One possible approach is sketched after this list.) Also, this strikes me as a bit of a mismatch with the "Bayesian approach", where I use all the data to arrive at a posterior.
- Thanks for mentioning the default convergence check that is implied by the "advi+adapt_diag" option.
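
To make the first question concrete, below is a rough sketch of what I have in mind: a custom callback with the `(approx, losses, i)` signature that periodically samples from the current approximation and scores a holdout set, passed to `fit()` next to an explicit `CheckParametersConvergence`. The toy model, the data, and the `HoldoutLogp` name are made up by me, and I'm not sure this is an intended use of the callback API, so please correct me if this is off.

```python
import numpy as np
import pymc as pm
from scipy import stats

rng = np.random.default_rng(0)
X_train, X_hold = rng.normal(size=100), rng.normal(size=30)
y_train = 1.0 + 2.0 * X_train + rng.normal(scale=0.5, size=100)
y_hold = 1.0 + 2.0 * X_hold + rng.normal(scale=0.5, size=30)

with pm.Model() as model:
    a = pm.Normal("a", 0, 1)
    b = pm.Normal("b", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("y", a + b * X_train, sigma, observed=y_train)


class HoldoutLogp:
    """Callback with the (approx, losses, i) signature expected by fit()."""

    def __init__(self, every=1_000, draws=200):
        self.every, self.draws, self.history = every, draws, []

    def __call__(self, approx, losses, i):
        if i % self.every:
            return
        # sample from the current approximation (recompiles a sampler, so keep `every` large)
        post = approx.sample(self.draws).posterior
        mu = post["a"].values[..., None] + post["b"].values[..., None] * X_hold
        sd = post["sigma"].values[..., None]
        # Monte Carlo estimate of the held-out log predictive density
        lpd = stats.norm.logpdf(y_hold, mu, sd).sum(-1).mean()
        self.history.append((i, lpd))


holdout_cb = HoldoutLogp()
with model:
    approx = pm.fit(
        30_000,
        method="advi",
        callbacks=[holdout_cb, pm.callbacks.CheckParametersConvergence(diff="absolute")],
    )
```

Even if something like this works mechanically, my conceptual question about mixing a holdout set with the fully Bayesian use of all the data still stands.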
Setting aside the convergence issue: assuming I want to re-use the result from a previous fit when approximating the posterior (for the same model) on an enlarged data set, is there any example that shows how I can do this with pymc, i.e. use the parameters from a previous fit as the initial point for the next fit?
I found something here, How to save fitted ADVI Result? - #3 by junpenglao. Is this still the recommended way to do this?
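
In other words, something along these lines is what I'm after (a rough sketch based on my reading of that thread, assuming `approx.params` exposes the mean-field `mu`/`rho` shared variables and that the parameter shapes don't change between fits; the toy model is made up):

```python
import numpy as np
import pymc as pm

def build_model(X, y):
    # same model structure, only the data changes
    with pm.Model() as model:
        a = pm.Normal("a", 0, 1)
        b = pm.Normal("b", 0, 1)
        sigma = pm.HalfNormal("sigma", 1)
        pm.Normal("y", a + b * X, sigma, observed=y)
    return model

rng = np.random.default_rng(0)
X1 = rng.normal(size=100)
y1 = 1.0 + 2.0 * X1 + rng.normal(scale=0.5, size=100)

# first fit
with build_model(X1, y1):
    advi1 = pm.ADVI()
    advi1.fit(20_000)

# persist the variational parameters (mu and rho of the mean-field Gaussian);
# these could also be pickled to disk as in the linked thread
saved = [p.get_value() for p in advi1.approx.params]

# later: enlarged data set, same model and same parameter shapes
X2 = np.concatenate([X1, rng.normal(size=50)])
y2 = np.concatenate([y1, 1.0 + 2.0 * X2[-50:] + rng.normal(scale=0.5, size=50)])

with build_model(X2, y2):
    advi2 = pm.ADVI()
    # warm start: copy the previous variational parameters into the new approximation
    for shared, value in zip(advi2.approx.params, saved):
        shared.set_value(value)
    advi2.fit(5_000)  # hopefully needs far fewer iterations than a cold start
```

Is copying the shared variables like this still sound, or is there a more official way (e.g. a `start` argument) these days?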
Assuming I have a hierarchical model whose posterior I approximated with ADVI, and I now get data on additional groups while the overall model structure stays the same: I'd like to re-use the approximation from my previous fit to inform the fit of the new model with the additional groups. Is that possible? Specifically, if I have group-specific slopes with partial pooling, I could re-use the parameters for the existing groups, and I could probably use the population-level slope to come up with a good initial guess for the parameters of the new groups, so that the next fit is a lot faster.
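
For illustration, something like this sketch is what I have in mind (again made up by me: the model and variable names are illustrative, and I'm assuming `pm.fit` still accepts a `start` point; since the shape of the group dimension changes, I only warm-start the initial point rather than copying the variational parameters directly):

```python
import numpy as np
import pymc as pm

def build_model(X, y, group_idx, n_groups):
    with pm.Model(coords={"group": np.arange(n_groups)}) as model:
        mu_b = pm.Normal("mu_b", 0, 1)                   # population-level slope
        sigma_b = pm.HalfNormal("sigma_b", 1)
        b = pm.Normal("b", mu_b, sigma_b, dims="group")  # partially pooled group slopes
        sigma = pm.HalfNormal("sigma", 1)
        pm.Normal("y", b[group_idx] * X, sigma, observed=y)
    return model

rng = np.random.default_rng(1)

# first fit: 4 groups
n_old = 4
X1 = rng.normal(size=200)
g1 = rng.integers(0, n_old, size=200)
y1 = rng.normal(2.0, 0.5, size=n_old)[g1] * X1 + rng.normal(scale=0.3, size=200)

with build_model(X1, y1, g1, n_old):
    approx1 = pm.fit(20_000, method="advi")
post1 = approx1.sample(1_000).posterior

# later: data for 2 additional groups arrives, model structure unchanged
n_new = 2
X_extra = rng.normal(size=80)
g_extra = rng.integers(n_old, n_old + n_new, size=80)
y_extra = rng.normal(2.0, 0.5, size=n_new)[g_extra - n_old] * X_extra + rng.normal(scale=0.3, size=80)

X2 = np.concatenate([X1, X_extra])
g2 = np.concatenate([g1, g_extra])
y2 = np.concatenate([y1, y_extra])

# start point: existing groups keep their previous posterior-mean slopes,
# new groups start at the previous population-level mean
mu_b_mean = float(post1["mu_b"].mean())
start = {
    "mu_b": mu_b_mean,
    "sigma_b": float(post1["sigma_b"].mean()),
    "sigma": float(post1["sigma"].mean()),
    "b": np.concatenate([post1["b"].mean(("chain", "draw")).values, np.full(n_new, mu_b_mean)]),
}

with build_model(X2, y2, g2, n_old + n_new):
    approx2 = pm.fit(10_000, method="advi", start=start)
```

Does this roughly match what you had in mind, or is there a better pattern for carrying information over to the new groups?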