Updating priors and loss of information

I’m looking at the updating-priors notebook and wondering whether it’s possible to quantify the loss of information incurred by converting posteriors to priors. Given the potential compute speed-up from not having to retrain on all of the original data, this is obviously an attractive idea.

Specifically, let’s say I train a model and then update it at some constant frequency (daily, weekly or monthly). After a week, a month or a year of repeated updates, is it possible to estimate the error in my estimates? Is it possible to control that error? Is it possible that convergence could break down altogether?
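To make the question concrete, here is a minimal sketch of the kind of experiment I have in mind, in a setting where the ground truth is available: a conjugate Beta–Bernoulli model, where the exact all-data posterior is known in closed form. The lossy step — refitting a Beta to posterior samples by moment matching at each update — is just a stand-in for however the notebook reconstructs a prior from posterior samples; the KL divergence at the end is one possible measure of the accumulated information loss. All names and numbers here are made up for illustration.

```python
import numpy as np
from scipy.special import betaln, digamma

rng = np.random.default_rng(0)

def beta_kl(a1, b1, a2, b2):
    """KL( Beta(a1, b1) || Beta(a2, b2) ), in nats."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 + b2 - a1 - b1) * digamma(a1 + b1))

true_p = 0.3
n_batches, batch_size = 50, 20          # 50 "update cycles" of 20 obs each
data = rng.binomial(1, true_p, size=(n_batches, batch_size))

# Reference: exact conjugate updating on all data (no information loss).
a_exact, b_exact = 1.0, 1.0
# Sequential pipeline: after each batch, keep only samples from the
# posterior and refit a Beta by moment matching before the next update.
a_seq, b_seq = 1.0, 1.0
n_samples = 2_000                        # posterior samples kept per cycle

for batch in data:
    k, n = batch.sum(), batch.size
    a_exact, b_exact = a_exact + k, b_exact + (n - k)
    a_seq, b_seq = a_seq + k, b_seq + (n - k)
    # Lossy posterior -> prior conversion: samples, then moment matching.
    s = rng.beta(a_seq, b_seq, size=n_samples)
    m, v = s.mean(), s.var()
    common = m * (1 - m) / v - 1
    a_seq, b_seq = m * common, (1 - m) * common

kl = beta_kl(a_exact, b_exact, a_seq, b_seq)
mean_exact = a_exact / (a_exact + b_exact)
mean_seq = a_seq / (a_seq + b_seq)
print(f"exact posterior mean  = {mean_exact:.4f}")
print(f"sequential post. mean = {mean_seq:.4f}")
print(f"KL(exact || sequential) = {kl:.6f} nats")
```

In this toy setup one could then vary `n_samples` and the number of update cycles to see how fast the divergence from the exact posterior grows, which is essentially what I’m asking about: does the error accumulate unboundedly, and can it be kept below a target by spending more samples (or a richer prior family) per update?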