Are you thinking about the data changing over time and concepts drifting away from the original training data? Or are you changing the model structure and wanting to gauge the effect of those changes?
If the former, I quite like doing coverage (i.e. calibration) checks of the posterior predictive (PPC) against observations. Changes in coverage on holdout sets and/or new datasets over time can indicate drift.
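To make that concrete, here's a minimal sketch of what I mean by a coverage check, assuming you already have posterior predictive draws for a holdout set (the names `ppc_draws` and `y_holdout` are just placeholders for whatever your sampler gives you):

```python
import numpy as np

def empirical_coverage(ppc_draws, y_holdout, interval=0.9):
    """Fraction of holdout observations falling inside the central
    posterior predictive interval of the given width.
    ppc_draws: array of shape (n_draws, n_obs); y_holdout: shape (n_obs,)."""
    alpha = (1.0 - interval) / 2.0
    lower = np.quantile(ppc_draws, alpha, axis=0)
    upper = np.quantile(ppc_draws, 1.0 - alpha, axis=0)
    inside = (y_holdout >= lower) & (y_holdout <= upper)
    return inside.mean()

# Toy example with synthetic draws: coverage should sit near 0.9 for a
# well-calibrated model; a persistent drop on newer data suggests drift.
rng = np.random.default_rng(0)
y_holdout = rng.normal(size=200)
ppc_draws = rng.normal(size=(1000, 200))
print(f"90% interval coverage: {empirical_coverage(ppc_draws, y_holdout):.2f}")
```

Tracking that number (or a few interval widths at once) over successive batches of new data is the simplest version of the monitoring I had in mind.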
If the latter, I’m not sure how that would look different to a normal model development workflow, since altering the model structure and joint posterior might reasonably have any number of subtle associated effects. I’d be interested to hear more ideas!