Bonferroni correction for posteriors / post-inference adjustments to Bayes factors

I manage an experimentation platform that has successfully put a Bayesian approach to experimentation into production.

One new feature we want to implement is inference over multiple metrics (e.g. conversion rate and revenue). I know that one way to solve this is to build a model that simultaneously infers parameters for all metrics, e.g. following the approach of the revenue model from this case study (a rough sketch of what I mean is below).
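For concreteness, this is roughly the kind of joint model I have in mind: conversion and revenue inferred in a single PyMC model, loosely in the spirit of the revenue model from the case study. All data, priors, and variable names here are made-up illustrations, not our actual setup:

```python
import numpy as np
import pymc as pm

# Illustrative per-variant data (control, treatment); numbers are made up.
visitors = np.array([10_000, 10_000])
conversions = np.array([520, 560])
revenue_totals = np.array([2_600.0, 2_950.0])  # total revenue from converters

with pm.Model() as joint_model:
    # Conversion-rate metric: Beta prior, Binomial likelihood per variant
    theta = pm.Beta("theta", alpha=1, beta=1, shape=2)
    pm.Binomial("obs_conversions", n=visitors, p=theta, observed=conversions)

    # Revenue metric: assume spend per converted user is Exponential(lam),
    # so total revenue from k converters is Gamma(alpha=k, beta=lam)
    lam = pm.Gamma("lam", alpha=0.1, beta=0.1, shape=2)
    pm.Gamma("obs_revenue", alpha=conversions, beta=lam, observed=revenue_totals)

    # Derived metric: expected revenue per visitor for each variant
    pm.Deterministic("revenue_per_visitor", theta / lam)

    idata = pm.sample(2000, tune=1000, random_seed=42)
```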

However, this is impractical for our use case: we have hundreds of metrics and want to do inference on 5-6 of them simultaneously. Before I'm forced to build a piece of software that dynamically creates a model, I'd like to know whether there is any post-inference procedure I could run that would be simpler to understand, like using the Bonferroni correction for p-values.

I’ve read ‘A Bayesian Perspective on the Bonferroni Adjustment’ (available on JSTOR) but haven’t gotten very far with it.

So, is there any other strategy I could use to adjust for multiple metrics post-inference (i.e. fit each model independently, and then adjust)? The sketch below shows roughly what I mean.
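To make the question concrete: fit an independent Beta-Binomial model per metric, then tighten the decision threshold to 1 − α/m in analogy with Bonferroni. The data, metric names, and the threshold rule here are my own illustrative assumptions, not an established procedure; whether something like this is actually justified is essentially my question.

```python
import numpy as np
import pymc as pm

# Illustrative data: per-metric counts for (control, treatment); all made up.
metrics = {
    "conversion": {"n": [10_000, 10_000], "k": [520, 560]},
    "add_to_cart": {"n": [10_000, 10_000], "k": [1_480, 1_555]},
}

m = len(metrics)
alpha = 0.05
# Bonferroni-style tightening of the decision rule: require
# P(treatment > control) > 1 - alpha / m on each metric
threshold = 1 - alpha / m

results = {}
for name, d in metrics.items():
    # Fit each metric's model independently
    with pm.Model():
        theta = pm.Beta("theta", alpha=1, beta=1, shape=2)
        pm.Binomial("obs", n=d["n"], p=theta, observed=d["k"])
        idata = pm.sample(2000, tune=1000, random_seed=0)

    draws = idata.posterior["theta"].values  # shape: (chains, draws, 2)
    # Posterior probability that treatment beats control on this metric
    p_better = float((draws[..., 1] > draws[..., 0]).mean())
    results[name] = {"p_better": p_better, "winner": p_better > threshold}

print(results)
```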

I’ve only skimmed the question very quickly so far, but wanted to point out that there is an updated version of that notebook available: Introduction to Bayesian A/B Testing — PyMC example gallery. Are you using PyMC 3.x, or a more recent version? It would also be very helpful if you could add a category to the post.