Awesome talk, so many gems, sketch over notebooks! The topic is fascinating. I'm not a stats person, but I had always pictured estimation as trying to capture reality as faithfully as possible, whereas this uses priors to encode how we want the model to behave (very pertinent at the moment). I think this use of priors only applies when an algorithm is making decisions for users, though, and not so much when it is informing users so they can make their own decisions? I can also imagine circumstances where constraining for fairness in the estimation could hide unfairness that is present in the data. We do need these ‘knobs’ and other tools for quantitative reasoning about what we want and what a good decision looks like, though. Would it be possible to get your model to include an interesting predictor, like investment in different fairness initiatives, highlight and rank the areas of greatest unfairness, and then advise on the best bang-for-buck interventions against all of those, for example? Or is there another good use case/mode I'm ignoring?