Hi everyone, I’m a bit new to applied Bayesian statistics. I took Richard McElreath’s course, where he discusses “Full Luxury Bayes” in which you code multiple submodels in a single model.
I come from causal inference, where a very common model is the Marginal Structural Model (aka IPW). Briefly, it’s a two-step procedure: first you estimate each unit’s probability of receiving treatment, take the inverse of that probability, and then fit a univariable regression of the outcome on the treatment, weighted by these inverse-probability weights.
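To make the two steps concrete, here’s a plain (non-Bayesian) NumPy sketch on simulated data — the data-generating values (a true effect of 2, confounding through `x`) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                                # confounder
p_true = 1 / (1 + np.exp(-0.8 * x))                   # true propensity
a = rng.binomial(1, p_true)                           # treatment
y = 2.0 * a + 1.5 * x + rng.normal(size=n)            # true effect = 2

# Step 1: logistic regression of treatment on confounder (a few Newton/IRLS steps)
D = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(10):
    p = 1 / (1 + np.exp(-D @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(D.T @ (W[:, None] * D), D.T @ (a - p))
p_hat = 1 / (1 + np.exp(-D @ beta))

# Step 2: inverse-probability weights, then a weighted regression of y on a alone
w = np.where(a == 1, 1 / p_hat, 1 / (1 - p_hat))
Z = np.column_stack([np.ones(n), a])
coef = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * y))
ate_hat = coef[1]                                     # recovers ~2
```

The weighted regression deliberately ignores `x`; the weights alone are supposed to remove the confounding.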
I thought coding the two regressions in a single pm.Model() would be a nice use case for a “Full Luxury” model.
Unfortunately, I was not able to recover the true parameter (treatment effect) even in the simplest case.
However, I don’t think it’s a bug; something more fundamental seems to be at work here (or maybe it’s just how MCMC works?). When I do a simple weighted average (estimating only the treatment model), or a weighted regression with pre-computed fixed weights (i.e., a non-Bayesian logistic regression feeding a weighted Bayesian outcome regression), I get the correct answer.
Something fails when I try to jointly learn the treatment and outcome models.
I wonder if someone has more insight about Bayesian IPW here.
I tried my luck at Cross Validated (and the Bayesian inference Discord, where Thomas Wiecki suggested I also try here), but to no avail. Hopefully more Bayesian experts are concentrated here.