Help navigating implementation of new inference algorithm

@junpenglao Thanks, this is an amazing amount of work you’ve put into this! And PyMC stuff that would have taken me ages to figure out! I’m going through the notebook now and will see if I can get it fully working and matching the result of the Julia package; I think you’re like 90% of the way there. Will post back here and can also PR your notebook.

Noted about PyMC v4, that seems right, especially since I see a beta is out now. Re: INLA, thanks for pointing me to that, I didn’t know about it. I took a quick glance, although I need more time to understand it fully. One thing that stuck out is that it seems to require the observed variables to be conditionally independent, whereas MUSE has no such requirement (granted, that is the case for this toy funnel problem, but not e.g. for the “CMB lensing” real-world problem from the paper).

On how well it works for non-Gaussian latent spaces: we showed that non-Gaussianity doesn’t change the asymptotic unbiasedness of MUSE, it might just make it suboptimal. If by mixture you mean a multi-modal latent space, I’m not sure we’ve thought about that carefully enough, but it’s an interesting question. There’s a sense in which you can imagine the estimate still works, since intuitively the MAPs for the different sims that are part of the algorithm will fall into different maxima. So it seems it can kind of account for multi-modality, but I can’t say anything more quantitative than that.
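To make that intuition concrete (this is not the MUSE algorithm itself, just a toy numpy/scipy sketch of the one piece I’m describing): if you simulate many datasets from a bimodal model and run a per-simulation MAP optimization, the MAPs split between the two maxima rather than all collapsing onto one. The mixture model and all names here are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy bimodal latent: z ~ 0.5*N(-3,1) + 0.5*N(+3,1), observed x | z ~ N(z, 1).
def neg_log_post(z, x):
    z = z[0]
    # mixture log-prior, up to an additive constant
    log_prior = np.logaddexp(-0.5 * (z + 3) ** 2, -0.5 * (z - 3) ** 2)
    log_lik = -0.5 * (x - z) ** 2
    return -(log_prior + log_lik)

modes = []
for _ in range(50):
    # simulate one dataset: draw z from the mixture, then x | z
    z_true = rng.normal(-3.0 if rng.random() < 0.5 else 3.0, 1.0)
    x = rng.normal(z_true, 1.0)
    # MAP for this simulated dataset, started from z = 0
    res = minimize(neg_log_post, x0=np.array([0.0]), args=(x,))
    modes.append(np.sign(res.x[0]))

# the per-sim MAPs land in both maxima, not just one
print(sum(m < 0 for m in modes), sum(m > 0 for m in modes))
```

Roughly half the per-simulation MAPs end up in each mode, which is the sense in which the collection of sims "sees" the multi-modality even though each individual optimization only finds one maximum.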