I was wondering if anyone familiar with the PyMC3 codebase had opinions on integrating a Hamiltonian kernel into PyMC3’s Sequential Monte Carlo implementation. How much work do you think it would be?
Motivation: I haven’t had much luck using NUTS to sample highly multimodal, high-dimensional (> 1000 parameters) posteriors. Luckily, the prior distribution is fairly smooth (a situation that is probably quite common in practice), so I would happily use SMC if it weren’t for the inefficient Metropolis step. With a continuously tuned HMC kernel it might be possible to get the best of both worlds.
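To make the idea concrete, here is a minimal NumPy sketch of tempered SMC where the usual random-walk Metropolis mutation is replaced by an HMC (leapfrog) move. This is a toy 1-D bimodal example, not PyMC3 or TFP internals; all function names, the temperature schedule, and the tuning constants (`eps`, `n_leap`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative): broad N(0, 3^2) prior, bimodal "likelihood"
# proportional to a two-component Gaussian mixture at -2 and +2.
def log_prior(x):
    return -0.5 * (x / 3.0) ** 2

def grad_log_prior(x):
    return -x / 9.0

def log_like(x):
    a = -0.5 * ((x - 2.0) / 0.5) ** 2
    b = -0.5 * ((x + 2.0) / 0.5) ** 2
    m = np.maximum(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def grad_log_like(x):
    a = np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)
    b = np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2)
    da = a * (-(x - 2.0) / 0.25)
    db = b * (-(x + 2.0) / 0.25)
    return (da + db) / (a + b)

def hmc_step(x, beta, eps=0.1, n_leap=10):
    """One vectorised HMC transition targeting prior * likelihood**beta."""
    def logp(z):
        return log_prior(z) + beta * log_like(z)
    def grad(z):
        return grad_log_prior(z) + beta * grad_log_like(z)
    p = rng.standard_normal(x.shape)          # resample momenta
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad(x_new)   # leapfrog integrator
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * grad(x_new)
    x_new = x_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad(x_new)
    # Metropolis accept/reject on the joint (position, momentum) energy.
    log_accept = (logp(x_new) - 0.5 * p_new**2) - (logp(x) - 0.5 * p**2)
    accept = np.log(rng.uniform(size=x.shape)) < log_accept
    return np.where(accept, x_new, x)

n = 2000
x = rng.normal(0.0, 3.0, size=n)              # particles drawn from the prior
betas = np.linspace(0.0, 1.0, 11)             # fixed tempering schedule
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw = (b1 - b0) * log_like(x)            # incremental importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]         # multinomial resampling
    for _ in range(5):                        # HMC mutation at temperature b1
        x = hmc_step(x, b1)

print("fraction of particles in the positive mode:", (x > 0).mean())
```

The key point is that the mutation step is a drop-in slot: swapping `hmc_step` for a random-walk Metropolis move leaves the weighting/resampling machinery untouched, which is also roughly where such a change would have to plug into PyMC3's SMC loop. In a real implementation the step size and path length would be adapted per temperature rather than fixed as here.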
Depending on your feedback I might have a go at implementing this myself. So far I’ve found this paper, which seems to support the approach. Any pointers on how best to tackle the problem would be greatly appreciated.
Before you go ahead and implement it, you might want to check whether SMC with an HMC inner kernel actually works for your model. I wrote the SMC implementation in TFP, where you can choose among different inner kernels (including HMC), so you could give it a try: https://github.com/tensorflow/probability/blob/master/tensorflow_probability/python/experimental/mcmc/examples/smc_demo.ipynb
Also see this example, where I directly replicate what PyMC3 currently does with the independent-MH inner kernel for SMC.
Thanks very much for the links. At the moment I am trying to work out which parts of your notebooks are TF-specific and which can be reused with PyMC3. I might have a look at the PyMC3 SMC code next and see where the kernel comes into play.
Thanks again @junpenglao for pointing me in the right direction.