I was wondering if anyone familiar with the PyMC3 codebase has opinions on integrating a Hamiltonian Monte Carlo (HMC) kernel into PyMC3’s Sequential Monte Carlo (SMC) implementation. How much work do you think it would be?
Motivation: I haven’t had much luck using NUTS to sample heavily multimodal, high-dimensional (> 1000 parameters) posteriors. Fortunately, the prior distribution is fairly smooth (probably a common situation in practice), so I would happily use SMC if it weren’t for the inefficient Metropolis mutation step. With a continuously tuned HMC kernel it might be possible to get the best of both worlds.
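For concreteness, here is a minimal, self-contained sketch of what I mean by SMC with an HMC mutation kernel. This is illustrative plain Python, not PyMC3 code; the names (`hmc_step`, `smc_hmc`), the fixed temperature ladder, and the 1-D bimodal toy target are all my own assumptions, just to show the structure (reweight by the likelihood increment, resample, then mutate each particle with HMC at the current temperature):

```python
# Toy sketch: tempered SMC with an HMC mutation kernel on a 1-D bimodal
# likelihood under a smooth, wide Gaussian prior. Illustrative only, not
# PyMC3 internals; all names and tuning constants here are assumptions.
import math
import random

random.seed(0)

M = 3.0         # the likelihood has modes at +/- M
PRIOR_SD = 5.0  # smooth, wide prior

def log_prior(x):
    return -0.5 * (x / PRIOR_SD) ** 2

def log_lik(x):
    # unnormalised mixture of two unit-variance Gaussians
    a = math.exp(-0.5 * (x - M) ** 2)
    b = math.exp(-0.5 * (x + M) ** 2)
    return math.log(a + b)

def log_post(x, beta):
    # tempered posterior: prior * likelihood^beta
    return log_prior(x) + beta * log_lik(x)

def grad_log_post(x, beta):
    a = math.exp(-0.5 * (x - M) ** 2)
    b = math.exp(-0.5 * (x + M) ** 2)
    dlik = (-(x - M) * a - (x + M) * b) / (a + b)
    return -x / PRIOR_SD ** 2 + beta * dlik

def hmc_step(x, beta, eps=0.2, n_leap=10):
    """One HMC proposal targeting the tempered posterior pi_beta."""
    p = random.gauss(0, 1)
    x_new, p_new = x, p
    # leapfrog integration
    p_new += 0.5 * eps * grad_log_post(x_new, beta)
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_post(x_new, beta)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(x_new, beta)
    # Metropolis accept/reject on the Hamiltonian
    log_accept = (log_post(x_new, beta) - 0.5 * p_new ** 2) \
               - (log_post(x, beta) - 0.5 * p ** 2)
    return x_new if math.log(random.random()) < log_accept else x

def smc_hmc(n_particles=500, betas=(0.1, 0.3, 0.6, 1.0)):
    particles = [random.gauss(0, PRIOR_SD) for _ in range(n_particles)]
    beta_prev = 0.0
    for beta in betas:
        # reweight by the likelihood increment, then resample
        logw = [(beta - beta_prev) * log_lik(x) for x in particles]
        mx = max(logw)
        weights = [math.exp(lw - mx) for lw in logw]
        particles = random.choices(particles, weights=weights, k=n_particles)
        # mutate each particle with a couple of HMC steps at this temperature
        particles = [hmc_step(hmc_step(x, beta), beta) for x in particles]
        beta_prev = beta
    return particles

samples = smc_hmc()
print(sum(1 for x in samples if x > 0), sum(1 for x in samples if x < 0))
```

In a real implementation the temperature ladder, step size, and path length would of course be adapted on the fly rather than hard-coded, but the point is that the Metropolis mutation step is the only piece that needs to change.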
Depending on your feedback I might have a go at implementing this myself. So far I’ve found this paper, which seems to support the idea. Any pointers on how best to approach the problem would be greatly appreciated.