Bayesian estimation of DSGE models with PyMC

I am having some trouble applying PyMC to the Bayesian estimation of DSGE models (e.g., the Smets-Wouters (SW) and Leeper-Plante-Traum models) with the Sequential Monte Carlo (SMC) algorithm, so I would appreciate your help on this topic. Thank you so much!

I have a Python package working on this problem here. I will be pushing a major refactor soon that changes the estimation backend to PyMC; it's just a lot of work and I haven't made it a priority. In general, though, the problem is highly non-trivial because of all the computation needed to solve the model for the policy function.

If you already have code that implements and solves a model, though, I'd be happy to help you figure out the last couple of steps. The basic steps are to:

  1. Write functions that take in parameter values, solve for the steady state, and compute a linear approximation of the policy matrices via perturbation
  2. Wrap your solvers (steady state and perturbation) in a pytensor Op
  3. Wrap a Kalman filter in an Op that takes the policy matrices and data and returns the log-likelihood of the data (I also have pytensor Kalman filters here, but since you will not be able to get gradients in step 1 anyway, it would be better to write them in numba for speed)
  4. Use the log-likelihood computed in step 3 as a pm.Potential in a PyMC model (a minimal sketch of steps 2-4 follows this list).
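
To make steps 2-4 concrete, here is a minimal sketch of a black-box pytensor Op. Everything model-specific is a placeholder: `solve_model` is a trivial stand-in for your steady-state/perturbation solver (it just maps two parameters to the matrices of a scalar state-space model), `kalman_loglike` is a plain-numpy Kalman filter that you would want to rewrite in numba for speed, and the class name `DSGELogLike` is hypothetical.

```python
import numpy as np
import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op


def solve_model(theta):
    """Stand-in for step 1: steady state + perturbation -> policy matrices."""
    rho, sigma = theta
    T = np.array([[rho]])    # state transition
    R = np.array([[sigma]])  # shock loading
    Z = np.array([[1.0]])    # observation matrix
    return T, R, Z


def kalman_loglike(T, R, Z, y, H=1e-4):
    """Plain-numpy Kalman filter log-likelihood (single observable for brevity)."""
    n = T.shape[0]
    x = np.zeros(n)
    P = np.eye(n)            # arbitrary prior covariance for the initial state
    Q = R @ R.T
    ll = 0.0
    for yt in y:
        # Predict
        x = T @ x
        P = T @ P @ T.T + Q
        # Update
        v = yt - Z @ x
        F = Z @ P @ Z.T + H
        K = P @ Z.T @ np.linalg.inv(F)
        x = x + K @ v
        P = P - K @ Z @ P
        ll += -0.5 * (np.log(2 * np.pi) + np.log(np.linalg.det(F))
                      + v @ np.linalg.solve(F, v))
    return ll


class DSGELogLike(Op):
    """Black-box Op: structural parameter vector -> log-likelihood of the data."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def make_node(self, theta):
        theta = pt.as_tensor_variable(theta)
        return Apply(self, [theta], [pt.dscalar()])

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        T, R, Z = solve_model(theta)              # steps 1-2: solve the model
        ll = kalman_loglike(T, R, Z, self.data)   # step 3: filter the data
        outputs[0][0] = np.asarray(ll, dtype=np.float64)
```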

You will not be able to use NUTS, but I have had good results with the SMC algorithm in PyMC. Note, however, that this is not the same SMC algorithm Dynare uses to compute the log-likelihood of higher-order approximations: the PyMC algorithm is sequential across MCMC draws, not across timesteps.
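
For the last step, here is how the Op sketched above might be dropped into a PyMC model and sampled with SMC. The priors, parameter names, and simulated data are placeholders, not recommendations:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

y_obs = np.random.default_rng(0).normal(size=200)  # placeholder data
loglike_op = DSGELogLike(y_obs)                    # the Op from the sketch above

with pm.Model() as model:
    rho = pm.Beta("rho", alpha=2, beta=2)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    theta = pt.stack([rho, sigma])

    # The black-box log-likelihood enters the model through pm.Potential
    pm.Potential("loglike", loglike_op(theta))

    idata = pm.sample_smc(draws=2000, chains=4)
```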