Feature discussion: Grid estimation with PyMC?

Sometimes when working with simple models (1-4 parameters), I think it would be cool to have the option to do grid estimation (e.g., to debug / compare with MCMC in case of divergences).

Obviously it is not very difficult to write your own grid estimator, but it would be interesting if we could use the PyMC syntax to set up the model as usual and just call pm.grid(points=dict()) or something similar. You would just need to specify the points in the grid that you are interested in computing. Here is a pseudo-example of how this could look:

with pm.Model() as m:
    b0 = pm.Normal('b0', mu=0, sigma=1)
    b1 = pm.Normal('b1', mu=0, sigma=1)
    sigma = pm.HalfNormal('sigma', sigma=10)
    like = pm.Normal('like', mu=b0 + b1 * x, sigma=sigma, observed=y)

    pm.grid({'b0': np.linspace(-5, 5, 100),
             'b1': np.linspace(-5, 5, 100),
             'sigma': np.linspace(0.001, 50, 50)})

This would make it trivial to do grid sampling for people familiar with the PyMC syntax (and take advantage of the myriad built-in distributions), and one would always be a single line away from being able to run MCMC with the exact same model.
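To make the proposal concrete, here is a minimal NumPy-only sketch of what such a grid evaluator would compute for a model like the one above. Note that `pm.grid` itself is hypothetical; the data are simulated and sigma is held fixed here to keep the grid two-dimensional:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data for a simple linear model: y = b0 + b1 * x + noise
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)

def log_normal(v, mu, sigma):
    # Log-density of a Normal distribution, evaluated elementwise
    return -0.5 * np.log(2 * np.pi * sigma**2) - (v - mu) ** 2 / (2 * sigma**2)

# Grid over the intercept and slope (noise sigma fixed at 0.5 for brevity)
b0_grid = np.linspace(-5, 5, 200)
b1_grid = np.linspace(-5, 5, 200)
B0, B1 = np.meshgrid(b0_grid, b1_grid, indexing="ij")

# Log-prior + log-likelihood at every grid point
log_post = log_normal(B0, 0, 1) + log_normal(B1, 0, 1)
for xi, yi in zip(x, y):
    log_post += log_normal(yi, B0 + B1 * xi, 0.5)

# Normalize to a proper discrete posterior (subtract the max for stability)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior means computed directly from the grid
b0_mean = (post * B0).sum()
b1_mean = (post * B1).sum()
print(b0_mean, b1_mean)  # close to the true values 1.0 and 2.0
```

The hypothetical `pm.grid` call would essentially automate the `log_post` accumulation above by walking the model graph, which is exactly what makes reusing the PyMC model definition attractive.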

What do you think?

Hi Ricardo,
Interesting idea. Given the numerous limitations of grid sampling, I think this feature wouldn't serve practical purposes, but it could be useful for educational ones: understanding, explaining, and showing what sampling means.

In my industry projects I’ve personally never had the need for grid sampling. My thinking is: if you can do MCMC sampling in seconds, why use grid sampling?
But my experience can’t be representative of everyone else’s, so I’m curious about what people think :slight_smile:


Thanks for your input!

I definitely agree with your points. I think grid estimation is a great way to get familiar with Bayesian modelling, even if its limitations make it impractical for most real world problems. Conceptually, it is perhaps the most intuitive Bayesian estimation algorithm, and it also makes it obvious when and why we need MCMC.
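As an illustration of that intuitiveness, the whole algorithm fits in a few lines for a one-parameter model, and the result can be checked against the known analytic posterior. This is a self-contained sketch (the coin-flip data are made up for the example):

```python
import numpy as np

# Coin-flip data: 7 heads out of 10 tosses
n, k = 10, 7

# Grid over the success probability theta, with a uniform prior
theta = np.linspace(0, 1, 1001)
prior = np.ones_like(theta)

# Binomial likelihood kernel at every grid point
likelihood = theta**k * (1 - theta) ** (n - k)

# Posterior = prior * likelihood, normalized over the grid
post = prior * likelihood
post /= post.sum()

# With a uniform prior the exact posterior is Beta(k+1, n-k+1),
# whose mean is (k+1)/(n+2) = 8/12
grid_mean = (post * theta).sum()
print(grid_mean)
```

It also makes the curse of dimensionality obvious: 1001 points per parameter means 1001**d evaluations for d parameters, which is precisely why MCMC takes over beyond a handful of dimensions.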

While this is likely not a priority for us as a new feature, I agree that it is quite useful for educational purposes. You can already do it today, as in this notebook: https://github.com/junpenglao/All-that-likelihood-with-PyMC3/blob/master/Notebooks/Normal_mixture_logp.ipynb


Thanks for sharing that notebook. That’s exactly what I had in mind, but with a simple interface for the user. I would be willing to give it a try if the devs think it is a worthwhile idea.