Thoughts on a pipeline-style API for `sample()`

As a pymc3 user, I find the initialization recipes in pymc3/sampling.py really useful for getting started. But I often want to customize a kwarg for one of the intermediate steps (e.g., the optimizer used for ADVI in init_advi, or the minimizer for find_MAP), and those kwargs aren’t exposed through the main sample(*args, **kwargs) API.

I’m then left re-creating some of those recipes in my main script to access those features, which is cumbersome and requires some hunting around for the right inputs.

We could certainly expose additional internals through the main sample API, but I’m not sure that’s the best option. scikit-learn has a similar problem and has solved it really nicely with its Pipeline class, which lets you chain methods together while customizing each one’s hyperparameters, e.g.,

Pipeline([('anova', anova_filter),
          ('svc', clf)])
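The core idea behind scikit-learn's Pipeline is an ordered list of (name, step) pairs, where each step's output feeds the next and every step stays addressable by name. A toy sketch of that pattern in plain Python (an illustration of the concept, not sklearn's actual implementation):

```python
class SimplePipeline:
    """Toy sketch of a named-step pipeline (illustration only)."""

    def __init__(self, steps):
        self.steps = steps               # list of (name, callable) pairs
        self.named_steps = dict(steps)   # steps addressable by name
        self.results_ = {}               # each stage's output, kept for inspection

    def run(self, x):
        for name, step in self.steps:
            x = step(x)                  # output of one stage feeds the next
            self.results_[name] = x
        return x


pipe = SimplePipeline([
    ('double', lambda x: x * 2),
    ('shift', lambda x: x + 1),
])
print(pipe.run(10))              # prints 21
print(pipe.results_['double'])   # prints 20 — intermediate result is inspectable
```

Keeping the intermediate results around is what makes the "go back and inspect a stage" workflow possible, which is exactly the appeal for an inference pipeline.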

What we’re really building in sampling.py (and changing with the different init arguments) is an ‘inference pipeline’, with various initialization, approximation, and sampling methods.
It would be awesome if we could do something similar in pymc3, and build up our inference algorithm with something like

pm.Pipeline([
    ('map', pm.find_MAP(...)),
    ('advi', pm.ADVI(n=50000, optimizer=...)),
    ('adapt_diag', pm.some_fn()),
    ('hmc', pm.NUTS(...)),
])

This would 1) be more customizable than continuing to accumulate recipes in sample(init='...'), and 2) let you go back and inspect, for instance, how closely the ADVI fit approximated your sampler’s posterior.

Just a thought; I’m sure it would take a pretty concerted effort to re-engineer the API.

Thanks @pstjohn, I like the scikit-learn pipeline API - it’s great for standardizing analysis workflows. That said, I’m not a big fan of introducing the same concept in PyMC3. The reason is that pm.sample() and pm.fit() are what we aim to offer as an “Inference button”: the goal is that the user just writes down the model and PyMC3 chooses the optimal approach (that we can think of) to fit/sample from it. Many of the options you mention (e.g., hand-tuning the ‘adapt_diag’ method) are not supposed to be exposed to users - you need to know what you are doing.

In general, my take is that if the default method doesn’t work, then beyond a few easy kwargs to try (e.g., increasing target_accept in NUTS), the instinct should be reparameterization.


I can see that in advanced cases you may sometimes want specialized routines, like using OPVI to initialize the NUTS mass matrix, or find_MAP with a minimization routine that handles discrete parameters.

All of these routines can be used independently of sample, of course, so it’s not too hard to pipeline your own routine. The implementation within sample is straightforward, so you can see how to do it for your unique case here.
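As a rough illustration of chaining the routines yourself, each stage just passes its result along as the next stage's input. The stage functions below are stand-ins (in practice these would be calls like pm.find_MAP() and pm.sample() inside a model context); the names and the state dict are illustrative only, not pymc3's actual API:

```python
# Stand-in stages mimicking an init -> approximate -> sample chain.
# All names and values here are hypothetical placeholders.
def init_map(state):
    state['start'] = 0.0          # pretend MAP estimate
    return state

def fit_advi(state):
    state['start'] += 0.5         # pretend ADVI refines the start value
    state['scaling'] = 1.0        # pretend mass-matrix guess
    return state

def run_sampler(state):
    # pretend the sampler consumes the start value and scaling
    state['trace'] = [state['start']] * 3
    return state

state = {}
for stage in (init_map, fit_advi, run_sampler):
    state = stage(state)          # each stage's output feeds the next

print(state['trace'])             # prints [0.5, 0.5, 0.5]
```

Since every intermediate value stays in `state`, you can inspect or swap any stage without touching the others, which is the essence of rolling your own pipeline around sample.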