Adaptive surrogate models

I am very new to PyMC3, so it is possible this will either be a simple question to answer or something that is very difficult, but here goes.

I have an expensive function (potentially non-Python) whose parameters I would like to perform uncertainty quantification on. To initially optimize the parameters I am using scikit-optimize to perform Bayesian optimization. At the end this yields the optimized parameters along with a Gaussian process model that is presumably most accurate in the region of the discovered optimum, since that is where the highest density of samples was taken.
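Roughly, my current setup looks like the sketch below (expensive_objective and bounds here are just placeholders for the real function and search space):

from skopt import gp_minimize

# Placeholder standing in for the real expensive (possibly non-Python) function.
def expensive_objective(params):
    return sum((p - 0.3) ** 2 for p in params)

# One (low, high) pair per parameter -- placeholder bounds.
bounds = [(0.0, 1.0), (0.0, 1.0)]

res = gp_minimize(expensive_objective, bounds, n_calls=30, random_state=0)

# res.x / res.fun hold the optimum; res.x_iters / res.func_vals hold every
# evaluated parameter combination and its function value; res.models[-1] is
# the final fitted Gaussian process surrogate.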

I would like to use the surrogate model with PyMC3 to perform sampling, but allow it to evaluate my expensive function when needed, i.e. to update the Gaussian process whenever its uncertainty passes some threshold.

Is this possible?

I have taken a look at some similar questions but I would be glad for any further suggestions.

It should be possible to do in PyMC3. You can build the surrogate model using a Gaussian process (pm.gp) and use that surrogate to perform the Bayesian optimization (instead of scikit-optimize). At the end of the optimization you will have the optimized parameters, the points (parameter combinations) that were evaluated, and the corresponding function outputs. You can then get an approximation of the function at any unevaluated input using pm.gp.Marginal, and further refine the model when the output prediction has high uncertainty.
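As a rough sketch of that refinement idea, assuming the already-evaluated parameter combinations and outputs are in arrays X (shape (n, d)) and y (shape (n,)), and with expensive_function, X_candidates and threshold as placeholders:

import numpy as np
import pymc3 as pm

def fit_surrogate(X, y):
    # Fit a pm.gp.Marginal surrogate to the evaluated points.
    with pm.Model() as model:
        ls = pm.Gamma("ls", alpha=2, beta=1, shape=X.shape[1])
        eta = pm.HalfCauchy("eta", beta=5)
        cov = eta ** 2 * pm.gp.cov.Matern52(X.shape[1], ls=ls)
        gp = pm.gp.Marginal(cov_func=cov)
        sigma = pm.HalfCauchy("sigma", beta=1)
        gp.marginal_likelihood("y_obs", X=X, y=y, noise=sigma)
        mp = pm.find_MAP()
    return model, gp, mp

model, gp, mp = fit_surrogate(X, y)

# Wherever the surrogate is too uncertain, evaluate the expensive function
# there and refit.
with model:
    mu, var = gp.predict(X_candidates, point=mp, diag=True)
too_uncertain = np.sqrt(var) > threshold
if too_uncertain.any():
    X_new = X_candidates[too_uncertain]
    y_new = np.array([expensive_function(x) for x in X_new])
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    model, gp, mp = fit_surrogate(X, y)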

Okay, that makes sense. Are there any tutorials for performing Bayesian Optimization in PyMC3?

The Bayesian optimization routine in scikit-optimize returns the evaluated parameter/function-value pairs, so I suppose a process would be to take those points and use pm.gp to build another model, which could then be evaluated with pm.gp.Marginal.
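Something along these lines, if res is the OptimizeResult that gp_minimize returns (the array conversion is my guess at what pm.gp expects):

import numpy as np

# scikit-optimize keeps every evaluated parameter combination and its
# function value on the result object; pm.gp wants a 2-D X and a 1-D y.
X = np.asarray(res.x_iters)     # shape (n_evals, n_params)
y = np.asarray(res.func_vals)   # shape (n_evals,)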

Does pm.gp.Marginal have methods to return which combination of parameters is the most uncertain and which is the most likely?

Thank you very much for your help.

There is this PR with a long discussion that you might find helpful: Optimization by ferrine · Pull Request #1953 · pymc-devs/pymc · GitHub

This is certainly possible, maybe even a bit easier.

You can do

with model:
    y_pred = gp.conditional("y_pred", new_para_combination, pred_noise=True)

Then use mu, var = gp.predict(X_new, point=mp, diag=True) to get the uncertainty. Here point=mp could be the GP-related parameters returned by scikit-optimize (see http://docs.pymc.io/notebooks/GP-Marginal.html#Using-.predict for more details).
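Regarding the earlier question about the most uncertain and the most likely parameter combination: as far as I know there is no built-in method for that, but since gp.predict gives you the predictive mean and variance at any set of inputs, you can take an argmax/argmin over a grid. A sketch, with X_candidates as a placeholder grid of candidate parameter combinations:

import numpy as np

with model:
    mu, var = gp.predict(X_candidates, point=mp, diag=True)

most_uncertain = X_candidates[np.argmax(var)]  # largest predictive variance
most_likely = X_candidates[np.argmin(mu)]      # best predicted value (if minimizing)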

Great! Thanks for taking the time to help me out. I’ll try out these suggestions and see where that takes me.
