Newton Method with Random Variables

But this doesn’t work with PyMC3: the optimizer’s methods can’t find a solution when the parameters are passed to it as random variables (Theano structures).

Since the optimizer expects functions defined over Python numeric types or NumPy arrays, this isn’t surprising. It sounds like you want to run a PyMC3 model, get the results of the inference, and then pass them to the optimizer. If that’s the case, can you just use sampled values from pm.sample()? Or do you need to perform the optimization at every MCMC iteration? A sketch of the first approach is below.
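
Here is a minimal sketch of that two-stage workflow, assuming a toy exponential model and a made-up downstream objective (the model, the `objective` function, and the use of the posterior mean are all illustrative, not from your code). The key point is that draws from `pm.sample()` are plain NumPy arrays, so they can be handed straight to `scipy.optimize`:

```python
import numpy as np
import pymc3 as pm
from scipy.optimize import minimize

# Toy inference problem: infer a rate parameter from observed data.
data = np.random.exponential(scale=2.0, size=100)

with pm.Model() as model:
    lam = pm.Gamma("lam", alpha=1.0, beta=1.0)
    pm.Exponential("obs", lam=lam, observed=data)
    trace = pm.sample(1000, tune=1000)

# Posterior draws are ordinary NumPy arrays, so a point estimate
# (here, the posterior mean) can be fed to a SciPy optimizer.
lam_hat = trace["lam"].mean()

# Hypothetical downstream objective parameterized by the inferred rate.
def objective(x, lam):
    return (x - 1.0 / lam) ** 2

def gradient(x, lam):
    return 2.0 * (x - 1.0 / lam)

# Newton-CG is one of the Newton-type methods scipy.optimize offers.
result = minimize(objective, x0=np.array([0.5]), args=(lam_hat,),
                  method="Newton-CG", jac=gradient)
print(result.x)
```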

If you do indeed need to do the optimization within the main inference loop, I would suggest writing your objective function as a pm.Potential. If it’s differentiable and not too nasty to deal with, you may be able to optimize it directly using the default No-U-Turn Sampler (NUTS). It’s worth remembering that, in some sense, NUTS is performing a gradient-guided search itself, so it is perfectly capable of finding solutions that optimize a target objective which depends on other random variables.
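
As a sketch of what that might look like (again with a toy model; the variable names and the quadratic objective are illustrative), you can declare the quantity you want to optimize as a free variable and add the negated objective via pm.Potential. The sampler then concentrates posterior mass where the objective is best, jointly with inference on the other random variables:

```python
import numpy as np
import pymc3 as pm

data = np.random.normal(loc=1.5, scale=1.0, size=50)

with pm.Model() as model:
    # Ordinary latent variable being inferred from data.
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    pm.Normal("obs", mu=mu, sigma=1.0, observed=data)

    # The quantity we would otherwise hand to an external optimizer,
    # declared as a free variable with a flat prior.
    x = pm.Flat("x")

    # Negated objective added to the log-probability. NUTS will
    # concentrate mass where -(x - mu)**2 is largest, i.e. near
    # x = mu, jointly with the inference on mu.
    pm.Potential("objective", -(x - mu) ** 2)

    trace = pm.sample(1000, tune=1000)

# Posterior for x summarizes the "optimized" value under uncertainty in mu.
print(trace["x"].mean())
```

One nicety of this formulation is that instead of a single optimum computed from a point estimate, you get a whole posterior over x that automatically propagates the uncertainty in mu.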