My question is related to using a black-box external likelihood function.
What I would like to do is integrate PyMC3 into an asynchronous framework where the component drawing candidate parameter samples and the component evaluating the log-likelihood for those samples run in different Docker containers (and possibly even on different physical machines). As far as I can tell from reading the docs and looking at various tutorials and examples, this is impossible, but perhaps you can prove me wrong.
I even had a look at the source code, but the drawing of candidates and the likelihood evaluations always seem to be entangled in the same function (though I'm not 100% sure of that).
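To make the constraint concrete: even when the sampler insists on calling the likelihood synchronously, the call itself can be a blocking bridge that ships the parameters to a worker and waits for the reply. This is a minimal pure-Python sketch of that pattern; `RemoteLogLike`, the worker function, and the toy Gaussian log-density are all made up for illustration (in the real setup the worker would be another container reading from a message queue), and nothing here is PyMC3 API.

```python
import queue
import threading
import numpy as np

def worker(task_q, result_q):
    """Stand-in for the remote likelihood service. Here it runs in a
    thread and computes a toy Gaussian log-density; in practice it
    would be a separate container talking over a network queue."""
    while True:
        theta = task_q.get()
        if theta is None:  # sentinel shuts the worker down
            break
        result_q.put(-0.5 * float(np.sum(np.asarray(theta) ** 2)))

class RemoteLogLike:
    """Presents a plain synchronous log-likelihood callable to the
    sampler while delegating each evaluation to the worker."""
    def __init__(self):
        self.task_q = queue.Queue()
        self.result_q = queue.Queue()
        self.thread = threading.Thread(
            target=worker, args=(self.task_q, self.result_q), daemon=True
        )
        self.thread.start()

    def __call__(self, theta):
        self.task_q.put(theta)
        return self.result_q.get()  # blocks until the remote result arrives

    def close(self):
        self.task_q.put(None)
        self.thread.join()

loglike = RemoteLogLike()
print(loglike([1.0, 2.0]))  # -2.5
loglike.close()
```

The limitation, of course, is that this keeps the scheduler blocked during each evaluation, which is exactly what an ask/tell interface would avoid.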
Here is the ideal API that would suit my needs, illustrated in pseudo-code:
In the scheduler process:

```
model = instantiate new PyMC3 model
samples_to_evaluate = model.ask()  # this is what's possibly missing in the current API
serialize model
send samples_to_evaluate to task_queue
```

In a worker process:

```
get samples_to_evaluate from task_queue
evaluate log-likelihood
send result back to scheduler
```

Back in the scheduler process:

```
receive log-likelihood evaluations
deserialize model and trace
new_trace = model.tell(log-likelihood evaluations)  # this is what's possibly missing in the current API
serialize trace
samples_to_evaluate = model.ask()
# etc.
```
So, please tell me: is there any chance this is doable with the current PyMC3 API?