Accelerating estimation with a sequential process in PyMC?

I have a high-dimensional black-box model, r = f(\theta, \gamma) + \epsilon, where \epsilon \sim N(0, \Sigma). Here \theta differs for each observation r, while \gamma is shared across all observations.

I have 100 samples of r with their corresponding \theta, and I want to calibrate the parameter \gamma. So I set the shape of \theta to 100 and draw posterior samples with the Metropolis–Hastings (M-H) sampler.
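To make the setup concrete, here is a minimal numpy sketch of the joint log-posterior this corresponds to; `f` is a hypothetical stand-in for the black-box model, and the standard-normal priors are an assumption for illustration. The key point is that the joint parameter vector has 101 entries (100 \theta's plus one shared \gamma), all sampled at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box model f(theta, gamma)
def f(theta, gamma):
    return np.sin(theta) + gamma * theta

# Simulated data: 100 observations, each with its own theta_i, shared gamma
true_gamma = 0.5
theta_true = rng.normal(size=100)
sigma = 0.1
r = f(theta_true, true_gamma) + rng.normal(scale=sigma, size=100)

# Joint log-posterior over (theta_1..theta_100, gamma): 101 parameters
def log_post(params):
    theta, gamma = params[:-1], params[-1]
    loglik = -0.5 * np.sum(((r - f(theta, gamma)) / sigma) ** 2)
    logprior = -0.5 * np.sum(theta**2) - 0.5 * gamma**2  # assumed N(0, 1) priors
    return loglik + logprior
```

Random-walk M-H mixes poorly as this dimension grows, which is consistent with the slowdown described below.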

However, the sampling is extremely slow, even though I use a fast simulator for my black-box model. Is this because the dimension is high, with 100+ parameters being estimated simultaneously?

I found that a sequential treatment of this problem is easier, so I followed the Updating Priors example from the PyMC example gallery to build a sequential estimation. This reduces the dimension of each run, and sampling seems to become faster.
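For intuition on when batch and sequential updating agree: in a conjugate model they are exactly equivalent, because the posterior after one chunk of data is a valid prior for the next chunk. The sketch below checks this for a Gaussian mean with known variance (a toy model, not the black-box setup above). Note that the Updating Priors example has to *approximate* each intermediate posterior (e.g. with an interpolated density), so in practice approximation error can accumulate across steps:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=100)
sigma2 = 1.0           # known observation variance
mu0, tau2_0 = 0.0, 10.0  # N(mu0, tau2_0) prior on the unknown mean

def update(mu, tau2, batch):
    # Conjugate Gaussian update of the mean's posterior given a batch
    n = len(batch)
    post_tau2 = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mu = post_tau2 * (mu / tau2 + batch.sum() / sigma2)
    return post_mu, post_tau2

# Batch: all 100 points at once
mu_b, tau2_b = update(mu0, tau2_0, data)

# Sequential: 10 chunks of 10, each posterior becomes the next prior
mu_s, tau2_s = mu0, tau2_0
for chunk in data.reshape(10, 10):
    mu_s, tau2_s = update(mu_s, tau2_s, chunk)
```

Here `mu_b == mu_s` and `tau2_b == tau2_s` up to floating-point error; with non-conjugate models and approximated intermediate posteriors, this equality only holds approximately.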

My question is:

  1. Is this process valid? Will the batch and sequential processes give the same results in theory?

  2. I also found that PyMC evaluates the likelihood many times for each sample, but according to the Metropolis–Hastings algorithm (Wikipedia), each draw should require only two likelihood evaluations (current and proposed point). Why is that?
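On question 2: textbook M-H needs only one *new* target evaluation per step, because the log-probability of the current point can be cached from the previous iteration. Extra evaluations in a PyMC run typically come from tuning, multiple chains, and compound step methods that propose each variable block separately; I am not asserting which of these applies to your model. A minimal numpy sketch that counts evaluations (standard-normal target, random-walk proposal):

```python
import numpy as np

rng = np.random.default_rng(2)

calls = {"n": 0}
def log_target(x):
    calls["n"] += 1
    return -0.5 * x**2  # standard normal log-density, up to a constant

n_steps = 1000
x = 0.0
logp_x = log_target(x)              # evaluated once, then cached
samples = []
for _ in range(n_steps):
    x_new = x + rng.normal()        # random-walk proposal
    logp_new = log_target(x_new)    # the only evaluation this step
    if np.log(rng.uniform()) < logp_new - logp_x:
        x, logp_x = x_new, logp_new  # accept: cache the new log-probability
    samples.append(x)

# Total evaluations: n_steps + 1, i.e. one per draw plus the initial point
```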

I am very interested in this topic as well. Are there any papers that cover it? I'm not sure the sequential and batch methods are theoretically equivalent.