Posterior sample from an approximation with Minibatches?

Yes, I was coming around to the same conclusion: it's treating those shifts as fixed.

So, I thought I had it figured out with this:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

batches = pm.Minibatch(fulldata, 50)

with pm.Model():
  # an informed prior, centred on the mean of the full data
  mu = pm.Normal('shiftmu', np.mean(fulldata), 10)
  sigma = pm.HalfCauchy('sigma', 2.5)
  cov = 2. * pm.gp.cov.RatQuad(1, 0.2, 0.1)
  gp = pm.gp.Latent(cov_func=cov)
  f = gp.prior('f', X=X)  # X defined elsewhere
  # per-record shift: mean deviation of each minibatched record from f
  shifts = pm.Deterministic('shifts', tt.mean(batches - f.T, axis=1))
  pm.Normal('infer', mu, sigma, observed=shifts, total_size=n_subjects)
  approximation = pm.fit()

Which would then have allowed me to sample the posterior for the shift of selected record(s) with the same method as before:

sample_shifts = approximation.sample_node(shifts, more_replacements={batches: fulldata[:128, :]}, size=100)
posteriorshifts = sample_shifts.eval()

This time around, taking into account that the priors weigh heavily on the posterior here (as you said), the informed, unbiased prior does strongly improve the inference of the hyperparameters.

However, no matter what data I pass in during posterior sampling, the results correlate strongly from subset to subset, but not at all with the actual values.
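
For concreteness, this is roughly the check I'm doing. It's just a sketch against my setup: true_shifts is a hypothetical array holding the known per-record shifts, and the two slices of fulldata are just example subsets.

# two posterior samples of the shifts, for two disjoint subsets of records
a = approximation.sample_node(shifts, more_replacements={batches: fulldata[:128, :]}, size=100).eval()
b = approximation.sample_node(shifts, more_replacements={batches: fulldata[128:256, :]}, size=100).eval()

# per-record posterior means for each subset
a_mean, b_mean = a.mean(axis=0), b.mean(axis=0)

print(np.corrcoef(a_mean, b_mean)[0, 1])             # close to 1: subsets correlate with each other
print(np.corrcoef(a_mean, true_shifts[:128])[0, 1])  # close to 0: no correlation with the actual values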

So I'm deducing that even in this setup the shifts are somehow treated as fixed, which makes no sense to me, since they are now the observed variable.

It can't be that the replacement isn't taking place: if it weren't, the node would still draw random minibatches and the results would not correlate between calls with different data.
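
A quick shape check seems to confirm that the replacement happens (again just a sketch, assuming one row per record in fulldata):

# without the replacement the node keeps the minibatch shape (50 records);
# with it, the shape follows the replacement, so the swap is taking effect
print(approximation.sample_node(shifts, size=1).eval().shape)  # (1, 50)
print(approximation.sample_node(
    shifts, size=1, more_replacements={batches: fulldata[:128, :]}).eval().shape)  # (1, 128)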

I'm confused. Could it be that ADVI (still) expects the Deterministic to have fixed values?