Evaluating a minibatch-trained model on the test set


I have the following model, which I trained using pm.Minibatch on batches of size 256.

import numpy as np
import pymc3 as pm
import theano.tensor as tt

n_features = 150
n_competitors = 10

# X, Y: design matrix and labels (defined elsewhere)
init_1 = np.random.randn(n_features, 1)
with pm.Model() as linear_model:
    w1 = pm.Normal('w1', 0, sd=1, shape=(n_features, 1), testval=init_1)
    w1_repeat = pm.math.block_diagonal([w1]*n_competitors)
    act_out = tt.nnet.softmax(tt.dot(X, w1_repeat))
    pm.Deterministic('p', act_out)
    out = pm.Categorical('out', act_out, observed=Y)

    approx = pm.fit(100000, ...)
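For context on the shapes involved: the block-diagonal stacking gives each of the 10 competitors its own copy of the shared weight column, producing a (n_features * n_competitors, n_competitors) weight matrix. A numpy sketch, with scipy.linalg.block_diag standing in for pm.math.block_diagonal:

```python
import numpy as np
from scipy.linalg import block_diag

n_features, n_competitors, n_rows = 150, 10, 4

w1 = np.random.randn(n_features, 1)
# Ten copies of the (150, 1) column on the diagonal:
# the result is (1500, 10), one column per competitor.
w1_repeat = block_diag(*[w1] * n_competitors)

# X must therefore have n_features * n_competitors columns.
X = np.random.randn(n_rows, n_features * n_competitors)
logits = X @ w1_repeat          # (4, 10)

# Row-wise softmax, as tt.nnet.softmax computes it.
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)   # each row sums to 1
```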

To get predictions on the test set I tried the following, but the output is of size 256 (the length of the batch):

trace = approx.sample(draws=1000)
with linear_model:
    ppc = pm.sample_ppc(trace, samples=100)

I know this answer is supposed to help somehow, but I can’t seem to figure it out. I also tried the following, which doesn’t give me the output I expect:

with linear_model:
    test_probs = approx.sample_node(act_out, ...)

  1. Any thoughts on how to solve this?
  2. In the previous block, when I tried ppc, I only had access to ppc['out'] and could not access ppc['p']. I thought that doing pm.Deterministic('p', act_out) would have given me access to the actual probabilities.

Thanks in advance,

  1. See the comment by @ferrine in this issue, copied below:

To sample ppc after minibatch inference, do the following:

  • Initialize the model with shared data, not minibatches.
  • Create minibatches for inference using pm.Minibatch, and pass them via the more_replacements argument of fit, e.g. approx = pm.fit(more_replacements={full_x: minibatch_x, full_y: minibatch_y}, ...).
  • After that, you can get a trace from the approximation and pass it to sample_ppc.

Note that it will perform ppc for examples you defined in step 1, not minibatches. There you are free to set any data you like.

  1. You cannot sample a Deterministic node, but there might be a workaround: doing draw_value from a Deterministic. We have a student working on approximate Bayesian computation as a GSoC project who will probably implement this in the near future.
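In the meantime, a common workaround is to recompute the deterministic quantity yourself from the sampled free variables. A numpy sketch, assuming a trace of w1 draws and the same block-diagonal construction as in the question (scipy.linalg.block_diag stands in for pm.math.block_diagonal):

```python
import numpy as np
from scipy.linalg import block_diag

n_features, n_competitors = 150, 10
n_draws, n_rows = 100, 8

# Stand-ins for trace['w1'] and the design matrix X.
w1_samples = np.random.randn(n_draws, n_features, 1)
X = np.random.randn(n_rows, n_features * n_competitors)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Recompute p = softmax(X @ block_diag(w1, ..., w1)) for every draw,
# exactly what the 'p' Deterministic computes inside the model.
probs = np.empty((n_draws, n_rows, n_competitors))
for d, w1 in enumerate(w1_samples):
    w1_repeat = block_diag(*[w1] * n_competitors)   # (1500, 10)
    probs[d] = softmax(X @ w1_repeat)
```

probs then has shape (draws, rows, competitors), and averaging over the first axis gives the posterior-mean class probabilities.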
