Sampling after minibatch training

I am trying to implement a neural network with minibatch ADVI. Inference works, but I don't understand how I can "turn off" minibatches and sample for my whole test set. I use the `set_value` method on the minibatch's shared variable to swap the training data for test data, but `pm.sample` only returns a sample of minibatch size.

I believe it is explained in the docstring:

    Suppose you need some replacements in the graph, e.g. to change the minibatch to test data:
    >>> node = x ** 2  # arbitrary expressions on minibatch `x`
    >>> testdata = pm.floatX(np.random.laplace(size=(1000, 10)))

    Then you should create a dict with the replacements:
    >>> replacements = {x: testdata}
    >>> rnode = theano.clone(node, replacements)
    >>> assert (testdata ** 2 == rnode.eval()).all()

    To replace the minibatch with its shared variable you do the same
    thing. The minibatch variable is accessible as an attribute, as is
    the shared variable associated with it:
    >>> replacements = {x.minibatch: x.shared}
    >>> rnode = theano.clone(node, replacements)

See here (output in the last cell)
