I’ve been searching for topics similar to my issue, but could only find this.
I’m trying to sample the classification probabilities from the posterior for a set of test data, and then use those probabilities to measure prediction uncertainty. So far I am only able to sample from the pm.Categorical output, but those hard class labels carry limited information for quantifying uncertainty.
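Concretely, this is roughly what I do at the moment (a sketch rather than my exact code; I'm assuming a fitted trace is already available and that pm.sample_ppc is the right call in my PyMC3 version):

    import numpy as np
    import pymc3 as pm

    # Draw posterior predictive samples of the Categorical output.
    # ppc['out'] has shape (samples, n_test) and holds hard class
    # labels (integers 0-9), not class probabilities.
    ppc = pm.sample_ppc(trace, samples=500, model=model)

    # The best I can do from these is an empirical class frequency per
    # test point, which feels like a crude proxy for the predictive
    # probabilities I actually want.
    class_freq = np.stack([(ppc['out'] == k).mean(axis=0) for k in range(10)], axis=1)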
For what it’s worth, here’s how I define my model:
    import lasagne
    import pymc3 as pm

    # input_var, target_var and y_train are defined elsewhere (Theano
    # variables for the images/labels and the full set of training labels).
    def build_ann(init):
        with pm.Model() as model:
            network = lasagne.layers.InputLayer(shape=(None, 1, 28, 28),
                                                input_var=input_var)

            network = lasagne.layers.Conv2DLayer(
                network, num_filters=32, filter_size=(5, 5),
                nonlinearity=lasagne.nonlinearities.tanh,
                W=init)
            network = lasagne.layers.MaxPool2DLayer(network, pool_size=(2, 2))

            network = lasagne.layers.Conv2DLayer(
                network, num_filters=32, filter_size=(5, 5),
                nonlinearity=lasagne.nonlinearities.tanh,
                W=init)
            network = lasagne.layers.MaxPool2DLayer(network, pool_size=(2, 2))

            n_hid2 = 256
            network = lasagne.layers.DenseLayer(
                network, num_units=n_hid2,
                nonlinearity=lasagne.nonlinearities.tanh,
                b=init,
                W=init)

            network = lasagne.layers.DenseLayer(
                network, num_units=10,
                nonlinearity=lasagne.nonlinearities.softmax,
                b=init,
                W=init)

            # Softmax output of the final layer: per-class probabilities.
            prediction = lasagne.layers.get_output(network)

            # Categorical likelihood over the 10 classes.
            out = pm.Categorical('out', p=prediction,
                                 observed=target_var,
                                 total_size=int(y_train.shape[0]))

        return model
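Ideally I'd like to get at the softmax probabilities themselves rather than just sampled labels. I'm wondering whether something along these lines is the right way to do it (just a guess on my part, wrapping the network output in pm.Deterministic; I haven't verified this is the intended approach):

    # Inside the model context, before the Categorical likelihood:
    prediction = lasagne.layers.get_output(network)

    # Track the softmax output as a named quantity during sampling?
    p = pm.Deterministic('p', prediction)

    out = pm.Categorical('out', p=p,
                         observed=target_var,
                         total_size=int(y_train.shape[0]))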
Any help on this would be greatly appreciated, as my current research depends on these results.