Hi,
I’m trying to change the standard deviation used for my output in a Bayesian neural net based on the cluster that the input belongs to, using ADVI with mini-batches. However, I’m a little puzzled about how to implement this.
I know which cluster each input belongs to. I then want to select a specific sd for the output based on that cluster, so that I get different uncertainties depending on the cluster.
Currently my code looks something like this:
import numpy as np
import pymc3 as pm

with pm.Model() as nn:
    # weights and activation functions here

    # one (1, 2)-shaped standard deviation per cluster
    sds = []
    for i in range(n_clusters):
        sds.append(pm.HalfNormal('sd' + str(i), sd=1, shape=(1, 2)))
    sds = np.array(sds)

    # the cluster labels live in the trailing columns of the input
    labels = input_var[:, 2:]
    output = pm.Normal('out', mu=act_out, sd=?, observed=target_var,
                       total_size=10000, shape=(1, 2))
This is where I’m stuck. I tried creating a mapping function that maps labels to sds and returns an array of the corresponding standard deviations, but that doesn’t work: you can’t pass a NumPy array of random variables as sd, and since I don’t know each row’s index inside the graph, I can’t pinpoint which variable it should use. A sketch of the kind of thing I’d like to express is below.
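For concreteness, here’s a minimal sketch of what I have in mind, assuming the cluster label is available as a single integer column (rather than the slice in my code above) and that the per-cluster sds can live in one vectorized variable. I don’t know whether indexing a random variable like this is actually supported:

    # hypothetical: one HalfNormal of shape (n_clusters, 2), so all
    # per-cluster sds live in a single tensor instead of a Python list
    sds = pm.HalfNormal('sds', sd=1, shape=(n_clusters, 2))

    # hypothetical: labels as a 1-D integer vector of cluster indices,
    # used to pick out the matching sd row for every mini-batch sample
    labels = input_var[:, 2].astype('int32')
    sd_per_row = sds[labels]  # shape: (batch_size, 2)

    output = pm.Normal('out', mu=act_out, sd=sd_per_row,
                       observed=target_var, total_size=10000)

Is something like this possible, or is there a better pattern for it?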
Any help is greatly appreciated!