Hi,
I’m adapting the example from this link:
Here is the code (slightly modified to add biases):
import numpy as np
import pymc as pm

with pm.Model() as BayesianNN:
    ann_input = pm.Data('ann_input', X_train, mutable=True)
    ann_output = pm.Data('ann_output', y_train, mutable=True)

    # Weights from input to first hidden layer
    weights_in_1 = pm.Normal('w_in_1', 0, sigma=1,
                             shape=(X.shape[1], n_hidden),
                             initval=init_1)
    # Bias for first hidden layer
    bias_1 = pm.Normal('b_1', 0, sigma=1, shape=(n_hidden,),
                       initval=np.zeros(n_hidden))

    # Weights from first to second hidden layer
    weights_1_2 = pm.Normal('w_1_2', 0, sigma=1,
                            shape=(n_hidden, n_hidden),
                            initval=init_2)
    # Bias for second hidden layer
    bias_2 = pm.Normal('b_2', 0, sigma=1, shape=(n_hidden,),
                       initval=np.zeros(n_hidden))

    # Weights from second hidden layer to output
    weights_2_out = pm.Normal('w_2_out', 0, sigma=1,
                              shape=(n_hidden,),
                              initval=init_out)
    # Bias for output layer
    bias_out = pm.Normal('b_out', 0, sigma=1, shape=(1,), initval=np.zeros(1))

    # Build the network; act_1 is my attempted ReLU (the original example
    # uses tanh everywhere) and is the line that raises the error below
    act_1 = pm.math.maximum(pm.math.dot(ann_input, weights_in_1) + bias_1)
    act_2 = pm.math.tanh(pm.math.dot(act_1, weights_1_2) + bias_2)
    act_out = pm.math.dot(act_2, weights_2_out) + bias_out

    # Observation noise and Gaussian likelihood
    sigma = pm.HalfCauchy('sigma', beta=10, initval=1)
    out = pm.Normal('out', mu=act_out, sigma=sigma,
                    total_size=y_train.shape[0],
                    observed=ann_output, shape=act_out.shape)
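In case the fitting step matters: I train it the same way as the linked example, with ADVI. Here's a minimal sketch of that part (the iteration and draw counts below are just placeholders, not my exact settings):

    with BayesianNN:
        # ADVI fit as in the linked example; the numbers are placeholders
        approx = pm.fit(n=30000, method='advi')
        trace = approx.sample(draws=1000)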
I want to swap out some of the tanh activations for ones better suited to a regression task. I get this error when I use pm.math.maximum to try to replicate ReLU behaviour:
TypeError: Wrong number of inputs for maximum.make_node (got 1((,)), expected 2)
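The failing line is the act_1 one above. My guess, just a sketch I haven't verified, is that pm.math.maximum is an elementwise maximum of two tensors, so a ReLU would need a second argument of zero:

    # my guess (untested): elementwise maximum against zero to mimic ReLU
    act_1 = pm.math.maximum(pm.math.dot(ann_input, weights_in_1) + bias_1, 0)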
Does anyone know whether that's the right fix, or whether there's a more idiomatic way to get ReLU-like behaviour here?
Many thanks