PyMC3 3.7 memory leak

I was dusting off one of my old projects that used a previous version of PyMC3, and the following code crashes the session (on Colab). I was wondering whether this is a potential bug:

import pymc3 as pm
import theano
import theano.tensor as tt

# ann_input = theano.shared(x_train)
# ann_output = theano.shared(y_train[:, None])
with pm.Model() as linear_model:
    ann_input = pm.Data('ann_input', x_train)
    ann_output = pm.Data('ann_output', y_train)
    # Weights from input directly to the output layer (no hidden layer)
    weights_in_1 = pm.Normal('w_in_1', 0, sd=1, shape=(x_train.shape[1], n_out))

    # Multinomial logistic regression: softmax over the 10 digit classes
    mu = pm.math.dot(ann_input, weights_in_1)
    p = tt.nnet.softmax(mu)
    y = pm.Categorical('y', p=p, observed=ann_output, total_size=len(y_train))

The commented-out lines are what I used to use; I was hoping switching to pm.Data would help, but it didn't.
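For what it's worth, the only workaround I can think of is to not put the full dataset into the graph at all and feed minibatches instead. A minimal sketch of what I mean, assuming pm.Minibatch behaves here the way it does in the ADVI examples (the batch_size of 500 is arbitrary):

minibatch_x = pm.Minibatch(x_train, batch_size=500)
minibatch_y = pm.Minibatch(y_train, batch_size=500)

with pm.Model() as minibatch_model:
    weights_in_1 = pm.Normal('w_in_1', 0, sd=1, shape=(x_train.shape[1], n_out))
    p = tt.nnet.softmax(pm.math.dot(minibatch_x, weights_in_1))
    # total_size lets PyMC3 rescale the minibatch log-likelihood to the full dataset
    y = pm.Categorical('y', p=p, observed=minibatch_y, total_size=len(y_train))
    approx = pm.fit(10000, method='advi')

That at least keeps only one batch in memory per step, but it sidesteps the question rather than answers it.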

To reproduce, you can load the data with:

import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale pixel values to [0, 1] (note this upcasts the arrays to float64)
x_train, x_test = x_train / 255.0, x_test / 255.0
# Flatten the 28x28 images into 784-dimensional vectors
x_train = x_train.reshape((len(x_train), -1))
x_test = x_test.reshape((len(x_test), -1))
n_out = 10  # 10 digit classes

Thoughts? Am I using the new API wrong? There are only 7,840 weights (784 inputs × 10 outputs), so this shouldn't crash the system, should it?
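One thing I did check: the weights are tiny, but the training data isn't, and pm.Data copies the whole array into the Theano graph. A quick back-of-envelope check (the float64 upcast comes from the /255.0 division above):

print(x_train.dtype)          # float64 after dividing by 255.0
print(x_train.nbytes / 1e6)   # 60000 * 784 * 8 bytes ≈ 376 MB

So there is a ~376 MB array (plus whatever intermediates the softmax and the Categorical logp allocate) sitting in the graph, but I still wouldn't expect that alone to kill a Colab session.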