Use GP latent variable as input to another function, shape mismatch

I am trying to create a Dirichlet process density regression where the mixture weights are a function of a latent GP. I can get the model to work without the GP, but was hoping for a little more flexibility. When I multiply the gp.prior output into the weights I get a shape mismatch error: it seems like f comes out as 100x100, whereas X is (100, 1).

import numpy as np
import pymc3 as pm

X = np.linspace(0., 1., 100)[:, None]  # inputs, shape (100, 1)
K = 20  # number of mixture components

with pm.Model() as model:
    # latent GP over the single input column of X
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
    gp = pm.gp.Latent(cov_func=cov_func)
    f = gp.prior("f", X=X, shape=(100, 1))

    alpha = pm.Normal('alpha', 0., 1., shape=K)
    beta = pm.Normal('beta', 0., 5., shape=K)

    # norm_cdf and stick_breaking defined as in the PyMC3
    # dependent density regression example
    v = norm_cdf(alpha + beta * f)  # ERROR HERE
    w = pm.Deterministic('w', stick_breaking(v))

ValueError: Input dimension mis-match. (input[0].shape[0] = 20, input[1].shape[0] = 100)

Not sure how to reshape or broadcast this f guy.

Thanks for pointing this out! There seems to be a bug in the default mean_func, Zero: it allocates zeros using only the first dimension of X.shape instead of the full shape, so the mean comes out as (100,) rather than (100, 1). When that (100,) vector is added to the (100, 1) term coming from the covariance, broadcasting expands the sum to (100, 100), which is why you end up with a (100, 100) instead of a (100, 1) shaped f. Could you post this as an issue on GitHub so we can reference it for a fix? Beyond that, using column vectors and then broadcasting on the last axis may also be affected by a related broadcasting issue that we still haven't fixed.
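
As a minimal NumPy sketch of the broadcasting at play (illustrative shapes only, not PyMC3's actual internals):

import numpy as np

mean = np.zeros(100)        # what the buggy Zero returns: only X.shape[0] is used
rot = np.zeros((100, 1))    # stand-in for the (100, 1) term from the covariance

f = mean + rot              # (100,) + (100, 1) broadcasts to (100, 100)
print(f.shape)              # -> (100, 100)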

For now, could you try supplying mean_func=tt.zeros_like and see if that fixes your problem?
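
Something like this, reusing the model from your post (a sketch: tt.zeros_like is callable on X and returns zeros with X's full (100, 1) shape, so the mean should line up with the covariance term):

import theano.tensor as tt

with pm.Model() as model:
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
    # pass a mean function that preserves the full shape of X,
    # sidestepping the Zero bug described above
    gp = pm.gp.Latent(mean_func=tt.zeros_like, cov_func=cov_func)
    f = gp.prior("f", X=X, shape=(100, 1))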

OK will do, thanks.

Using mean_func=tt.zeros_like seems to have gotten the dimensions correct, but they still aren't broadcasting as expected: it looks like the two matrices are being combined with a dot product rather than elementwise. Also, the error now only appears once I start sampling, rather than when declaring the model as before. Edit: Not sure if this is related to the column-vector broadcasting issue you mention. Can I reshape the vector in some way?
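
For what it's worth, one thing I could try (an untested guess on my part, not a confirmed fix) is forcing f into an unambiguous column vector before combining it with alpha and beta:

f_col = f.flatten()[:, None]        # flatten f, then re-add the column axis: shape (100, 1)
v = norm_cdf(alpha + beta * f_col)  # should now broadcast elementwise to (100, K)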

Issue submitted here
