Yes, however the following code would not work:
```python
for i in range(x.shape[0]):
    v = x[i, :]
    v = v[~np.isnan(v)]  # x[i, :] is 1-D, so no axis argument is needed
```
as these are NumPy operations, not Theano ones. Do you know any way to efficiently compute the likelihood for the values that are present while returning 0 for the NaNs? That would handle vectors of different lengths and also avoid a list implementation.
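For reference, here is the masking idea in plain NumPy, using a hypothetical standard-normal likelihood with made-up `mu`/`sigma` values; inside a PyMC3 model the same effect can be achieved on Theano tensors (e.g. via `pm.math.switch`) so the graph stays symbolic.

```python
import numpy as np

# Toy data: rows padded to equal length with NaN.
x = np.array([[0.1, 0.5, np.nan],
              [0.2, np.nan, np.nan]])

# Hypothetical parameters for a normal likelihood.
mu, sigma = 0.0, 1.0

# Element-wise normal log-density; NaNs are first replaced by a dummy value
# so the arithmetic stays finite, then their contribution is zeroed out.
filled = np.nan_to_num(x)
elem_logp = -0.5 * ((filled - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
elem_logp = np.where(np.isnan(x), 0.0, elem_logp)

total_logp = elem_logp.sum()
```

Because the NaN positions contribute exactly 0, every row effectively has its own length without any ragged-list bookkeeping.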
Edit: I am getting the same results using a Potential: `ll = pm.Potential('likelihood', logp(px, drift, cov, inital_pos["CT"].mu, inital_pos["CT"].cov))`. The `logp` function no longer has the `as_op` decorator, but I am still getting the same error.
Edit: After clearing the cache, the Potential method seems to be sampling, although I sometimes get a type error (this has happened throughout many of my computations): `TypeError: expected type_num 11 (NPY_FLOAT32) got 12`. I am using the CUDA GPU to train it.
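The type numbers in that error come straight from NumPy: `NPY_FLOAT32` is `type_num` 11 and `NPY_FLOAT64` is 12, so something is handing the float32-compiled graph a float64 array. A common workaround (a sketch, independent of any particular model) is to cast the input data explicitly:

```python
import numpy as np

# The type numbers in the error message correspond to NumPy dtypes:
print(np.dtype(np.float32).num)  # 11 (NPY_FLOAT32)
print(np.dtype(np.float64).num)  # 12 (NPY_FLOAT64)

# NumPy allocates float64 by default, which a float32-compiled graph rejects,
# so cast observed data (and any shared variables) before handing them over:
data = np.random.randn(100, 3)     # dtype float64 by default
data32 = data.astype(np.float32)   # safe for a float32 graph
```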
Edit v4: Removing `floatX = float32` from my `.theanorc` seems to have fixed it. It is now sampling at around ~16 it/s with Metropolis using the Potential method; I will try the `pm.switch` workaround plus NUTS/ADVI and see how they work.
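For anyone hitting the same mismatch, this is the `.theanorc` fragment in question (the commented line is the one I removed; with it gone, Theano falls back to its float64 default):

```ini
[global]
device = cuda
# floatX = float32   # removing this line avoids the NPY_FLOAT32/float64 clash
```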