gp.predict returns an [m, m] array where m is the number of observation points?

Hi, I’ve got what I think is some strange behaviour when using gp.predict with a marginal likelihood GP implementation.

  1. Calling gp.predict on the training input (X_train in the example) returns an [m, m] array, where m is the number of observations, in which each column is exactly the same. I would expect an [m, ] vector?

  2. There is a dimension-mismatch error when calling predict on an unseen set of inputs whose shape is neither [m, n] (i.e. the size of the original training input) nor [1, n] (i.e. just a single new coordinate). Again, a [1, n] unseen input returns a [1, m] vector of identical values. So, if I have a test set of shape [p, n], I have to pass each of the p inputs into the predict function one at a time.
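The one-point-at-a-time workaround looks roughly like this. This is a sketch with a dummy stand-in for the real gp.predict call (which needs the fitted model and MAP point); predict_one is a hypothetical placeholder used only to make the loop and output shapes concrete:

```python
import numpy as np

# Hypothetical stand-in for `gp.predict(x, mp)` on a single [1, n] input;
# returns a length-1 mean and variance for that one test point.
def predict_one(x_row):
    assert x_row.shape[0] == 1
    return np.zeros(1), np.ones(1)

X_test = np.random.randn(41, 10)

mus, sds = [], []
for i in range(X_test.shape[0]):            # one predict call per test point
    mu_i, var_i = predict_one(X_test[i:i+1, :])
    mus.append(mu_i)
    sds.append(np.sqrt(var_i))

mu = np.concatenate(mus)   # shape (41,)
sd = np.concatenate(sds)   # shape (41,)
```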

Is this as intended? I’ve been using PyMC for a good few years and I’m pretty sure something has gone wrong.

Example:

import pymc as pm  # assuming PyMC v4+ here

def mod(y):
    with pm.Model() as m:
        l = pm.HalfFlat('l', shape=[10,])
        s = pm.HalfFlat('s')
        sn = pm.Exponential('sn', 10)
        cf = s**2 * pm.gp.cov.ExpQuad(10, l)
        gp = pm.gp.Marginal(cov_func=cf)
        y_ = gp.marginal_likelihood('y_', X=X_train, y=y, sigma=sn)
        mp = pm.find_MAP()
    return m, gp, mp

m, gp, mp = mod(y_train)

with m:

    mu, sig = gp.predict(X_test, mp)

X_train.shape = [160, 10]
y_train.shape = [160, 1]
X_test.shape = [41, 10], which produces the error “ValueError: Incompatible Elemwise input shapes [(41, 41), (41, 160)]”
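For reference, the two shapes quoted in that error message are not broadcastable under plain NumPy rules either, so it looks like a straight broadcasting failure somewhere inside the predict computation (my reading, not confirmed):

```python
import numpy as np

# The same two shapes as in the Elemwise error: (41, 41) and (41, 160).
a = np.zeros((41, 41))
b = np.zeros((41, 160))

err = None
try:
    _ = a + b          # NumPy refuses to broadcast these shapes
except ValueError as e:
    err = e

print(err)
```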

Am I doing something wrong?