GP conditional runs into LinAlgError depending on the length and value of Xnew

I know it sounds crazy, but I have a 2D toy model that consistently runs into a LinAlgError in the posterior predictive.

The priors / MAP of the model may be poor, but the behavior is really weird anyway. I will try to post a minimal example tomorrow.

Let these be the x_dense coordinates we are interested in:

[[ 1.  0.]
 [ 2.  0.]
 [ 3.  0.]
 [ 4.  0.]
 [ 5.  0.]
 [ 6.  0.]
 [ 7.  0.]
 [ 8.  0.]
 [ 9.  0.]
 [10.  0.]
 [11.  0.]
 [12.  0.]
 [13.  0.]
 [14.  0.]
 [15.  0.]
 [16.  0.]
 [17.  0.]
 [18.  0.]
 [19.  0.]
 [20.  0.]]
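
For reference, one way to construct these coordinates (the construction itself is not from the original post, only the resulting array is):

import numpy

# 20 points along the first input dimension, second dimension fixed at 0
x_dense = numpy.column_stack([numpy.arange(1.0, 21.0), numpy.zeros(20)])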

With the following code block I vary the slice of x_dense that I pass to gp.conditional; results are in the table below.

import numpy
import pymc3

# pmodel, gp, x_dense and map_ come from the model shown further below
with pmodel:
    # random suffix so the cell can be re-run without name collisions
    name = "x_" + str(numpy.random.randint(300))

    # here goes the slice, e.g. x_dense[:17] for the first row of the table
    gp.conditional(name, x_dense[:17])

    mapp = pymc3.sample_posterior_predictive(
        [map_]*500,
        var_names=[name],
        random_seed=None
    )
| values in x_dense | length | result |
| --- | --- | --- |
| 1, 2, 3, …, 17 | 17 | ok |
| 11, 12, 13, …, 20 | 10 | ok |
| 1, 2, 3, …, 18 | 18 | LinAlgError: Matrix is not positive definite |
| 2, 4, 6, 8, …, 40 | 20 | ok |

What is going on here?

The MAP:

{'ls_log__': array([1.37336708, 1.35685668]),
 'scaling_log__': array(0.02942309),
 'f_rotated_': array([-7.20280197e-01,  8.41126876e-01,  1.10100355e+00,  7.06059847e-01,
         2.85239838e-01,  8.83073170e-02,  2.36590226e-02,  2.51939297e-03,
        -6.82158557e-01,  7.17695047e-02,  2.60334158e-01,  1.74976402e-01,
         8.29174538e-02,  3.42669797e-02,  8.01378521e-03,  3.58981782e-05,
        -4.77951398e-01, -1.97249009e-01, -8.20534635e-02, -4.10153437e-02,
        -2.51902453e-02, -1.57064530e-02, -8.63831120e-03, -4.10985233e-03,
        -1.94405281e-01, -1.10577551e-01, -6.75448527e-02, -4.59307815e-02,
        -3.08400784e-02, -1.87370135e-02, -1.06407581e-02, -4.84606050e-03,
        -4.37792231e-02, -2.70468577e-02, -2.03970688e-02, -1.49711851e-02,
        -1.08103664e-02, -7.79686435e-03, -5.01426713e-03, -2.41684652e-03,
        -4.54103131e-03, -3.37221765e-03, -2.64724119e-03, -2.18499400e-03,
        -1.86947370e-03, -1.49076209e-03, -1.01487449e-03, -5.15779913e-04]),
 'sigma_log__': array(2.04416487),
 'ls': array([3.94862367, 3.88396554]),
 'scaling': array(1.02986023),
 'f': array([-0.74178828, -0.49821589, -0.13761005,  0.29271285,  0.72464196,
         1.08677275,  1.3236247 ,  1.40950335, -0.89348184, -0.64721528,
        -0.27187337,  0.18313515,  0.64557755,  1.0390158 ,  1.30337567,
         1.40989796, -1.02368644, -0.79302759, -0.42747921,  0.02463935,
         0.49128769,  0.89534669,  1.17536666,  1.30078919, -1.11323482,
        -0.91423068, -0.58161583, -0.15959206,  0.28428714,  0.67668206,
         0.95819353,  1.09799547, -1.14677477, -0.99004016, -0.7078451 ,
        -0.33795226,  0.06009561,  0.4205491 ,  0.68913211,  0.83632756,
        -1.11706588, -1.00612816, -0.78350574, -0.47926274, -0.14271285,
         0.17057041,  0.41369215,  0.55983278]),
 'sigma': array(7.72270638)}

The model:

import numpy
import pymc3

with pymc3.Model() as pmodel:
    ls = pymc3.Lognormal('ls', mu=numpy.log(4), sd=0.1, shape=(2,))
    scaling = pymc3.Lognormal('scaling', mu=numpy.log(1), sd=0.1)

    mean_func = pymc3.gp.mean.Zero()
    cov_func = scaling**2 * pymc3.gp.cov.ExpQuad(
        input_dim=2,
        ls=ls
    )
    gp = pymc3.gp.Latent(mean_func=mean_func, cov_func=cov_func)

    # X and Y are the training inputs/observations, defined elsewhere
    # (judging by the MAP above, X has shape (48, 2))
    f = gp.prior('f', X=X)

    # inform observation noise prior from reference data
    sigma = pymc3.Lognormal('sigma', mu=numpy.log(0.5))

    # set up likelihood
    L = pymc3.Normal(
        'L',
        mu=f, sigma=sigma,
        observed=Y - numpy.mean(Y)
    )

I’m also getting this error in a notebook that ran just fine in February 2020 (probably with PyMC3 v3.8).

Here’s a Jupyter notebook with an example: https://discourse.pymc.io/t/gp-conditional-runs-into-linalgerror-depending-on-the-length-and-value-of-xnew/7583

Try adding a small amount of diagonal white noise to your covariance function. You can either do that explicitly using the WhiteNoise kernel or wrap your cov_func with pm.gp.util.stabilize, which adds 1e-6 diagonal white noise.

Basically the matrix inversions necessary for GP predictions are numerically unstable, so ensuring the diagonal isn’t too close to zero helps.
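
Here is a minimal sketch of what I mean, reusing the kernel from your model (same variable names; the 1e-6 jitter value is just a common default):

import numpy
import pymc3

with pymc3.Model() as pmodel:
    ls = pymc3.Lognormal('ls', mu=numpy.log(4), sd=0.1, shape=(2,))
    scaling = pymc3.Lognormal('scaling', mu=numpy.log(1), sd=0.1)

    # the WhiteNoise term puts a small positive constant on the diagonal
    # of every covariance matrix built from the kernel, which keeps the
    # Cholesky factorizations inside gp.prior/gp.conditional stable
    cov_func = (
        scaling**2 * pymc3.gp.cov.ExpQuad(input_dim=2, ls=ls)
        + pymc3.gp.cov.WhiteNoise(1e-6)
    )
    gp = pymc3.gp.Latent(mean_func=pymc3.gp.mean.Zero(), cov_func=cov_func)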


Thanks @BioGoertz!
pm.gp.util.stabilize seems to be broken, but scaling**2 * ExpQuad() + WhiteNoise(1e-6) worked.

It is still a strange regression, but it looks like this trick will at least allow us to move forward with this model.
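
For reference, pm.gp.util.stabilize appears to operate on an already-evaluated covariance matrix rather than on a covariance function object (assuming the PyMC3 3.x implementation), which would explain why wrapping cov_func with it failed here:

# stabilize expects a (Theano) matrix, not a Covariance object,
# so it can only be applied after evaluating the kernel:
K = cov_func(X)                         # kernel evaluated into a matrix
K_stable = pymc3.gp.util.stabilize(K)   # roughly K + 1e-6 * identity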