Using a GP to separate the contributing effects to height data

I hope I understood your problem correctly. From what I can gather, you're sampling a marginal GP over the coordinates expanded across the full set of samples/time-series (assuming the period goes over the 50 coordinates 50 times), which explains the slow sampling. If you want to infer the influence of the factors on the z variable, you may want to use a latent GP instead, evaluated only on the unique coordinates and then indexed back to the observations. I have drawn a lot of inspiration for this type of problem from this post: Modeling spatial data with Gaussian processes in PyMC - PyMC Labs. So, if I understood correctly, maybe this revised version of your model could help:

import pymc as pm
import numpy as np
import arviz as az
import matplotlib.pyplot as plt

# toy dataset setup

np.random.seed(27)

npoints = 50
bound = 10

xcoords = np.linspace(-bound, bound, npoints)
ycoords = np.linspace(-bound, bound, npoints)

xmesh, ymesh = np.meshgrid(xcoords, ycoords)

# height data is a quadratic portion added to a periodic portion
zmesh = (xmesh**2 + ymesh**2)/bound + np.cos(2*np.pi*xmesh / (bound/5))
noise = np.random.random(zmesh.shape)*(.1*bound)

z = (zmesh+noise).flatten()

xy = np.c_[xmesh.flatten(), ymesh.flatten()]
z = (z - z.mean()) / z.std()

period_labels = np.array(["p" + str(p) for p in range(len(xcoords))])
period_idx = np.repeat(np.arange(len(xcoords)), xy.shape[0] // len(xcoords))  # integer repeats
samples_idx = np.arange(xy.shape[0])

coords = {"samples":samples_idx, "period":period_labels, "feature":["xcoor", "ycoor"]}

X = np.array([xcoords,ycoords]).T

with pm.Model(coords=coords) as model:
    t_idx = pm.ConstantData("period_idx", period_idx, dims="samples")
    Xs = pm.ConstantData("Xs", X, dims=("period", "feature"))
        
    l = pm.Gamma("l", mu=100, sigma=50)
    k = pm.gp.cov.Matern52(input_dim=2, ls=l)
    latent_s = pm.gp.Latent(cov_func=k)
    gamma_z = latent_s.prior("gamma_z", Xs, dims="period")
    gamma_l = pm.Normal("gamma_l", 0, 1)
    gamma_s = pm.HalfNormal("gamma_s", 1)
    gamma = pm.Deterministic("gamma", gamma_l + gamma_s * gamma_z, dims="period")  # spatial GP (non-centered: offset + scale * unit GP)
    
    epsilon = pm.HalfNormal("epsilon", 1)
    
    y = pm.Normal("y", mu=gamma[t_idx], sigma=epsilon, observed=z, dims="samples") 
    
with model:
    idata = pm.sample(1000, tune=1000, cores=4, target_accept=0.9, random_seed=27)

summ = az.summary(idata, var_names=['l', "gamma_l", "epsilon"])
print(summ)

Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [l, gamma_z_rotated_, gamma_l, gamma_s, epsilon]
100.00% [8000/8000 16:41<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 1023 seconds.

           mean     sd  hdi_3%  hdi_97%  ...  mcse_sd  ess_bulk  ess_tail  r_hat
l        23.355  5.580  13.520   34.120  ...    0.084    2209.0    2643.0    1.0
gamma_l   0.722  0.879  -0.873    2.448  ...    0.012    4077.0    2392.0    1.0
epsilon   0.723  0.010   0.705    0.744  ...    0.000    6613.0    2555.0    1.0

[3 rows x 9 columns]
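
If you want to check what the latent field recovered, you can plot the posterior of gamma over the input points. A minimal sketch, using the idata, X, and plt from above (the 94% band here uses plain posterior quantiles rather than an HDI):

# posterior mean and a 94% quantile band for the latent spatial effect
gamma_post = idata.posterior["gamma"]
gamma_mean = gamma_post.mean(dim=("chain", "draw"))
band_lo = gamma_post.quantile(0.03, dim=("chain", "draw"))
band_hi = gamma_post.quantile(0.97, dim=("chain", "draw"))

fig, ax = plt.subplots()
ax.plot(X[:, 0], gamma_mean, label="posterior mean of gamma")
ax.fill_between(X[:, 0], band_lo, band_hi, alpha=0.3, label="94% quantile band")
ax.set_xlabel("xcoor")
ax.set_ylabel("gamma")
ax.legend()
plt.show()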

Maybe you could expand this to some sort of LGCP model, or something that allows you to separate the contributions of the temporal and spatial background processes; the sketch below shows one possible starting point for the latter. I hope it's useful.
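
As a rough illustration of that additive idea, here is an untested sketch that reuses coords, X, period_idx, and z from above and sums two latent GPs: a smooth Matern52 trend and a Periodic component acting on the x coordinate. The lengthscale and period priors are placeholders you would want to adapt to your data:

with pm.Model(coords=coords) as additive_model:
    t_idx = pm.ConstantData("period_idx", period_idx, dims="samples")
    Xs = pm.ConstantData("Xs", X, dims=("period", "feature"))

    # smooth trend over both coordinates (e.g. the quadratic bowl)
    l_trend = pm.Gamma("l_trend", mu=10, sigma=5)
    k_trend = pm.gp.cov.Matern52(input_dim=2, ls=l_trend)
    gp_trend = pm.gp.Latent(cov_func=k_trend)
    f_trend = gp_trend.prior("f_trend", Xs, dims="period")

    # periodic component on the x coordinate only (e.g. the cosine ripple)
    l_per = pm.Gamma("l_per", mu=1, sigma=0.5)  # placeholder prior
    period = pm.Gamma("period", mu=4, sigma=1)  # placeholder prior
    k_per = pm.gp.cov.Periodic(input_dim=2, period=period, ls=l_per, active_dims=[0])
    gp_per = pm.gp.Latent(cov_func=k_per)
    f_per = gp_per.prior("f_per", Xs, dims="period")

    # the two latent contributions add up to the mean surface
    mu = pm.Deterministic("mu", f_trend + f_per, dims="period")
    epsilon = pm.HalfNormal("epsilon", 1)
    y = pm.Normal("y", mu=mu[t_idx], sigma=epsilon, observed=z, dims="samples")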
