As part of the PyMCon Web Series, we will be holding a special edition of PyMC office hours tomorrow with a specific focus on Gaussian processes. @bwengals will be there to answer questions, and we will also be joined by recent PyMCon Web Series speaker @DanhPhan, who recently presented on Multi-Output Gaussian Processes. As a special guest, we will also have PyMC BDFL @fonnesbeck. All three are well-versed in both GPs and PyMC, so if you have questions about Gaussian processes, do not miss this!
Registration is available on Meetup.
This special GP-focused edition of office hours will be held on March 22nd (or 23rd depending on your time zone) at the time listed below. Office hours will last about an hour, so don’t worry if you can’t make it at exactly this time!
UTC - 20:00 (March 22nd)
New York - 6pm (March 22nd)
Seattle - 3pm (March 22nd)
Sydney - 9am (March 23rd)
Tokyo - 7am (March 23rd)
Office hours will be held on Zoom.
Meeting link: Launch Meeting - Zoom
Meeting ID: 872 9690 4620
Passcode: 957697
Please note that participants are expected to abide by PyMC’s Code of Conduct.
Thanks for the responses during the Q&A @bwengals and @fonnesbeck! Using your advice, I got PyMC GP functionality to work with custom distance matrices - it was as simple as subclassing covariance kernels and making them use a precomputed covariance matrix instead of doing the distance calculations themselves. It feels “hacky”, but it works when writing the distance function as a kernel is difficult. Here’s the notebook with examples, in case you’re interested: stat_rethinking_2023/GPs-as-prior-for-parameter (islands & tools).ipynb at main · kamicollo/stat_rethinking_2023 · GitHub
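In case it helps others, here is a minimal sketch of what that subclassing approach can look like. Everything below is illustrative (the class name PrecomputedCov is hypothetical, and it assumes X holds integer indices into a precomputed matrix K); the real version is in the notebook linked above.

import numpy as np
import pytensor.tensor as pt
import pymc as pm

class PrecomputedCov(pm.gp.cov.Covariance):
    # Hypothetical kernel that skips distance calculations entirely and
    # just looks up entries of a precomputed covariance matrix K.
    # X is assumed to contain integer row/column indices into K.
    def __init__(self, K):
        super().__init__(input_dim=1)
        self.K = pt.as_tensor_variable(K)

    def full(self, X, Xs=None):
        idx = pt.as_tensor_variable(X).ravel().astype("int64")
        if Xs is None:
            return self.K[idx][:, idx]
        idx_s = pt.as_tensor_variable(Xs).ravel().astype("int64")
        return self.K[idx][:, idx_s]

    def diag(self, X):
        idx = pt.as_tensor_variable(X).ravel().astype("int64")
        return pt.diag(self.K)[idx]

# Usage: pass index positions as the "inputs" instead of coordinates.
K_fixed = np.exp(-np.abs(np.subtract.outer(np.arange(10.0), np.arange(10.0))))
idx = np.arange(10)[:, None]
with pm.Model():
    gp = pm.gp.Latent(cov_func=PrecomputedCov(K_fixed))
    f = gp.prior("f", X=idx)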
While trying to replicate Statistical Rethinking examples fully, I came across one other issue that I was not able to resolve: how would I go about capturing the covariance matrix computed by the kernel in the trace? I thought simply doing this would work - but it does not; the trace contains an identity matrix instead.
with pm.Model():
    ...
    cov_func = pm.gp.cov.Exponential(input_dim=1, ls=ls)
    gp = pm.gp.Latent(cov_func=cov_func)
    gp.prior("f", X=myVariable)
    pm.Deterministic("K", cov_func(myVariable))
    ...
How would I go about making the above example work? I know I can simply replicate the formula of the Exponential kernel myself, but I wondered if there’s a better way (especially for situations where the kernel is more complex, e.g. additive).
Thank you!
Sorry for the super late reply. It looks like your example should work… if ls is super small and myVariable is equispaced, like np.linspace(0, 10, 100) or something, you’ll get the identity matrix.
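For example, a quick check (not from the thread; the lengthscale is an illustrative number) that evaluates the kernel at a tiny ls:

import numpy as np
import pymc as pm

# With ls far smaller than the ~0.1 spacing of linspace(0, 10, 100),
# every off-diagonal kernel value decays to ~0, so cov(X) is ~identity.
X = np.linspace(0, 10, 100)[:, None]
cov = pm.gp.cov.Exponential(1, ls=0.001)
print(np.round(cov(X).eval()[:3, :3], 3))  # approximately np.eye(3)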
This worked for me:
import pymc as pm
import numpy as np

with pm.Model() as model:
    ls = pm.Gamma("ls", mu=10, sigma=1)
    cov = pm.gp.cov.Exponential(1, ls=ls)
    X = np.arange(100)[:, None]
    K = pm.Deterministic("K", cov(X))
    tr = pm.sample()
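Assuming pm.sample returns an InferenceData object (the default in recent PyMC versions), the sampled covariance matrices can then be pulled out of the trace, e.g.:

K_draws = tr.posterior["K"]               # one 100x100 matrix per (chain, draw)
K_mean = K_draws.mean(("chain", "draw"))  # posterior-mean covariance matrix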