Currently, I’m considering using GP regression for a project of mine. The following script is what I typically use to train a GP regression model with PyMC3:
```python
import pymc3 as pm

# X: feature matrix, y: target vector
with pm.Model() as model:
    ℓ = pm.Gamma("ℓ", alpha=2, beta=1)
    η = pm.HalfCauchy("η", beta=5)
    cov = η ** 2 * pm.gp.cov.Matern52(X.shape[1], ℓ)
    gp = pm.gp.Marginal(cov_func=cov)
    σ = pm.HalfCauchy("σ", beta=1)
    y_ = gp.marginal_likelihood("y", X=X, y=y, noise=σ)
    map_trace = [pm.find_MAP()]
```
I’m interested in understanding how to perform automatic relevance determination (ARD) for GP regression with PyMC3. With basic regression techniques, we could place hyper-priors on the feature coefficients instead of using deterministic values. However, with GP regression I don’t see a way to incorporate such hyper-priors into the kernel function.
Do you have any suggestions? I’d appreciate any help on performing automatic feature selection within the model to minimize the effect of unimportant features.