You can use the `Marginal` Gaussian process model from the `pymc3.gp` submodule.

```
import numpy as np
import theano.tensor as tt
import pymc3 as pm

X = np.linspace(0, 1, num=5)[:, None]         # Your input points
y = np.exp(X.squeeze()) + np.random.randn(5)  # Your observed data
Xnew = np.linspace(0, 1, num=10)[:, None]     # New points to predict at

with pm.Model() as model:
    # Priors on the length scale and amplitude of the Matern52 kernel
    ℓ = pm.Gamma("ℓ", alpha=1, beta=2)
    η = pm.HalfCauchy("η", beta=3)
    # Covariance function for the GP
    cov = η**2 * pm.gp.cov.Matern52(1, ℓ)
    gp = pm.gp.Marginal(cov_func=cov)
    # With is_observed=False, f is kept as a latent variable
    # instead of being marginalized out
    f = gp.marginal_likelihood('f', X=X, y=y, noise=1e-2, is_observed=False)
    # Prior on the observation noise
    σ = pm.HalfCauchy("σ", beta=5)
    # Likelihood: y is modelled through a log-link on f
    ypred = pm.Normal('ypred', mu=tt.exp(f), sigma=σ, observed=y)
    # fstar is the conditional P(f* | f, X, y)
    fstar = gp.conditional('fstar', Xnew=Xnew, given={'X': X, 'y': ypred, 'f': f})
    # Distribution over predictions at the new points
    ystar = pm.Normal('ystar', mu=tt.exp(fstar + 1e-4), sigma=σ, shape=Xnew.shape[0])
    # Sample!
    trace = pm.sample(1000, init="advi", chains=1)
```

Other kernels are listed in the Mean and Covariance Functions section of the docs. If you have a large dataset, consider the `MarginalSparse` model instead; see the MarginalSparse GP docs.