I am trying to derive the aleatoric and epistemic uncertainty from a pymc3 model and I’m not sure how to do this.
For example, given this model (taken from glm-linear):

```python
import numpy as np
import pymc3 as pm

size = 200
true_intercept = 1
true_slope = 2

x = np.linspace(0, 1, size)
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=0.5, size=size)

with pm.Model() as model:
    sigma = pm.HalfCauchy('sigma', beta=10, testval=1.)
    intercept = pm.Normal('Intercept', 0, sigma=20)
    x_coeff = pm.Normal('x', 0, sigma=20)

    mu_likelihood = intercept + x_coeff * x
    likelihood = pm.Normal('y', mu=mu_likelihood, sigma=sigma, observed=y)

    trace = pm.sample(3000, cores=2)
```
I would like to know the uncertainty over the `mu_likelihood` variable (epistemic uncertainty), as opposed to the uncertainty over the `likelihood` variable alone (aleatoric uncertainty). As an aside, the motivation is to separate the uncertainty the model has about its own predictions from the uncertainty inherent in the system being modelled.
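To make the distinction concrete: since `mu_likelihood` is a deterministic function of `intercept` and `x_coeff`, its posterior draws can be reconstructed from the trace (or stored directly by wrapping it in `pm.Deterministic`). Below is a minimal numpy-only sketch of the decomposition I have in mind; the `*_draws` arrays here are synthetic stand-ins for `trace['Intercept']`, `trace['x']`, and `trace['sigma']`, not actual output from the model above:

```python
import numpy as np

# Synthetic stand-ins for posterior draws (assumed shapes/values, for illustration only)
rng = np.random.default_rng(0)
n_draws = 4000
intercept_draws = rng.normal(1.0, 0.07, n_draws)      # would be trace['Intercept']
slope_draws = rng.normal(2.0, 0.12, n_draws)          # would be trace['x']
sigma_draws = np.abs(rng.normal(0.5, 0.03, n_draws))  # would be trace['sigma']

x_new = np.linspace(0, 1, 5)

# Posterior draws of the regression mean at each x: shape (n_draws, len(x_new))
mu_draws = intercept_draws[:, None] + slope_draws[:, None] * x_new[None, :]

# Epistemic: spread of the mean function across posterior draws
epistemic_sd = mu_draws.std(axis=0)

# Aleatoric: observation noise, summarised by the posterior mean of sigma
aleatoric_sd = sigma_draws.mean()

# Total predictive spread, combining the two variance components
total_sd = np.sqrt(epistemic_sd**2 + aleatoric_sd**2)
```

With more data the epistemic component should shrink while the aleatoric component stays near the true noise scale, which is exactly the separation I am after.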
I’ve seen this done with TensorFlow Probability, and it would be good to know whether something similar can be done within the pymc3 framework.
Thanks for any help!