I’m a newbie and I have a question about how to set up a problem the "PyMC3 way".
Let’s take a simple linear model as an example:
```python
# From https://docs.pymc.io/notebooks/GLM-linear.html
from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
import theano

np.random.seed(123)

size = 200
true_intercept = 1
true_slope = 2

x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)

x_shared = theano.shared(x)

with Model() as model:  # model specifications in PyMC3 are wrapped in a with-statement
    # Define priors
    sigma = HalfCauchy('sigma', beta=10, testval=1.)
    intercept = Normal('Intercept', 0, sd=20)
    x_coeff = Normal('x', 0, sd=20)

    # Define likelihood
    likelihood = Normal('y', mu=intercept + x_coeff * x_shared,
                        sd=sigma, observed=y)

    # Inference!
    trace = sample(3000, cores=2)  # draw 3000 posterior samples using NUTS sampling
```
Now if I get new observations for `x`, I know that I can infer `y` by drawing samples with my model and trace:
```python
x_shared.set_value(np.array([0.55]))
ppc = sample_ppc(trace, model=model, samples=500)
plt.boxplot(ppc['y'])  # Ok, cool
```
Now let’s say I want to use the trace to make an inference about `x`, given some observation for `y`.
I’m not even sure how I would set this up. Do I set up a new model for this and use the existing trace, or is there some clever Theano symbolic magic that I can use? Or do I have to calculate an inverse covariance matrix, use a Cholesky decomposition (and so forth) from scratch?
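For concreteness, the only "from scratch" idea I’ve come up with so far is inverting the regression draw-by-draw outside of PyMC3. Here is a numpy-only sketch of what I mean (hypothetical: it assumes a flat prior on `x`, and the arrays `intercept_s`, `slope_s`, `sigma_s` stand in for the posterior samples I’d pull out of `trace`):

```python
import numpy as np

rng = np.random.RandomState(0)

# Stand-ins for posterior samples, e.g. trace['Intercept'],
# trace['x'], trace['sigma'] from the model above
intercept_s = rng.normal(1.0, 0.1, size=3000)
slope_s = rng.normal(2.0, 0.1, size=3000)
sigma_s = np.abs(rng.normal(0.5, 0.05, size=3000))

y_obs = 2.1  # a single new observation of y

# Invert y = a + b*x + eps for each posterior draw:
# x = (y_obs - a - eps) / b, with eps ~ Normal(0, sigma)
eps = rng.normal(0.0, sigma_s)
x_samples = (y_obs - intercept_s - eps) / slope_s
```

With the true values above, `x_samples` should concentrate around (2.1 - 1) / 2 = 0.55, but this feels like I’m reimplementing the machinery by hand, which is why I’m asking about the idiomatic way.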
What’s the PyMC3 way to approach this, so that I can take advantage of the same trace (and maybe the same model) to use observations of `x` to infer `y`, and observations of `y` to infer `x`?
Thanks much for the help! I’ve enjoyed playing with the package.