I’m not sure I understand what this is supposed to mean.
Here is a model (this assumes X_train is an (n, 3) array of predictors and y_train the corresponding vector of n outcomes):
import pymc3 as pm

with pm.Model() as linear_model:
    # Intercept
    intercept = pm.Normal('Intercept', mu=0, sd=10)
    # Slopes, one per predictor column
    slope1 = pm.Normal('slope1', mu=0, sd=10)
    slope2 = pm.Normal('slope2', mu=0, sd=10)
    slope3 = pm.Normal('slope3', mu=0, sd=10)
    # Standard deviation of the observation noise
    sigma = pm.HalfNormal('sigma', sd=10)
    # Estimate of the mean
    mean = (intercept +
            slope1 * X_train[:, 0] +
            slope2 * X_train[:, 1] +
            slope3 * X_train[:, 2])
    # Observed values
    Y_obs = pm.Normal('Y_obs',
                      mu=mean,
                      sd=sigma,
                      observed=y_train)
    # Sample from the posterior
    linear_trace = pm.sample()
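If you want a self-contained example to run the model on, a minimal synthetic setup could look like the following (defined before the with block above; the shapes are the only real requirement here, and the coefficient values are made up purely for illustration; in practice X_train, y_train, X_test, and y_test come from your own train/test split):

import numpy as np

rng = np.random.RandomState(0)
# 100 training rows and 20 test rows, 3 predictors each
X_train = rng.normal(size=(100, 3))
true_coefs = np.array([1.5, -2.0, 0.5])  # hypothetical ground truth
y_train = 0.3 + X_train @ true_coefs + rng.normal(scale=1.0, size=100)
X_test = rng.normal(size=(20, 3))
y_test = 0.3 + X_test @ true_coefs + rng.normal(scale=1.0, size=20)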
Now you can use X_test to make predictions about y_test. But how you do this will depend strongly on what exactly you are looking for. Presumably, you are interested in the predictions associated with the individual observations in the test set.
# which observation from the test set do we want to inspect?
obs_index = 0
ppc = (linear_trace['Intercept'] +
       linear_trace['slope1'] * X_test[obs_index, 0] +
       linear_trace['slope2'] * X_test[obs_index, 1] +
       linear_trace['slope3'] * X_test[obs_index, 2])
The variable ppc will then contain one predicted value per posterior draw for the relevant observation (i.e., the one whose true value sits in y_test[obs_index]). Strictly speaking, this is the posterior distribution of the mean prediction; it does not include the observation noise captured by sigma. What you do from there depends on your needs.
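For example, a point prediction and a credible interval fall straight out of those samples (the 95% bounds below are just one common choice, not the only one):

import numpy as np

point_prediction = ppc.mean()                    # posterior mean prediction
lower, upper = np.percentile(ppc, [2.5, 97.5])   # 95% credible interval

# ppc only reflects uncertainty about the mean; to simulate full
# posterior predictive draws for a new observation, add observation
# noise using the posterior samples of sigma:
ppc_noisy = ppc + np.random.normal(scale=linear_trace['sigma'])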