With the following linear regression “toy” example I am trying to understand posterior predictive sampling:
```python
import pymc3 as pm
import theano.tensor as tt

k = 4
with pm.Model() as mdl:
    sigma = pm.HalfFlat('sigma', shape=1)
    beta = pm.Flat('beta', shape=(k, 1))
    mu = tt.dot(X, beta)
    yobs = pm.Normal('yobs', mu=mu, sigma=sigma, observed=y)
    trace = pm.sample(100, tune=500, target_accept=0.95, model=mdl)
    post_pred = pm.sample_posterior_predictive(trace, model=mdl)
```
Here `y` is a NumPy array with shape (41, 1) and `X` is a (41, 4) NumPy matrix.
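For reproducibility, here is a minimal sketch of how such data could be generated (the true coefficients and noise level are hypothetical; any (41, 4) design matrix would do):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(41, 4))                          # (41, 4) design matrix
beta_true = np.array([[1.0], [-2.0], [0.5], [3.0]])   # hypothetical coefficients
y = X @ beta_true + rng.normal(scale=0.3, size=(41, 1))  # (41, 1) targets
print(X.shape, y.shape)  # (41, 4) (41, 1)
```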
Why does `post_pred['yobs']` have a shape of (400, 41, 1)? I was expecting a shape of (400, 1): one new y for each of the 400 posterior samples in `trace`. Can the second dimension (41) be set to some other value? In other words, how can `sample_posterior_predictive()` be used to sample predictions from N(X_new.dot(beta_PP), sigma_PP) (with X_new: new input matrix; beta_PP, sigma_PP: posterior samples)?
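To make the question concrete, here is a plain NumPy sketch of the sampling I would like `sample_posterior_predictive()` to do for a new input matrix. The arrays below are stand-ins with hypothetical shapes; in the real model, `beta_pp` and `sigma_pp` would come from `trace['beta']` and `trace['sigma']`, and `X_new` would be my new (m, 4) design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, k, m = 400, 4, 5
beta_pp = rng.normal(size=(n_samples, k, 1))            # stand-in for trace['beta']
sigma_pp = rng.uniform(0.5, 1.5, size=(n_samples, 1))   # stand-in for trace['sigma']
X_new = rng.normal(size=(m, k))                         # hypothetical new inputs

# One predictive draw per posterior sample: y_new ~ N(X_new @ beta, sigma)
y_new = np.stack([
    rng.normal(X_new @ b, s) for b, s in zip(beta_pp, sigma_pp)
])
print(y_new.shape)  # (400, 5, 1): one (m, 1) prediction per posterior sample
```

Is there a way to get this directly from `sample_posterior_predictive()` instead of looping over the trace by hand?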