Thanks! I’ll try to implement this and report back.
I’m relatively new to both PyMC3 and Bayesian methods in general. Could you elaborate on how the approach above, something along the lines of:
```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt

# Define forward model (wrapped through theano)
with model:
    R, A, M = PDD_forward.forward(z_obs,
                                  f_s_prior,
                                  C_prior,
                                  f_r_prior,
                                  grad_a,
                                  A_m_prior)
    # Observation matrix (Mx3)
    mu_obs = np.array([R_obs, A_obs, M_obs]).T
    # "Prediction" matrix (Mx3)
    mu_pred = tt.transpose(tt.stack([R, A, M]))
    # Unclear how to formulate the covariance matrix;
    # here, independent variances on the diagonal
    individual_variances = pm.Exponential("individual_variances", ..., shape=3)
    cov = tt.eye(3) * individual_variances
    # Likelihood
    vals = pm.MvNormal("vals", mu=mu_pred, cov=cov, observed=mu_obs)
```
differs from something like:
```python
with model:
    R, A, M = PDD_forward.forward(z_obs,
                                  f_s_prior,
                                  C_prior,
                                  f_r_prior,
                                  grad_a,
                                  A_m_prior)
    R_sigma = pm.HalfCauchy("R_sigma", ...)
    A_sigma = pm.HalfCauchy("A_sigma", ...)
    M_sigma = pm.HalfCauchy("M_sigma", ...)
    # Observation matrix (Mx3)
    mu_obs = np.array([R_obs, A_obs, M_obs]).T
    # "Prediction" matrix (Mx3)
    mu_pred = tt.transpose(tt.stack([R, A, M]))
    # Vector of per-column standard deviations (broadcasts against the Mx3 mu)
    sigmas = tt.stack([R_sigma, A_sigma, M_sigma])
    # Likelihood
    vals = pm.Normal("vals", mu=mu_pred, sigma=sigmas, observed=mu_obs)
```
The first code block uses an MvNormal likelihood, whereas the second uses independent Normals, but both seem to accept an M x 3 matrix for mu and observed. From some early testing, they appear to produce similar results.
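That match would make sense to me if a purely diagonal covariance just factorizes the MvNormal density into independent Normals. Here's a small scipy-based sanity check of that intuition, outside PyMC3 entirely (toy numbers, variable names purely illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
mu = rng.normal(size=3)        # one row of mu_pred
var = rng.exponential(size=3)  # diagonal of cov
x = rng.normal(size=3)         # one row of mu_obs

# MvNormal with a purely diagonal covariance...
logp_mv = multivariate_normal(mean=mu, cov=np.diag(var)).logpdf(x)
# ...versus three independent Normals with matching variances
logp_indep = norm(loc=mu, scale=np.sqrt(var)).logpdf(x).sum()

assert np.isclose(logp_mv, logp_indep)
```

So am I right that the MvNormal version would only start to differ once the covariance has off-diagonal terms (e.g. with something like an LKJ prior via pm.LKJCholeskyCov)?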
Appreciate the help!