Evaluating log likelihood using estimates from find_MAP results

Thanks, Junpeng. I took a look at Bob Carpenter’s case study that you provided. If you don’t mind, I’d like to ask some questions about how best to translate the moral of that study into PyMC3.

In this model, I am regressing latent variables onto observed covariates. For example, I use sex, weight, and kidney function to estimate a parameter called CL. Because CL is only physically meaningful when it is positive, I model it as CL = \exp(\mathbf{x}\beta + z\sigma).

I do something like this…

    # Intercept (on the log scale)
    log_CL = pm.Normal("log_CL", mu=tt.log(3.3), sigma=0.25)

    # Coefficients for the regression
    betas_CL = pm.Normal("betas_CL", mu=0, sigma=0.05, shape=X.shape[1])

    # Standardized random effect per patient
    z_CL = pm.Normal("z_CL", mu=0, sigma=1, shape=len(np.unique(subject_ids)))

    # Population-level noise for CL
    s_CL = pm.Lognormal("s_CL", mu=tt.log(0.2), sigma=0.1)

    # Index with subject_ids (assumed 0-based integer codes) so each
    # observation picks up its own subject's random effect
    CL = tt.exp(log_CL + pm.math.dot(X, betas_CL) + z_CL[subject_ids] * s_CL)
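
For context, the indexing pattern in the last line can be sketched in plain NumPy (hypothetical data; `subject_ids` assumed to be 0-based integer codes):

```python
import numpy as np

# Hypothetical data: 6 observations from 3 subjects
subject_ids = np.array([0, 0, 1, 1, 2, 2])

z_CL = np.array([0.5, -1.0, 2.0])  # one standardized effect per subject
s_CL = 0.2

# Indexing with subject_ids (rather than np.unique(subject_ids))
# broadcasts each subject's effect to all of that subject's observations
per_obs_effect = z_CL[subject_ids] * s_CL
```

With this indexing, `per_obs_effect` has one entry per observation, which is what the dot product with `X` expects.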

Because of this parameterization, should I apply a Jacobian correction?

I’m inclined to say no, since I am first sampling the parameters on the unconstrained scale and then deterministically transforming them (as per the Stan Manual).
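
To make the distinction concrete, here is a small SciPy check (hypothetical values for `mu`, `sigma`, and `x`): placing the density directly on CL (a lognormal) differs from placing it on log CL (a normal) by exactly the log-Jacobian of the exp transform. When you sample `log_CL` and then compute `CL = exp(...)` as a derived quantity, the normal density on the log scale is already the correct one, so no extra correction is applied.

```python
import numpy as np
from scipy import stats

mu, sigma, x = np.log(3.3), 0.25, 4.1  # hypothetical values

# Density placed directly on CL: the lognormal pdf includes the Jacobian
logp_on_CL = stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu))

# Density placed on log CL, with CL = exp(log CL) computed afterwards
logp_on_log_CL = stats.norm.logpdf(np.log(x), loc=mu, scale=sigma)

# The two differ exactly by the log-Jacobian of exp, i.e. -log(x)
log_jacobian = -np.log(x)
```

So the correction only enters if you want the density *of CL itself*; sampling on the unconstrained scale and transforming afterwards needs no adjustment.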