I read the post, but it seems it does not answer my question. In that post, the goal is to obtain the posterior distribution of three parameters (m, q and sigmay), while four variables ('logEiso', 'logEpeak_i', 'logErrEiso', and 'logErrEpeak_i') are the observed data.
In the final model, priors are given to the three parameters:
m = pm.Uniform('m',0.,1.)
q = pm.Uniform('q',0.,5.)
sigmay = pm.Uniform('sigmay',0.,1.)
The four variables are passed in as the observed argument in this way:
observed={'logEiso': logEiso, 'logEpeak_i': logEpeak_i,
          'logErrEiso': logErrEiso, 'logErrEpeak_i': logErrEpeak_i}
To do this, the likelihood is defined as:
def likelihoodMCMC(logEiso, logEpeak_i, logErrEiso, logErrEpeak_i):
    # total variance per point: intrinsic scatter plus the y error and the slope-scaled x error
    var = sigmay**2 + logErrEpeak_i**2 + (m*logErrEiso)**2
    return tt.exp(0.5*tt.sum(tt.log(1/(2*np.pi*var))
                             - (logEpeak_i - m*logEiso - q)**2/var))
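As I read it, this code evaluates the product of per-point Gaussian factors (writing $s_i^2$ for the total variance built from sigmay and the two error columns):

$$
\mathcal{L}(m,q,\sigma_y)=\prod_i \frac{1}{\sqrt{2\pi s_i^2}}\,
\exp\!\left[-\frac{(\log E_{\mathrm{peak},i}-m\,\log E_{\mathrm{iso},i}-q)^2}{2 s_i^2}\right],
\qquad
s_i^2=\sigma_y^2+\sigma_{\log E_{\mathrm{peak}},i}^2+m^2\,\sigma_{\log E_{\mathrm{iso}},i}^2
$$

where $\sigma_{\log E_{\mathrm{peak}},i}$ and $\sigma_{\log E_{\mathrm{iso}},i}$ correspond to logErrEpeak_i and logErrEiso.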
And the observed node is defined in the following lines:
obs_peaks = pm.DensityDist('obs_peak', likelihoodMCMC,
                           observed={'logEiso': logEiso, 'logEpeak_i': logEpeak_i,
                                     'logErrEiso': logErrEiso, 'logErrEpeak_i': logErrEpeak_i})
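For completeness, my understanding is that the quoted pieces fit together roughly as follows (a minimal sketch; the data arrays logEiso, logEpeak_i, logErrEiso, logErrEpeak_i are assumed to be numpy arrays defined beforehand):

import numpy as np
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    # priors on the three parameters
    m = pm.Uniform('m', 0., 1.)
    q = pm.Uniform('q', 0., 5.)
    sigmay = pm.Uniform('sigmay', 0., 1.)

    # likelihood of the observed data given m, q, sigmay
    def likelihoodMCMC(logEiso, logEpeak_i, logErrEiso, logErrEpeak_i):
        var = sigmay**2 + logErrEpeak_i**2 + (m*logErrEiso)**2
        return tt.exp(0.5*tt.sum(tt.log(1/(2*np.pi*var))
                                 - (logEpeak_i - m*logEiso - q)**2/var))

    obs_peaks = pm.DensityDist('obs_peak', likelihoodMCMC,
                               observed={'logEiso': logEiso,
                                         'logEpeak_i': logEpeak_i,
                                         'logErrEiso': logErrEiso,
                                         'logErrEpeak_i': logErrEpeak_i})

    trace = pm.sample()  # draws posterior samples of m, q, sigmay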
I understand the logic of this code for drawing posterior samples of the three parameters. However, in my question the goal is not to sample from the posterior distributions of the three parameters; I already know their values. What I want instead is to draw samples from likelihoodMCMC itself (not from the posterior of the three parameters). By the definition of a likelihood function, once the three parameters are fixed, likelihoodMCMC is a probability distribution over the four variables ('logEiso', 'logEpeak_i', 'logErrEiso', and 'logErrEpeak_i'). I want the final result to be a series of vectors, each with four components representing values of 'logEiso', 'logEpeak_i', 'logErrEiso', and 'logErrEpeak_i', respectively.
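To make the target concrete, here is a rough sketch of the kind of output I am after, written outside of pymc3 with placeholder parameter values; it only draws the logEpeak_i component from the Gaussian factor and reuses observed values for the other components, whereas what I am asking for is a way to have pymc3 produce all four components for me:

import numpy as np

# parameter values are already known in my case (placeholders here)
m, q, sigmay = 0.5, 2.0, 0.2

rng = np.random.default_rng(0)
n_draws = 1000  # hypothetical number of samples

samples = []
for _ in range(n_draws):
    # reuse an observed row as input, for illustration only
    i = rng.integers(len(logEiso))
    x, err_x, err_y = logEiso[i], logErrEiso[i], logErrEpeak_i[i]
    var = sigmay**2 + err_y**2 + (m*err_x)**2
    y = rng.normal(m*x + q, np.sqrt(var))    # draw logEpeak_i
    samples.append((x, y, err_x, err_y))     # (logEiso, logEpeak_i, logErrEiso, logErrEpeak_i)
samples = np.asarray(samples)                # shape (n_draws, 4)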
Could you please let me know whether pymc3 can do this, and if the answer is yes, how?