If I understand correctly the log-probability of your modified exponential distribution is
\log p(r|L) = 2\log(r) - \frac{r}{L} - 3\log(L) - \log(2).
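For reference, this is the log of the normalised density

p(r|L) = \frac{r^2}{2L^3}\, e^{-r/L},

which works out because \int_0^\infty r^2 e^{-r/L}\,dr = 2L^3 (this assumes the r^2-weighted exponential really is the distribution you have in mind).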
So your code could be:

```python
import pymc3 as pm

def custom_lpdf(L):
    def lpdf(r):
        return (2*pm.math.log(r)) - (r/L) - (3*pm.math.log(L)) - pm.math.log(2)
    return lpdf

with pm.Model() as model:
    r = pm.DensityDist('r', custom_lpdf(2), testval=1)
    trace = pm.sample()
```
The drawn samples can then be found in `trace['r']`.
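As a quick sanity check (my own addition, not something from your question): if the density above is right, the sample mean should be close to its analytical mean 3L, i.e. about 6 for L = 2. You can check that like this:

```python
import numpy as np

samples = trace['r']
print(samples.mean())                    # should be roughly 3*L = 6 for L = 2
print(np.percentile(samples, [5, 95]))   # rough spread of the draws
```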
But in my test this produced a lot of divergent samples, which may bias your results. You can fix that by telling the Hamiltonian Monte Carlo sampler to draw its values in a transformed (log) space, where the gradients are better behaved:
```python
import pymc3 as pm

def custom_lpdf(L):
    def lpdf(r):
        return (2*pm.math.log(r)) - (r/L) - (3*pm.math.log(L)) - pm.math.log(2)
    return lpdf

with pm.Model() as model:
    r = pm.DensityDist('r', custom_lpdf(2), testval=1,
                       transform=pm.distributions.transforms.Log())
    trace = pm.sample()
```
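You can verify that the transform actually removed the divergences by looking at NUTS's `diverging` sampler statistic:

```python
# number of divergent transitions; should be 0 (or close to it) after the log transform
print(trace.get_sampler_stats('diverging').sum())
```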
If you want to read about divergent samples, this may be a good starting point: https://docs.pymc.io/notebooks/Diagnosing_biased_Inference_with_Divergences.html
Edit: @junpenglao beat me to it^^
Edit2: Changed `-2` to `-pm.math.log(2)`, as @ojhall94 correctly pointed out. Note that this additive constant does not affect the sampling anyway and can be omitted.