Thank you very much @chartl.
Your example code definitely works, but I have a question.
Most of the examples I could find use pm.DensityDist as a prior.
The reason I am puzzled about using pm.DensityDist as a likelihood is that the code appears to treat all the priors as part of the observed data, which is quite different from the usual way of building a model in PyMC3.
If a shifted Weibull distribution were implemented in PyMC3, the likelihood would look like:
likelihood = cWeibull('obs', alpha=a, beta=b, loc=l, observed=obs_data)
The code you kindly showed me looks like this:
likelihood = pm.DensityDist('cWeibull', logp=custom_weibull, observed={'a': a_pr, 'b': b_pr, 'l': l_pr, 'x': theano.shared(obs_data)})
The only variable that actually contains observed data is obs_data, but in the pm.DensityDist version the priors also appear inside the observed dictionary.
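If I understand it correctly, pm.DensityDist with a dict passed to observed simply hands the dict entries to the logp function as keyword arguments, so custom_weibull would be something along these lines (I am paraphrasing, so the exact expression may differ slightly):

def custom_weibull(a, b, l, x):
    # log density of a Weibull shifted by l, evaluated at the data x;
    # bound returns -inf wherever any of the conditions fail
    return bound(
        tt.log(a) - tt.log(b)
        + (a - 1) * tt.log((x - l) / b)
        - ((x - l) / b) ** a,
        x - l > 0, a > 0, b > 0, l >= 0,
    )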
Surprisingly, I get almost the same result either way.
For your information, before you showed me how to use pm.DensityDist, I had defined the custom Weibull mentioned above as follows:
import numpy as np
import theano.tensor as tt
import pymc3 as pm

from pymc3.theanof import floatX
from pymc3.distributions.dist_math import bound
class cWeibull(pm.Continuous):
    def __init__(self, alpha, beta, loc, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = alpha = tt.as_tensor_variable(floatX(alpha))
        self.beta = beta = tt.as_tensor_variable(floatX(beta))
        self.loc = loc = tt.as_tensor_variable(floatX(loc))

    def logp(self, value):
        alpha = self.alpha
        beta = self.beta
        loc = self.loc
        # return -inf whenever value <= loc, so the shifted log density is
        # never used where the support condition is violated
        return tt.switch(
            tt.le(value - loc, 0.0),
            -np.inf,
            bound(
                tt.log(alpha) - tt.log(beta)
                + (alpha - 1) * tt.log((value - loc) / beta)
                - ((value - loc) / beta) ** alpha,
                value >= 0, alpha > 0, beta > 0, loc >= 0,
            ),
        )
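With that class, the model reads in the usual PyMC3 style, roughly like this (the priors below are just placeholders, not my actual choices):

with pm.Model() as model:
    # placeholder priors for the shape, scale, and location parameters
    a_pr = pm.HalfNormal('a_pr', sigma=5.0)
    b_pr = pm.HalfNormal('b_pr', sigma=5.0)
    l_pr = pm.HalfNormal('l_pr', sigma=5.0)
    likelihood = cWeibull('obs', alpha=a_pr, beta=b_pr, loc=l_pr, observed=obs_data)
    trace = pm.sample()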
I'd like to understand why I get the same result both ways, and whether there is any problem with defining a custom likelihood like that (via pm.DensityDist).
Additionally, you mentioned a positive variable “x_minus_l”.
You have pointed out exactly the part I am struggling with.
Since I am working with wind-speed data, I defined the custom Weibull with tt.switch and bound in the code above to keep the log likelihood from diverging to infinity when x - l becomes negative.
Is there a smarter way to prevent this divergence?
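For example, would a pattern like the sketch below, where a dummy positive value is substituted before taking the log so that tt.log never sees a non-positive argument, be considered cleaner, or is there a better idiom for this?

def logp(self, value):
    alpha, beta, loc = self.alpha, self.beta, self.loc
    shifted = value - loc
    # substitute a dummy positive value where value <= loc; those entries
    # are sent to -inf by the outer switch anyway, but this keeps the log
    # from ever being evaluated at a non-positive argument
    safe = tt.switch(tt.gt(shifted, 0.0), shifted, 1.0)
    logp_ok = (tt.log(alpha) - tt.log(beta)
               + (alpha - 1) * tt.log(safe / beta)
               - (safe / beta) ** alpha)
    return bound(tt.switch(tt.gt(shifted, 0.0), logp_ok, -np.inf),
                 alpha > 0, beta > 0, loc >= 0)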
Thanks a lot.