Custom Weibull distribution

For DensityDist you just need a Theano implementation of the log-likelihood. A random sampler is helpful, but you can cheat a little and have NUTS draw samples for you if you just want some data:

import theano
import theano.tensor as tt
import pymc3 as pm

def custom_weibull(x, a, b, l):
    # log-density of a three-parameter Weibull: shape a, scale b, location l
    return tt.log(a) - tt.log(b) + (a - 1) * (tt.log(x - l) - tt.log(b)) - tt.pow((x - l) / b, a)
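
As a quick sanity check (my addition; assumes scipy is available), this logp should match scipy's three-parameter Weibull, weibull_min with shape a, location l, and scale b:

import numpy as np
from scipy import stats

x_test = np.array([2., 5., 10.])
ours = custom_weibull(x_test, 3., 12., 1.).eval()
ref = stats.weibull_min.logpdf(x_test, c=3., loc=1., scale=12.)
assert np.allclose(ours, ref)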

with pm.Model() as PriorModel:
    a_true = pm.Constant('at', 3.)
    b_true = pm.Constant('bt', 12.)
    l_true = pm.Constant('lt', 1.)
    x_look = pm.Deterministic('xl', l_true + pm.HalfNormal('xc', 10.))
    cweibull = pm.DensityDist('cWeibull', logp=custom_weibull,
                              observed={'a': a_true, 'b': b_true,
                                        'l': l_true, 'x': x_look})
    res = pm.sample(1000, nuts_kwargs={'target_accept': 0.9, 'max_treedepth': 25})
    
pm.traceplot(res);

xl here holds (approximately) custom-Weibull-distributed samples: NUTS draws xc from a density proportional to the HalfNormal(10) prior times the custom Weibull likelihood, so the samples are only approximately Weibull. If you want them to be exact, use a massive improper uniform density instead (sketched below), but HalfNormal(10) is pretty close.
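
For the exact version, something like pm.HalfFlat (an improper uniform density on the positive reals) should work. This is an untested sketch, and note that improper priors can make initialization and tuning fussier:

with pm.Model() as ExactPriorModel:
    # HalfFlat contributes a constant to the log-posterior, so xl's
    # density is exactly proportional to the custom Weibull likelihood
    x_look = pm.Deterministic('xl', 1. + pm.HalfFlat('xc'))
    cweibull = pm.DensityDist('cWeibull', logp=custom_weibull,
                              observed={'a': 3., 'b': 12., 'l': 1., 'x': x_look})
    res_exact = pm.sample(1000, nuts_kwargs={'target_accept': 0.9})

Either way, we can take some of the samples and use them as data for inference: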

obs_data = res['xl'][500:700]
with pm.Model() as inference_model:
    a_pr = pm.HalfNormal('a_pr', 2.)
    b_pr = pm.HalfNormal('b_pr', 2.)
    l_pr = pm.HalfNormal('l_pr', 2.)
    cweibull = pm.DensityDist('cWeibull', logp=custom_weibull,
                              observed={'a': a_pr, 'b': b_pr, 'l': l_pr,
                                        'x': theano.shared(obs_data)})
    res_inf = pm.sample(1000, nuts_kwargs={'target_accept': 0.9, 'max_treedepth': 25})

pm.traceplot(res_inf)
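
You can confirm the divergences directly from the sampler statistics (assuming a PyMC3 version that stores them on the trace):

print(res_inf['diverging'].sum(), 'divergent transitions')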

There are a lot of divergences under this parameterization, and I'm fairly sure they all come from

tt.log(x - l)

since l can be sampled to a value that exceeds some of the observed x, which makes the log-likelihood undefined. It might be a good idea to re-parameterize so that you sample a positive variable "x_minus_l" instead, and never have to worry about invalid log-likelihoods; one way of doing that is sketched below.
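
Here is a minimal sketch of that reparameterization (the names "gap" and "reparam_model" are mine, and the HalfNormal(2) scale on the gap is an arbitrary choice): sample the positive distance between l and the smallest observation, so that x - l > 0 holds for every data point.

with pm.Model() as reparam_model:
    a_pr = pm.HalfNormal('a_pr', 2.)
    b_pr = pm.HalfNormal('b_pr', 2.)
    # gap > 0 guarantees l_pr < min(obs_data), so tt.log(x - l) stays finite
    gap = pm.HalfNormal('gap', 2.)
    l_pr = pm.Deterministic('l_pr', obs_data.min() - gap)
    cweibull = pm.DensityDist('cWeibull', logp=custom_weibull,
                              observed={'a': a_pr, 'b': b_pr, 'l': l_pr,
                                        'x': theano.shared(obs_data)})
    res_reparam = pm.sample(1000, nuts_kwargs={'target_accept': 0.9})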

Good luck!