There is a numerical problem with the ExGaussian `log_prob`. Specifically, `std_cdf` in the line below returns 0., which makes `logpow` return `-inf`:
```python
        -------
        TensorVariable
        """
        mu = self.mu
        sigma = self.sigma
        nu = self.nu

        # This condition suggested by exGAUS.R from gamlss
        lp = tt.switch(tt.gt(nu, 0.05 * sigma),
                       - tt.log(nu) + (mu - value) / nu
                       + 0.5 * (sigma / nu)**2
                       + logpow(std_cdf((value - mu) / sigma - sigma / nu), 1.),
                       - tt.log(sigma * tt.sqrt(2 * np.pi))
                       - 0.5 * ((value - mu) / sigma)**2)
        return bound(lp, sigma > 0., nu > 0.)

    def _repr_latex_(self, name=None, dist=None):
        if dist is None:
            dist = self
        sigma = dist.sigma
        mu = dist.mu
        nu = dist.nu
```
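To see why this fails, note that the standard normal CDF underflows to exactly 0.0 in float64 once its argument is very negative (roughly below -38), so taking its log afterwards gives `-inf`. A minimal sketch using SciPy's `ndtr` (standard normal CDF) and `log_ndtr` (its log-space counterpart) to show the failure mode:

```python
import numpy as np
from scipy.special import ndtr, log_ndtr  # ndtr(z) is the standard normal CDF

z = -40.0

# The CDF underflows to exactly 0.0 in double precision...
cdf = ndtr(z)
print(cdf)  # 0.0

# ...so taking its log afterwards gives -inf (the logpow failure mode above):
with np.errstate(divide="ignore"):
    print(np.log(cdf))  # -inf

# Computing the log-CDF directly in log-space stays finite:
print(log_ndtr(z))  # roughly -804.6
```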
I opened https://github.com/pymc-devs/pymc3/issues/4045 . For now, try using a larger `sigma` (e.g., a truncated Normal prior on `sigma` to keep it away from small values).
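The underlying fix would be to keep the CDF term in log-space instead of computing `logpow(std_cdf(...), 1.)`. A NumPy/SciPy sketch of the `nu > 0.05 * sigma` branch, with `log_ndtr` standing in for a Theano log-CDF (an assumption for illustration, not the pymc3 implementation):

```python
import numpy as np
from scipy.special import ndtr, log_ndtr


def exgauss_logp_naive(value, mu, sigma, nu):
    """nu > 0.05 * sigma branch as in the posted code: log of the CDF taken directly."""
    z = (value - mu) / sigma - sigma / nu
    return -np.log(nu) + (mu - value) / nu + 0.5 * (sigma / nu) ** 2 + np.log(ndtr(z))


def exgauss_logp_stable(value, mu, sigma, nu):
    """Same branch, but the CDF term stays in log-space via log_ndtr."""
    z = (value - mu) / sigma - sigma / nu
    return -np.log(nu) + (mu - value) / nu + 0.5 * (sigma / nu) ** 2 + log_ndtr(z)


# Where the CDF is representable, the two versions agree:
print(exgauss_logp_naive(0.5, 0.0, 1.0, 1.0))
print(exgauss_logp_stable(0.5, 0.0, 1.0, 1.0))

# With a small sigma the naive version returns -inf; the log-space one is finite:
with np.errstate(divide="ignore"):
    print(exgauss_logp_naive(-1.0, 0.0, 0.01, 1.0))  # -inf
print(exgauss_logp_stable(-1.0, 0.0, 0.01, 1.0))     # finite
```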