pm.logp() does not evaluate well in some cases

When computing the logp of a Beta distribution, I’m getting inf instead of a finite value for a simple case (alpha=beta=100, x=0.5). This happens with pm.logp(pm.Beta.dist(a, b), 0.5).eval().

I also see inconsistent behavior in a hand-written PyTensor logp expression, depending on whether alpha/beta are passed as Python ints or as pt.as_tensor_variable(int).

import pymc as pm
import numpy as np
import scipy.stats as st
import pytensor.tensor as pt


a = 100
b = 100
val = 0.5

## logp taken from the source code to test
def logp(value, alpha, beta):
    res = (
        pt.switch(pt.eq(alpha, 1.0), 0.0, (alpha - 1.0) * pt.log(value))
        + pt.switch(pt.eq(beta, 1.0), 0.0, (beta - 1.0) * pt.log1p(-value))
        - (pt.gammaln(alpha) + pt.gammaln(beta) - pt.gammaln(alpha + beta))
    )
    res = pt.switch(pt.bitwise_and(pt.ge(value, 0.0), pt.le(value, 1.0)), res, -np.inf)
    return res.eval()

# case 1
print("evaluates to 2.4220734:", logp(value=val, alpha=a, beta=b))

# case 2
print(
    "evaluates to inf:",
    logp(value=val, alpha=pt.as_tensor_variable(a, name="alpha"), beta=pt.as_tensor_variable(b, name="beta")),
)



## pm.logp also evaluates to inf
print(
    "which evaluates this to inf aswell:",
    pm.logp(pm.Beta.dist(a, b), 0.5).eval(),
)


Terminal>>

evaluates to 2.4220734: 2.4220734
evaluates to inf: inf
which evaluates this to inf aswell: inf
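For reference, SciPy (already imported as `st` above) independently confirms that the finite value from case 1 is the correct one:

```python
import scipy.stats as st

# Independent check: SciPy's Beta logpdf at the same point
print(st.beta.logpdf(0.5, 100, 100))  # ~2.4220734, matching case 1
```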

Yes, this is a known issue with integers: pt.as_tensor(100) defaults to the smallest integer dtype that can hold the value (here int8), so alpha + beta = 200 overflows and gammaln of the wrapped-around negative value is inf. See: Default casting to int8 leads to easy overflow errors · Issue #1073 · pymc-devs/pytensor · GitHub
