Dear all,

I’m trying to wrap a distribution from scipy with DensityDist. (I’m aware of the performance drawbacks compared to implementing it as a proper new distribution / Aesara Op.)

Below is what I have tried:

```
from aesara.compile.ops import as_op
import aesara.tensor as at
import scipy
import pymc as pm
import numpy as np
# Data with: M, n, N, odds = 140, 80, 60, 0.8
data = np.array([35, 26, 35, 31, 27, 30, 27, 35, 29, 38, 27, 34, 33, 30, 29, 29, 31,
31, 34, 31, 35, 30, 34, 29, 33, 34, 34, 32, 33, 30, 29, 33, 31, 31,
35, 34, 33, 30, 34, 35, 31, 31, 32, 30, 33, 29, 35, 34, 30, 30, 31,
34, 31, 31, 32, 29, 31, 30, 26, 33, 31, 28, 34, 30, 28, 28, 29, 32,
35, 34, 36, 30, 35, 30, 26, 31, 30, 37, 30, 30, 35, 34, 28, 30, 30,
33, 34, 31, 35, 34, 32, 32, 36, 30, 35, 32, 34, 32, 31, 37])
with pm.Model() as model:
    odds = pm.Beta('odds', 1, 1)

    @as_op(itypes=[at.dvector], otypes=[at.dvector])
    def logp_(value):
        return scipy.stats.nchypergeom_wallenius(140, 80, 60, odds=odds.eval()).logpmf(value)

    @as_op(itypes=[at.dvector], otypes=[at.dvector])
    def logcdf_(value):
        return scipy.stats.nchypergeom_wallenius(140, 80, 60, odds=odds.eval()).logcdf(value)

    @as_op(itypes=[at.dvector], otypes=[at.ivector])
    def random_(value):
        return scipy.stats.nchypergeom_wallenius(140, 80, 60, odds=odds.eval()).rvs()

    y = pm.DensityDist("WNCH", logp=logp_, logcdf=logcdf_, random=random_, observed=data)
    trace = pm.sample()
```

It samples, but it does not fit odds, which is apparently just sampled from the prior. I’m really unsure about calling eval() when passing odds to scipy, but without it there are errors, which makes some sense since scipy cannot work with symbolic variables. I’m also unsure about the input types being vectors: I tried scalars, hoping the magic of broadcasting would take care of the multiple observations, but with scalars I get shape errors.
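For what it’s worth, the scipy side on its own behaves as expected when given concrete values, which is why I ended up reaching for .eval() in the first place. A minimal sanity check outside the model (the 0.8 is just the odds value from the data-generating comment above, not a fitted value):

```
import numpy as np
import scipy.stats

# Frozen distribution with concrete (non-symbolic) parameters:
# M=140, n=80, N=60, odds=0.8, matching the simulated data above.
dist = scipy.stats.nchypergeom_wallenius(140, 80, 60, odds=0.8)

# logpmf broadcasts over a plain NumPy array of observations,
# returning one log-probability per observation.
obs = np.array([35, 26, 35])
logp_vals = dist.logpmf(obs)
```

So the vectorised evaluation itself works fine with numpy inputs; the trouble only starts once odds is a symbolic PyMC variable.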

Any help or hints would be highly appreciated!

Many thanks

Jan