Good morning PyMC Community!

I’m writing to see if anyone has ideas on how best to port this Stan code for a spatial capture-recapture model to PyMC.

```
for (i in 1:M) {
  lp_if_present[i] = bernoulli_lpmf(1 | psi)
    + binomial_logit_lpmf(y[i, ] | n_occasion, logit_p[i, ]);
  if (sum(y[i, ]) > 0) {
    log_lik[i] = lp_if_present[i];
  } else {
    log_lik[i] = log_sum_exp(lp_if_present[i], bernoulli_lpmf(0 | psi));
  }
}
```

In this case, `M` is an integer, `y` is an array of binomial counts with shape `(M, n_trap)`, `psi` is the probability for the non-zero portion of the mixture, and `logit_p` is an array of logit-scale probabilities for the number of times each individual is captured in each trap across the `n_occasion` sampling occasions.
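To make the shapes concrete, here is one way to simulate data of this form (all sizes and parameter values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
M, n_trap, n_occasion = 100, 6, 5  # hypothetical sizes
psi = 0.6                          # inclusion probability

# Latent presence indicator for each of the M (augmented) individuals
z = rng.binomial(1, psi, size=M)

# Per-individual, per-trap detection probabilities (placeholder values)
p = rng.uniform(0.05, 0.3, size=(M, n_trap))

# Present individuals are captured Binomial(n_occasion, p) times per trap;
# absent individuals contribute all-zero rows
y = rng.binomial(n_occasion, p) * z[:, None]
```

Rows of `y` belonging to absent individuals are identically zero, which is exactly the structure the zero-inflation has to capture.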

At first glance this looks like a straightforward application of the handy `pm.ZeroInflatedBinomial`. I haven’t figured out how to implement that, however, because the zero-inflation happens along the rows, that is, at the individual level. In other words, if `y[0, 4] = 5`, we know that individual 0 is present, so `y[0, 5] = 0` is just a detection failure rather than a structural zero. But if `all(y[100, :] == 0)`, then individual 100 is either absent (with probability `1 - psi`) or present but never detected. Hopefully that makes sense.
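To illustrate the row-level logic with plain NumPy (toy values):

```python
import numpy as np

y = np.zeros((3, 6), dtype=int)
y[0, 4] = 5  # individual 0 was captured five times in trap 4

# One capture anywhere in a row settles presence for the whole row;
# only all-zero rows get the (1 - psi) "absent" mixture component
detected = y.sum(axis=1) > 0
print(detected)  # [ True False False]
```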

This was my attempt. It feels dumb, but it seemed to work for a simple simulated example. I’m not sure that I trust it in the real world:

```
import pymc as pm
from pymc.distributions.dist_math import binomln, logpow


def logp(value, n_occasion, p, psi):
    # Binomial log-pmf for every (individual, trap) cell
    binom = binomln(n_occasion, value) + logpow(p, value) + logpow(1 - p, n_occasion - value)
    # Joint likelihood of each individual's row of counts
    bin_sum = pm.math.sum(binom, axis=1)
    bin_exp = pm.math.exp(bin_sum)
    # Rows with a capture must come from the "present" component;
    # all-zero rows mix "present but undetected" with "absent"
    res = pm.math.switch(
        value.sum(axis=1) > 0,
        bin_exp * psi,
        bin_exp * psi + (1 - psi),
    )
    return pm.math.log(res)
```

Anyway, I would appreciate any help with this fool’s errand! Thanks.