Right, this is making sense. Ultimately it may be more effective to fix this on the likelihood side (adjust the backend library to return -inf, or raise a specific exception, if any of the possible extreme-parameter cases are passed to it).
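To sketch what I mean by fixing it on the likelihood side, here's a hypothetical guard around a DDM backend call (the function name, parameters, and placeholder density are all made up for illustration; the real backend API would differ):

```python
import numpy as np

def ddm_loglike(rt, drift, boundary, ndt):
    """Hypothetical wrapper around a DDM likelihood backend.

    Guards against extreme parameter values up front, returning -inf
    instead of letting the backend produce NaNs or crash.
    """
    rt = np.asarray(rt, dtype=float)
    # Reject parameter regions the backend cannot handle:
    # non-positive boundary, negative non-decision time, or RTs
    # faster than the non-decision time itself.
    if boundary <= 0 or ndt < 0 or np.any(rt <= ndt):
        return -np.inf
    # ... call the real backend here; placeholder expression only:
    return float(-np.sum((rt - ndt) * boundary - drift))
```

The point is just that the bad-parameter check happens once, cheaply, before any expensive density evaluation.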
The switch statement is cool. I’m hoping to be able to provide more generalized advice on working with this class of “generalized” DDM models. GDDMs can take arbitrary (even user-defined) extensions to their parameterizations, which would mean a different switch statement for each different model.
What I meant by the “interrupting” is something like this:

Let’s say the “model logp” is the sum of the logp of its “factors,” where each “factor” is a RV or a potential. As best I can tell, pymc calculates the model logp as sum([sum(factor) for factor in logp_factors]), where logp_factors is ordered in a particular way.
So if any term of the sum is -inf, the total sum will also be -inf.* Why, then, keep calculating the remaining terms in the list comprehension above once an earlier term has already evaluated to -inf?
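A plain-Python sketch of the idea, contrasting the eager sum (mirroring the list comprehension above) with a short-circuiting version; this is only a model of the behavior, not how pymc/pytensor actually structure the computation:

```python
import numpy as np

def model_logp_eager(logp_factors):
    # Mirrors sum([sum(factor) for factor in logp_factors]):
    # every factor is fully evaluated and summed.
    return sum(np.sum(f) for f in logp_factors)

def model_logp_short_circuit(logp_factors):
    # Stop as soon as one factor contributes -inf: the total
    # is then -inf regardless of the remaining terms (ignoring,
    # for this sketch, the -inf + inf edge case).
    total = 0.0
    for factor in logp_factors:
        term = np.sum(factor)
        if np.isneginf(term):
            return -np.inf
        total += term
    return total
```

Both versions return the same value when no +inf terms are present; the short-circuit version just skips evaluating the later factors.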
I think this makes sense in terms of being statistically well-founded (the step methods would still receive -inf for the logp under what I’m proposing)… but maybe I’m wrong. I’m also not sure whether there are other considerations: whether this can be implemented in a pytensor Op, or whether the Sum accumulator necessarily computes the sum serially and in the order the terms are passed, etc…
* Well, one edge case: if one term is -inf and another is +inf, it’s not exactly obvious what we should return. In numpy this gives a nan (and possibly a warning). I’m not sure whether we would ever hit this case in practice, or what would happen if we did…
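For concreteness, the numpy behavior in that edge case (the warning fires for numpy scalar/array arithmetic; plain Python floats give nan silently):

```python
import numpy as np

# -inf + inf is an "invalid" floating-point operation; numpy
# returns nan and emits a RuntimeWarning, suppressed here.
with np.errstate(invalid="ignore"):
    total = np.float64("-inf") + np.float64("inf")

print(np.isnan(total))  # True
```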