Initializing values for Beta distribution to avoid -inf penalties

Hello,

I’m currently working on a PyMC model where I’m trying to initialize values for a Beta distribution to ensure that a custom penalty potential does not evaluate to -inf. Here is the code snippet:

import numpy as np
import pymc as pm
import pytensor.tensor as pt

n = 3
with pm.Model() as model:
    tau_latent = pm.Beta('tau_latent',
                         alpha=1,
                         beta=1,
                         shape=n,
                         initval=np.linspace(0 + 0.1, 1. - 0.1, n))
    pm.Potential(
        "penalty_last_one",
        pm.math.switch(
            pt.any(pt.lt(tau_latent[-1] - 1.0, 0.1)),
            -np.inf,
            0,
        ),
    )

I have set the initval for tau_latent so that I expect the penalty_last_one potential not to evaluate to -inf. However, I still get a SamplingError indicating that the initial evaluation of the model at the starting point failed, specifically reporting 'penalty_last_one': -inf.

The starting values provided in the error message are as follows: {'tau_latent_logodds__': array([-4.36858715, -1.53270918, -0.31507496, 0.73444259, 2.08084293])}.

When I try to convert these log-odds back to probabilities using np.exp, the results don't seem to match the initial values I specified. Could someone clarify the correct way to set the initval for the Beta distribution in this context so that the custom potential does not return -inf?
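
For reference, this is roughly what I tried (the array is the one reported in the error message):

import numpy as np

# starting values from the SamplingError (these live in the transformed space)
start = np.array([-4.36858715, -1.53270918, -0.31507496, 0.73444259, 2.08084293])
print(np.exp(start))  # some of these are greater than 1, so they clearly aren't the initvals I set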

Thank you for your assistance!

You probably want:

pm.Potential(
    "penalty_last_one",
    pm.math.switch(
        pt.any(pt.lt(tau_latent[-1] - 1.0, 0.1)),
        0,
        -np.inf,
    ),
)

In any case, the first value (the one right after the condition) is the one that is taken when the condition is True.
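
For example, with the initvals from the question, tau_latent[-1] starts at 0.9, so the condition 0.9 - 1.0 < 0.1 is True and the original ordering picks -inf. A quick sketch of which branch switch takes (plain scalars, just for illustration):

import numpy as np
import pymc as pm
import pytensor.tensor as pt

cond = pt.lt(0.9 - 1.0, 0.1)  # True at the initial point

# original ordering: condition True -> first branch -> -inf
print(pm.math.switch(cond, -np.inf, 0).eval())  # -inf

# swapped ordering: condition True -> first branch -> 0
print(pm.math.switch(cond, 0, -np.inf).eval())  # 0.0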

Tip: if you want to check whether certain expressions evaluate to what you expect, given the initial values, you can do:

pt.lt(tau_latent[-1] - 1.0, 0.1).eval({tau_latent: [0.1, 0.5, 0.9]})
pot.eval({tau_latent: [0.1, 0.5, 0.9]})

where pot would be your potential term.
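
If it helps, pm.Potential returns the expression it registers, so one way to get a handle on pot is simply to keep that return value when building the model. A minimal sketch reusing the snippet from the question (with the corrected branch order):

import numpy as np
import pymc as pm
import pytensor.tensor as pt

n = 3
with pm.Model() as model:
    tau_latent = pm.Beta('tau_latent', alpha=1, beta=1, shape=n,
                         initval=np.linspace(0.1, 0.9, n))
    # keep a reference to the potential so it can be evaluated later
    pot = pm.Potential(
        "penalty_last_one",
        pm.math.switch(pt.any(pt.lt(tau_latent[-1] - 1.0, 0.1)), 0, -np.inf),
    )

# the condition is True at these values, so the corrected switch returns 0 rather than -inf
pot.eval({tau_latent: [0.1, 0.5, 0.9]})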

Thanks, what a simple mistake :sweat_smile: