Initial evaluation of model at starting point failed!

Hello,
I have a variable phi = pm.Uniform("phi", 0.1, 1). However, when I sample from the model, the initial value for phi is negative, which leads to an undefined likelihood and the failure. The full code/error is below. I am not sure why I get a negative number when the support of my distribution is strictly positive.

import pymc as pm
import aesara.tensor as at

with pm.Model(coords={"predictors": X.columns.values}) as beta_reg:
    beta = pm.Normal("beta", 0, 1, dims="predictors")
    beta0 = pm.Normal("beta0", 1, 1)
    phi = pm.Uniform("phi", 0.1, 1)
    mu = pm.Deterministic("mu", pm.math.invlogit(beta0 + at.dot(X.values, beta)))
    a = phi * mu          # Beta shape parameters from the mean/precision parametrization
    b = (1.0 - mu) * phi
    ratio = pm.Beta("ratio", alpha=a, beta=b, observed=y.values)
    pm.sample()

---------------------------------------------------------------------------
SamplingError                             Traceback (most recent call last)
Input In [196], in <cell line: 1>()
      7 b =  (1.0 - mu)*phi
      8 ratio = pm.Beta("ratio", alpha = a,beta = b, observed=y.values)
----> 9 pm.sample()

File ~/miniconda3/lib/python3.9/site-packages/pymc/sampling.py:558, in sample(draws, step, init, n_init, initvals, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, **kwargs)
    556 # One final check that shapes and logps at the starting points are okay.
    557 for ip in initial_points:
--> 558     model.check_start_vals(ip)
    559     _check_start_shape(model, ip)
    561 sample_args = {
    562     "draws": draws,
    563     "step": step,
   (...)
    573     "discard_tuned_samples": discard_tuned_samples,
    574 }

File ~/miniconda3/lib/python3.9/site-packages/pymc/model.py:1794, in Model.check_start_vals(self, start)
   1791 initial_eval = self.point_logps(point=elem)
   1793 if not all(np.isfinite(v) for v in initial_eval.values()):
-> 1794     raise SamplingError(
   1795         "Initial evaluation of model at starting point failed!\n"
   1796         f"Starting values:\n{elem}\n\n"
   1797         f"Initial evaluation results:\n{initial_eval}"
   1798     )

SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'beta': array([ 0.16049949, -0.22746862, -0.69406021, -0.89434199, -0.96045462,
        0.81442739,  0.65041207]), 'beta0': array(0.10526258), 'phi_interval__': array(-0.30358915)}

Initial evaluation results:
{'beta': -8.12, 'beta0': -1.32, 'phi': -1.41, 'ratio': inf}

Welcome!

The initial value for the transformed version of phi, which PyMC uses internally because phi is bound to the interval [0.1, 1], is negative. But as you can see from the initial evaluation results, the initial value of phi is fine; it's the value of the observed ratio that is the problem. I suspect that either a or b is not strictly positive, or that your observed data include values outside the interval [0, 1] (which is where the Beta distribution has support).
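To make both points concrete, here is a quick sketch (phi_interval__ is the starting value quoted in your error message, the back-transform is the usual interval/scaled-logit transform PyMC applies to bounded variables, and y is the observed Series from your model):

import numpy as np

# The sampler works on phi_interval__, an unconstrained transform of phi,
# so a negative starting value is fine: it maps back inside (0.1, 1).
lower, upper = 0.1, 1.0
phi_interval = -0.30358915                           # starting value from the error message
phi_start = lower + (upper - lower) / (1 + np.exp(-phi_interval))
print(phi_start)                                     # ~0.48, well inside (0.1, 1)

# The Beta likelihood, on the other hand, has support only on (0, 1),
# so any observed value at or outside those bounds gives an infinite logp.
print(((y.values <= 0) | (y.values >= 1)).any())     # True means invalid observations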

The reason I thought it was the phi parameter is that when I set a = 1, it works fine, but when I set a = phi, I get the error. The value of mu is between 0 and 1, since it is the output of invlogit.

I think what causes it is that some of my y values are 0. I guess I need to change the model to allow zeros, something like a zero-inflated Beta model, although I'm not sure how to add the zero-inflated component.
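One thing that might work is a hurdle-style split: model the exact zeros with a Bernoulli and keep the Beta regression for the strictly positive rows. This is only a sketch under assumptions (psi, its flat prior, the variable names, and the row split are placeholders; X and y are the DataFrame/Series from the model above):

import pymc as pm
import aesara.tensor as at

is_zero = (y.values == 0).astype(int)   # indicator for exact zeros
nonzero = y.values > 0                  # mask for the strictly positive ratios

with pm.Model(coords={"predictors": X.columns.values}) as hurdle_beta_reg:
    # Part 1: probability that a ratio is exactly zero
    psi = pm.Beta("psi", 1, 1)
    pm.Bernoulli("zeros", p=psi, observed=is_zero)

    # Part 2: the original Beta regression, fit only to the non-zero rows
    beta = pm.Normal("beta", 0, 1, dims="predictors")
    beta0 = pm.Normal("beta0", 1, 1)
    phi = pm.Uniform("phi", 0.1, 1)
    mu = pm.Deterministic("mu", pm.math.invlogit(beta0 + at.dot(X.values[nonzero], beta)))
    pm.Beta("ratio", alpha=phi * mu, beta=(1.0 - mu) * phi, observed=y.values[nonzero])

    idata = pm.sample()

The two parts share no parameters here; if the zero probability should also depend on the predictors, psi could get its own logit-linear model.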


Sorry, I didn’t get it. The error says:

Initial evaluation results:
{'beta': -8.12, 'beta0': -1.32, 'phi': -1.41, 'ratio': inf}

where ‘phi’ equals -1.41, which is negative. So there is something wrong

My guess is that the quantity beta0 + X @ beta is very negative, which makes the output of the sigmoid zero, so a is zero. What is the scale of the X data? If it’s quite large, you will run into this problem. It’s always a good idea to scale your input data.
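For example, a minimal way to standardize the predictors (assuming X is a pandas DataFrame, as in the snippet above; the scaled values then go into at.dot):

import pandas as pd

def standardize(X: pd.DataFrame) -> pd.DataFrame:
    """Center each predictor at zero and scale it to unit standard deviation."""
    return (X - X.mean()) / X.std()

X_scaled = standardize(X)   # then use at.dot(X_scaled.values, beta) in the model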

I think I realized why I, and probably VadimSokolov, was confused at the beginning.

Here:

Initial evaluation results:

these are the logp values, not the values of the variables themselves. It would be better to modify the message in PyMC to make this clearer.
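For example (beta_reg is the model from the original post; point_logps is the same method that check_start_vals calls in the traceback above):

# Per-variable log-probabilities at the initial point -- the same numbers
# reported under "Initial evaluation results", not the starting values.
print(beta_reg.point_logps())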

I think the message has been tweaked already in recent versions of PyMC? Which one are you using?


That’s strange, because I installed one of the latest versions, v5.7.2.

@ricardoV94, my initial message was motivated by the original post, but I checked my version and indeed the message has been changed to a clearer one.

Thanks a lot!