Unexpected initial evaluation results

Hi,

I would like to know why I keep getting very strange initial evaluation results even though I set initval=1 for all of my prior distributions.

import numpy as np
import pymc as pm

# t, mean_t, std_t, x, fn, and rng are defined earlier (not shown)
with pm.Model() as model:
    a1 = pm.HalfNormal("a1", sigma=10, initval=1)
    a2 = pm.HalfNormal("a2", sigma=10, initval=1)
    a3 = pm.HalfNormal("a3", sigma=10, initval=1)
    a4 = pm.HalfNormal("a4", sigma=10, initval=1)
    a5 = pm.HalfNormal("a5", sigma=10, initval=1)
    a6 = pm.HalfNormal("a6", sigma=10, initval=1)
    sigma = pm.Exponential("sigma", 1)

    y_obs = np.exp((t - mean_t) / std_t)
    mu = fn(a1, a2, a3, a4, a5, a6, x)  # this function always produces positive values

    pm.Gamma("obs", mu=mu, sigma=sigma, observed=y_obs)

    idata = pm.sample_prior_predictive(samples=len(x), random_seed=rng)
    idata.extend(pm.sample(4000, tune=4000, random_seed=rng, chains=2, target_accept=0.999))

The error message is:

Auto-assigning NUTS sampler...
INFO:pymc:Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
INFO:pymc:Initializing NUTS using jitter+adapt_diag...

---------------------------------------------------------------------------
SamplingError                             Traceback (most recent call last)
Input In [224], in <cell line: 2>()
     61     pm.Gamma("obs", mu=mu, sigma=sigma, observed=y_obs)
     62 idata = pm.sample_prior_predictive(samples=len(p), random_seed=rng)
---> 63 idata.extend(pm.sample(4000, tune=4000, random_seed=rng, chains=2, target_accept=0.999))

File /opt/anaconda3/envs/pymc/lib/python3.10/site-packages/pymc/sampling.py:558, in sample(draws, step, init, n_init, initvals, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, **kwargs)
    556 # One final check that shapes and logps at the starting points are okay.
    557 for ip in initial_points:
--> 558     model.check_start_vals(ip)
    559     _check_start_shape(model, ip)
    561 sample_args = {
    562     "draws": draws,
    563     "step": step,
   (...)
    573     "discard_tuned_samples": discard_tuned_samples,
    574 }

File /opt/anaconda3/envs/pymc/lib/python3.10/site-packages/pymc/model.py:1725, in Model.check_start_vals(self, start)
   1722 initial_eval = self.point_logps(point=elem)
   1724 if not all(np.isfinite(v) for v in initial_eval.values()):
-> 1725     raise SamplingError(
   1726         "Initial evaluation of model at starting point failed!\n"
   1727         f"Starting values:\n{elem}\n\n"
   1728         f"Initial evaluation results:\n{initial_eval}"
   1729     )

SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'a1_log__': array(-3.79408301), 'a2_log__': array(-0.88779326), 'a3_log__': array(-0.49539466), 'a4_log__': array(0.7710357), 'a5_log__': array(-0.77415039), 'a6_log__': array(-0.3820694), 'sigma_log__': array(-0.28868408)}

Initial evaluation results:
{'a1': -6.32, 'a2': -3.42, 'a3': -3.03, 'a4': -1.78, 'a5': -3.3, 'a6': -2.91, 'sigma': -1.04, 'obs': nan}

What should I modify first?

You can try fixing the mu or sigma of your likelihood to a constant to see which one is causing the nan.
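For instance, here is a minimal self-contained sketch of that check (the toy fn and the synthetic x and y_obs below are stand-ins for your actual function and data, not part of your model): fix sigma to a constant and inspect the point log-probabilities before sampling.

import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
x = np.linspace(0.1, 1.0, 50)
y_obs = np.exp(rng.normal(size=50))  # toy positive "observations"

def fn(a1, a2, x):
    # toy stand-in for the real fn; positive for positive inputs
    return a1 + a2 * x

with pm.Model() as debug_model:
    a1 = pm.HalfNormal("a1", sigma=10, initval=1)
    a2 = pm.HalfNormal("a2", sigma=10, initval=1)
    mu = fn(a1, a2, x)

    # sigma fixed to a constant instead of pm.Exponential("sigma", 1)
    pm.Gamma("obs", mu=mu, sigma=1.0, observed=y_obs)

    # The same check that pm.sample() runs internally (see check_start_vals
    # in the traceback); a nan or -inf here points at the offending term.
    print(debug_model.point_logps())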

You can also disable jittering in the NUTS initialization (via pm.sample) to see if that is what’s causing the problem.


Specifically, you can use the init argument to pm.sample() to select one of several different initialization strategies (list is here).
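For example, a sketch reusing the model and sampler settings from the original post, with only init changed:

with model:
    # "adapt_diag" initializes NUTS without jitter; "advi" initializes from a
    # variational fit instead. Other options include "jitter+adapt_diag" (the
    # default, as the log above shows), "advi+adapt_diag", "map", and "adapt_full".
    idata = pm.sample(
        4000,
        tune=4000,
        init="adapt_diag",   # or init="advi"
        random_seed=rng,
        chains=2,
        target_accept=0.999,
    )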


Thank you for this solution. So far, the advi init option has solved the issue. Now I am trying to figure out why the jitter+adapt_diag option caused the problem. Thank you so much!

I tried fixing the likelihood’s mu or sigma to a constant, but it did not help. However, as you suggested, sampling without jittering (using the advi init) solved the issue. Thank you for your time!

Jittering can cause problems if your model is very sensitive to small changes in the initial point. I often find this to be true when the data or model parameters take on values small enough or big enough that numerical representation issues lurk “nearby” a reasonable starting point.


This makes a lot of sense. Thank you so much for sharing this valuable experience!