Invalid value encountered when working with Elemwise

Hello PyMC team!

I’m currently trying to improve my inference problem by adding some new parameters. For now I’m working with synthetic data, and in several runs, when this data is generated randomly, an error occurs. The code follows below:

def model_prop(nw, ng, krw0, krg0, fmmob, sf, sfbet):
    krg, krw = perm_prop( nw, ng, krw0, krg0)

    sw_fg = Swc + (1.0 - Swc - Sgr) * np.power((muw * (1.0 - fg_obs)) / (krw0 * mu_app_obs ) , 1.0 / nw)

    se_fg = (sw_fg - Swc)/ (1. - Swc - Sgr)
    # se_fg = np.array([max(0.0, min(s, 1.0)) for s in se_fg])  # clamp each value to [0, 1]
    
    krw_fg = krw0*np.power(se_fg,nw)
    krg_fg = krg0*np.power(1-se_fg,ng)
    mob_w_fg = krw_fg / muw
    mob_g_fg = krg_fg / mug

    F2 = 0.5 + (1.0/np.pi) * np.arctan(sfbet * (sw_fg - sf))
    mrf = 1 + fmmob * F2

    mu_app = 1/(mob_w_fg + (mob_g_fg/mrf))

    return krg, krw, mu_app

with pm.Model() as model_pm:
    nw = pm.Uniform('nw', lower=nw_params['a'], upper=nw_params['b'])
    ng = pm.Uniform('ng', lower=ng_params['a'], upper=ng_params['b'])
    krw0 = pm.Uniform('krw0', lower=krw0_params['a'], upper=krw0_params['b'])
    krg0 = pm.Uniform('krg0', lower=krg0_params['a'], upper=krg0_params['b'])

    fmmob = pm.Uniform('fmmob', lower=fmmob_params['a'], upper=fmmob_params['b'])
    sf = pm.Uniform('sf', lower=SF_params['a'], upper=SF_params['b'])
    sfbet = pm.Uniform('sfbet', lower=sfbet_params['a'], upper=sfbet_params['b'])

    mu = pm.math.concatenate( model_prop( nw, ng, krw0, krg0, fmmob, sf, sfbet) )
    sigma = pm.HalfNormal('sigma', sigma=optim.chisqr**(1/2))
    pred = pm.Normal('pred', mu=mu, sd=sigma, observed=full_obs)

    step = pm.Metropolis()
    trace = pm.sample( 10000,
        tune=2000,
        step=step, 
        )

and the error:

/tmp/ipykernel_17887/150628108.py:19: RuntimeWarning: invalid value encountered in power
  krg_fg = krg0*np.power(1-se_fg,ng)

---------------------------------------------------------------------------
SamplingError                             Traceback (most recent call last)
/tmp/ipykernel_17887/4094982771.py in <module>
     52 
     53     step = pm.Metropolis()
---> 54     trace = pm.sample( 10000,
     55         tune=2000,
     56         step=step,

~/.local/lib/python3.8/site-packages/pymc3/sampling.py in sample(draws, step, init, n_init, start, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, pickle_backend, **kwargs)
    426     start = deepcopy(start)
    427     if start is None:
--> 428         check_start_vals(model.test_point, model)
    429     else:
    430         if isinstance(start, dict):

~/.local/lib/python3.8/site-packages/pymc3/util.py in check_start_vals(start, model)
    235 
    236         if not np.all(np.isfinite(initial_eval)):
--> 237             raise SamplingError(
    238                 "Initial evaluation of model at starting point failed!\n"
    239                 "Starting values:\n{}\n\n"

SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'nw_interval__': array(0.), 'ng_interval__': array(4.4408921e-16), 'krw0_interval__': array(0.), 'krg0_interval__': array(0.), 'fmmob_interval__': array(0.), 'sf_interval__': array(4.4408921e-16), 'sfbet_interval__': array(4.4408921e-16), 'sigma_log__': array(-3.53287655)}

Initial evaluation results:
nw_interval__      -1.39
ng_interval__      -1.39
krw0_interval__    -1.39
krg0_interval__    -1.39
fmmob_interval__   -1.39
sf_interval__      -1.39
sfbet_interval__   -1.39
sigma_log__        -0.77
pred                 NaN
Name: Log-probability of test_point, dtype: float64

It seems that the problem occurs in the recovery of the se_fg variable, which I was able to deal with elsewhere using the code that appears commented out in the excerpt above. But with the structures used by PyMC I couldn’t do the same.

How would it be possible to solve this problem?

My guess is that se_fg is negative and -1 < nw < 1. Is that possible? You can use tt.clip() to clip tensors (which is what you were doing on the commented-out line), but I would guess that doing so will just mask problems with your prior, your likelihood, or the combination of the two.
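To illustrate why this produces a NaN: NumPy (whose semantics Theano mirrors here) has no real-valued result for a negative base raised to a non-integer exponent, which is exactly what the `invalid value encountered in power` warning on the `krg_fg` line is reporting. A minimal sketch with made-up values:

```python
import numpy as np

# A negative base with a non-integer exponent has no real-valued power,
# so NumPy warns "invalid value encountered in power" and returns NaN.
with np.errstate(invalid="ignore"):
    bad = np.power(-0.5, 1.5)   # NaN: no real root exists
    ok = np.power(0.5, 1.5)     # fine: positive base

print(np.isnan(bad))  # True
print(ok)             # ~0.3536
```

So if `1 - se_fg` goes negative anywhere while `ng` is non-integer, the whole likelihood evaluates to NaN and the sampler fails its initial check.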


Thanks for your answer! se_fg is indeed taking negative values, which is a problem because 0 <= se_fg <= 1, so I applied the following strategy to replace the unwanted values:

    se_fg = tt.switch(tt.lt(se_fg, 0.0), 0.0, se_fg)  # replace negatives with 0
    se_fg = tt.switch(tt.gt(se_fg, 1.0), 1.0, se_fg)  # replace values above 1 with 1

But using your hint, tt.clip(se_fg, 0, 1) makes the code much prettier :)
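For reference, tt.clip(se_fg, 0, 1) clamps element-wise, just like NumPy's np.clip: values below the lower bound become the lower bound, and values above the upper bound become the upper bound. A quick NumPy sketch of the same clamping (the sample values here are made up):

```python
import numpy as np

se_fg = np.array([-0.2, 0.0, 0.5, 1.0, 1.3])

# Element-wise clamp to [0, 1] -- the NumPy analogue of tt.clip(se_fg, 0, 1).
clipped = np.clip(se_fg, 0.0, 1.0)
print(clipped)  # -> 0.0, 0.0, 0.5, 1.0, 1.0
```

Because clip is a single vectorized op rather than two switch passes, it is also the cheaper graph to evaluate.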
Thanks for your help!


And likely faster. Bonus!
