AssertionError on pytensor

Hi everyone,

I had a model that was previously running, but I didn’t save the environment. When I try to run it again, I get a pytensor assertion error. The model is complex, so it’s hard to pare it down to see where it’s breaking. Based on the error below, do you have any ideas? I tried it on macOS and Windows and got the same errors. The envs were created with mamba create --override-channels -c conda-forge -n bayes3 pymc jupyterlab seaborn pyarrow numpyro, so all the packages come from conda-forge.
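
For reference, a quick way to check what each fresh env actually resolved, since the recreated environments may have picked up a newer pymc/pytensor than the version the model last ran with:

    import pymc
    import pytensor

    # The recreated envs resolve whatever conda-forge currently ships,
    # which may be newer than what the model last ran with.
    print("pymc:", pymc.__version__)
    print("pytensor:", pytensor.__version__)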

Sampling: [N_approve, R, concentration_polls, concentration_results, election_party_baseline, election_party_baseline_sd, election_party_baseline_sd_baseline, election_party_baseline_sd_party_effect, election_party_sd, election_party_time_coefs, house_effects, house_election_effects_raw, house_election_effects_sd, lsd_baseline, lsd_party_effect_election_party_amplitude, lsd_party_effect_party_amplitude, party_baseline, party_baseline_sd, party_time_coefs_raw, poll_bias]
ERROR (pytensor.graph.rewriting.basic): Rewrite failure due to: local_IncSubtensor_serialize
ERROR (pytensor.graph.rewriting.basic): node: Add(AdvancedIncSubtensor.0, IncSubtensor{i}.0)
ERROR (pytensor.graph.rewriting.basic): TRACEBACK:
ERROR (pytensor.graph.rewriting.basic): Traceback (most recent call last):
  File "/Users/bernardo.caldas/miniforge3/envs/bayes3/lib/python3.12/site-packages/pytensor/graph/rewriting/basic.py", line 1909, in process_node
    replacements = node_rewriter.transform(fgraph, node)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bernardo.caldas/miniforge3/envs/bayes3/lib/python3.12/site-packages/pytensor/graph/rewriting/basic.py", line 1081, in transform
    return self.fn(fgraph, node)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bernardo.caldas/miniforge3/envs/bayes3/lib/python3.12/site-packages/pytensor/tensor/rewriting/subtensor.py", line 1234, in local_IncSubtensor_serialize
    assert o_type.is_super(tip.type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[3], line 1
----> 1 prior_2,trace_2,post_2 = b.sample_all(
      2     var_names=[
      3         "latent_popularity",
      4         "latent_pop_t0",
      5         "R",
      6         "noisy_popularity",
      7         "N_approve",
      8     ],
      9 )

File ~/projects/models/notebooks/../src/election_model.py:855, in ElectionsModel.sample_all(self, model, var_names, **sampler_kwargs)
    853 with model:
    854     prior_checks = pm.sample_prior_predictive()
--> 855     trace = pm.sample(return_inferencedata=True, **sampler_kwargs)
    856     post_checks = pm.sample_posterior_predictive(
    857         trace, var_names=var_names
    858     )
    860 return prior_checks, trace, post_checks

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/sampling/mcmc.py:716, in sample(draws, tune, chains, cores, random_seed, progressbar, progressbar_theme, step, var_names, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, blas_cores, model, **kwargs)
    713         auto_nuts_init = False
    715 initial_points = None
--> 716 step = assign_step_methods(model, step, methods=pm.STEP_METHODS, step_kwargs=kwargs)
    718 if nuts_sampler != "pymc":
    719     if not isinstance(step, NUTS):

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/sampling/mcmc.py:237, in assign_step_methods(model, step, methods, step_kwargs)
    229         selected = max(
    230             methods_list,
    231             key=lambda method, var=rv_var, has_gradient=has_gradient: method._competence(  # type: ignore
    232                 var, has_gradient
    233             ),
    234         )
    235         selected_steps.setdefault(selected, []).append(var)
--> 237 return instantiate_steppers(model, steps, selected_steps, step_kwargs)

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/sampling/mcmc.py:138, in instantiate_steppers(model, steps, selected_steps, step_kwargs)
    136         args = step_kwargs.get(name, {})
    137         used_keys.add(name)
--> 138         step = step_class(vars=vars, model=model, **args)
    139         steps.append(step)
    141 unused_args = set(step_kwargs).difference(used_keys)

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/step_methods/hmc/nuts.py:180, in NUTS.__init__(self, vars, max_treedepth, early_max_treedepth, **kwargs)
    122 def __init__(self, vars=None, max_treedepth=10, early_max_treedepth=8, **kwargs):
    123     r"""Set up the No-U-Turn sampler.
    124 
    125     Parameters
   (...)
    178     `pm.sample` to the desired number of tuning steps.
    179     """
--> 180     super().__init__(vars, **kwargs)
    182     self.max_treedepth = max_treedepth
    183     self.early_max_treedepth = early_max_treedepth

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/step_methods/hmc/base_hmc.py:109, in BaseHMC.__init__(self, vars, scaling, step_scale, is_cov, model, blocked, potential, dtype, Emax, target_accept, gamma, k, t0, adapt_step_size, step_rand, **pytensor_kwargs)
    107 else:
    108     vars = get_value_vars_from_user_vars(vars, self._model)
--> 109 super().__init__(vars, blocked=blocked, model=self._model, dtype=dtype, **pytensor_kwargs)
    111 self.adapt_step_size = adapt_step_size
    112 self.Emax = Emax

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/step_methods/arraystep.py:163, in GradientSharedStep.__init__(self, vars, model, blocked, dtype, logp_dlogp_func, **pytensor_kwargs)
    160 model = modelcontext(model)
    162 if logp_dlogp_func is None:
--> 163     func = model.logp_dlogp_function(vars, dtype=dtype, **pytensor_kwargs)
    164 else:
    165     func = logp_dlogp_func

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/model/core.py:626, in Model.logp_dlogp_function(self, grad_vars, tempered, **kwargs)
    620 ip = self.initial_point(0)
    621 extra_vars_and_values = {
    622     var: ip[var.name]
    623     for var in self.value_vars
    624     if var in input_vars and var not in grad_vars
    625 }
--> 626 return ValueGradFunction(costs, grad_vars, extra_vars_and_values, **kwargs)

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/model/core.py:333, in ValueGradFunction.__init__(self, costs, grad_vars, extra_vars_and_values, dtype, casting, compute_grads, **kwargs)
    329     outputs = [cost]
    331 inputs = grad_vars
--> 333 self._pytensor_function = compile_pymc(inputs, outputs, givens=givens, **kwargs)

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pymc/pytensorf.py:1039, in compile_pymc(inputs, outputs, random_seed, mode, **kwargs)
   1037 opt_qry = mode.provided_optimizer.including("random_make_inplace", check_parameter_opt)
   1038 mode = Mode(linker=mode.linker, optimizer=opt_qry)
-> 1039 pytensor_function = pytensor.function(
   1040     inputs,
   1041     outputs,
   1042     updates={**rng_updates, **kwargs.pop("updates", {})},
   1043     mode=mode,
   1044     **kwargs,
   1045 )
   1046 return pytensor_function

File ~/miniforge3/envs/bayes3/lib/python3.12/site-packages/pytensor/compile/function/__init__.py:318, in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input)
    312     fn = orig_function(
    313         inputs, outputs, mode=mode, accept_inplace=accept_inplace, name=name
    314     )
    315 else:
    316     # note: pfunc will also call orig_function -- orig_function is
    317     #      a choke point that all compilation must pass through
--> 318     fn = pfunc(
    319         params=inputs,
    320         outputs=outputs,
    321         mode=mode,
    322         updates=updates,
    323         givens=givens,
    324         no_default_updates=no_default_updates,
    325         accept_inplace=accept_inplace,
    326         name=name,
    327         rebuild_strict=rebuild_strict,
    328         allow_input_downcast=allow_input_downcast,
    329         on_unused_input=on_unused_input,
    330         profile=profile,
    331         output_keys=output_keys,
    332     )
    333 return fn

Do you have some reproducible code we can run?

Hey Ricardo, I’m trying to strip the model down as much as possible while still keeping the error


Hi @ricardoV94, unfortunately I haven’t been able to strip the model down while keeping the error. I’ve found that it runs under fast_compile but fails under fast_run. It seems to have something to do with the HSGP I added to the model.
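
In case it helps anyone reproduce this, here’s roughly how I’ve been toggling between the two modes while testing. Note that PYTENSOR_FLAGS is only read when pytensor is first imported, so it has to be set before importing pymc:

    import os

    # FAST_COMPILE skips most graph rewrites and the model samples fine;
    # the default FAST_RUN runs the full rewrite pass and hits the
    # AssertionError above.
    os.environ["PYTENSOR_FLAGS"] = "mode=FAST_COMPILE"

    import pymc as pm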

Since it runs under fast_compile, this isn’t a huge priority and I’m inclined to let it go, but there seems to be a combination of data + model that breaks it. The data is public, so if you’re interested at all I can send you the repo; otherwise I’ll close this. Let me know what you think!


Feel free to share the repo/code

estimadorpt/models (github.com)

Running it from src/main.py

Right now it runs with fast_compile, but not with fast_run.

Btw, I just found out that if I comment this out, I get the error back:

    os.environ["PYTENSOR_FLAGS"] = "optimizer_excluding=local_IncSubtensor_serialize"
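
For completeness, this is roughly how the workaround sits in the script (the exact placement in src/main.py is paraphrased). Since the flags are only read when pytensor is first imported, the line has to run before anything imports pymc or pytensor:

    import os

    # Exclude the failing rewrite. This flag is read once, when pytensor
    # is first imported, so set it before importing pymc.
    os.environ["PYTENSOR_FLAGS"] = "optimizer_excluding=local_IncSubtensor_serialize"

    import pymc as pm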