Version-dependent slowdown of Gaussian Mixture sampling on Ubuntu 20.04

One more thing: I tried replacing the covariance parameterization with a Cholesky factorization, as suggested on the MvNormal documentation page, like so:

# Per-cluster standard deviation priors for the Cholesky factors
sigmas = [pm.HalfNormal.dist(sigma=1, size=ndims) for _ in range(n_clusters)]
# Per-cluster Cholesky factors; with compute_corr=True, [0] picks out the Cholesky factor
chols = [pm.LKJCholeskyCov(f'chol_cov{i}', n=ndims, eta=2,
                           sd_dist=sigmas[i], compute_corr=True)[0]
         for i in range(n_clusters)]
# Define the mixture components
components = [pm.MvNormal.dist(mu=centroids[i], chol=chols[i]) for i in range(n_clusters)]
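
For context, this is roughly the full model those components sit in (a minimal, self-contained sketch: the toy data, the Normal prior on centroids, and the Dirichlet weights are placeholders filled in for illustration, not my exact model):

import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_clusters, ndims = 3, 2
data = rng.normal(size=(500, ndims))  # placeholder data, not the real dataset

with pm.Model() as model:
    # Cluster centers (assumed prior; the real model defines centroids differently)
    centroids = pm.Normal("centroids", mu=0.0, sigma=5.0, shape=(n_clusters, ndims))

    # Per-cluster Cholesky-parameterized covariances, as in the snippet above
    sigmas = [pm.HalfNormal.dist(sigma=1, size=ndims) for _ in range(n_clusters)]
    chols = [pm.LKJCholeskyCov(f"chol_cov{i}", n=ndims, eta=2,
                               sd_dist=sigmas[i], compute_corr=True)[0]
             for i in range(n_clusters)]
    components = [pm.MvNormal.dist(mu=centroids[i], chol=chols[i])
                  for i in range(n_clusters)]

    # Mixture weights and likelihood
    weights = pm.Dirichlet("weights", a=np.ones(n_clusters))
    pm.Mixture("obs", w=weights, comp_dists=components, observed=data)

    idata = pm.sample(random_seed=0)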

In all of 5.8.2, 5.9.0, and 5.9.1 I get the following warning:

/home/avicenna/miniconda3/envs/pymc_env_5_8/lib/python3.11/site-packages/pytensor/compile/function/types.py:970: RuntimeWarning: invalid value encountered in accumulate
  self.vm()

However, the computation is much faster on 5.8.2 and 5.9.0 (1-2 minutes as opposed to 10-20 minutes), while 5.9.1 still gets stuck… Maybe this helps other people having speed issues with their MvNormal mixtures. I do realize this is a more flexible model, in that the previous one was diagonal-only, but even with eta increased to 20, which should produce mostly diagonal covariances, I still get the speed boost.
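
For completeness, this is roughly how I escalate that RuntimeWarning to an error so the failing chain raises instead of just warning (a sketch: the use of Python's warnings filter here is an assumption about my exact setup, and model / random_seed stand in for the actual model and seed):

import warnings

import pymc as pm

# Promote numpy's "invalid value encountered in accumulate" RuntimeWarning to an
# exception so the failing chain stops with a full traceback.
warnings.simplefilter("error", RuntimeWarning)

random_seed = 0  # placeholder seed
with model:  # the mixture model defined above
    idata = pm.sample(random_seed=random_seed)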

For reference, here is the traceback I get when the warning is escalated to an error:

Traceback (most recent call last):
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/compile/function/types.py", line 970, in __call__
    self.vm()
RuntimeWarning: invalid value encountered in accumulate

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/parallel.py", line 122, in run
    self._start_loop()
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/parallel.py", line 174, in _start_loop
    point, stats = self._step_method.step(self._point)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/arraystep.py", line 174, in step
    return super().step(point)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/arraystep.py", line 100, in step
    apoint, stats = self.astep(q)
                    ^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/base_hmc.py", line 198, in astep
    hmc_step = self._hamiltonian_step(start, p0.data, step_size)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/nuts.py", line 197, in _hamiltonian_step
    divergence_info, turning = tree.extend(direction)
                               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/nuts.py", line 290, in extend
    tree, diverging, turning = self._build_subtree(
                               ^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/nuts.py", line 371, in _build_subtree
    return self._single_step(left, epsilon)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/nuts.py", line 330, in _single_step
    right = self.integrator.step(epsilon, left)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/integration.py", line 82, in step
    return self._step(epsilon, state)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/step_methods/hmc/integration.py", line 118, in _step
    logp = self._logp_dlogp_func(q_new, grad_out=q_new_grad)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/model/core.py", line 378, in __call__
    cost, *grads = self._pytensor_function(*grad_vars)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/compile/function/types.py", line 983, in __call__
    raise_with_op(
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/link/utils.py", line 535, in raise_with_op
    raise exc_value.with_traceback(exc_trace)
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/compile/function/types.py", line 970, in __call__
    self.vm()
RuntimeWarning: invalid value encountered in accumulate
Apply node that caused the error: CumOp{None, add}(Subtensor{::step}.0)
Toposort index: 324
Inputs types: [TensorType(float64, shape=(None,))]
Inputs shapes: [(3,)]
Inputs strides: [(-8,)]
Inputs values: [array([-inf,   0.,  inf])]
Outputs clients: [[Subtensor{::step}(CumOp{None, add}.0, -1)]]

Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1204, in access_term_cache
    input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)

HINT: Use the PyTensor flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node.
"""

The above exception was the direct cause of the following exception:

RuntimeWarning: invalid value encountered in accumulate
Apply node that caused the error: CumOp{None, add}(Subtensor{::step}.0)
Toposort index: 324
Inputs types: [TensorType(float64, shape=(None,))]
Inputs shapes: [(3,)]
Inputs strides: [(-8,)]
Inputs values: [array([-inf,   0.,  inf])]
Outputs clients: [[Subtensor{::step}(CumOp{None, add}.0, -1)]]

Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1204, in access_term_cache
    input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)

HINT: Use the PyTensor flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/avicenna/Dropbox/data_analysis/MODELING/bayesian_clustering/debug/pymc_debug.py", line 80, in <module>
    idata = pm.sample(random_seed=random_seed)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/mcmc.py", line 764, in sample
    _mp_sample(**sample_args, **parallel_args)
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/mcmc.py", line 1153, in _mp_sample
    for draw in sampler:
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/parallel.py", line 448, in __iter__
    draw = ProcessAdapter.recv_draw(self._active)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pymc/sampling/parallel.py", line 330, in recv_draw
    raise error from old_error
pymc.sampling.parallel.ParallelSamplingError: Chain 3 failed with: invalid value encountered in accumulate
Apply node that caused the error: CumOp{None, add}(Subtensor{::step}.0)
Toposort index: 324
Inputs types: [TensorType(float64, shape=(None,))]
Inputs shapes: [(3,)]
Inputs strides: [(-8,)]
Inputs values: [array([-inf,   0.,  inf])]
Outputs clients: [[Subtensor{::step}(CumOp{None, add}.0, -1)]]

Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1049, in <listcomp>
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1374, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/home/avicenna/miniconda3/envs/pymc_env_5_9_0/lib/python3.11/site-packages/pytensor/gradient.py", line 1204, in access_term_cache
    input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)