Runtime broadcasting not allowed with subtensor to handle unbalanced panel

Hi all,

I have data from M instruments, each of which records three numbers per observation. The instruments (which I call channels) take different numbers of observations, for N observations in total, so the data are stored in an array of shape (N, 4) where each row has the form (x1, x2, x3, ch). Each instrument has a different resolution, recorded in an (M, 1) array. In the likelihood, based on some posts I have seen about unbalanced panels, I use the per-row channel metadata to map resolution-by-channel to resolution-by-data-point with these lines:

ch_indices = value[:, -1].astype('int32') - 1

resolution_per_event = resolution_bych[:, ch_indices]
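For concreteness, here is the intended mapping in plain NumPy (the data values, and the assumption that channels lie along the second axis of resolution_bych, are mine, not from the original model):

```python
import numpy as np

# Toy stand-ins; all names and shapes here are assumptions based on the
# description above. Note that resolution_bych[:, ch_indices] selects
# columns, so channels must lie along the second axis for this to work;
# an (M, 1) array would need transposing first.
resolution_bych = np.array([[0.01, 0.02, 0.03]])  # one resolution per channel
value = np.array([
    [0.1, 0.2, 0.3, 1],
    [0.4, 0.5, 0.6, 2],
    [0.7, 0.8, 0.9, 2],
    [1.0, 1.1, 1.2, 3],
    [1.3, 1.4, 1.5, 3],
])                                                # (N, 4): (x1, x2, x3, ch)

ch_indices = value[:, -1].astype('int32') - 1     # 1-based channel -> 0-based index
resolution_per_event = resolution_bych[:, ch_indices]
print(resolution_per_event)  # [[0.01 0.02 0.02 0.03 0.03]]
```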

Originally, I had no issue with pymc 5.23.0 and pytensor 2.31.6, but I recently moved to a different machine with pymc 5.27.1 and pytensor 2.37.0, and now I get this error:

ValueError: Runtime broadcasting not allowed. AdvancedIncSubtensor1 was asked to broadcast the second input (y) along a dimension that was not marked as broadcastable. If broadcasting was intended, use `specify_broadcastable` on the relevant dimension(s).
Apply node that caused the error: AdvancedIncSubtensor1{inplace,inc}(AdvancedIncSubtensor1{inplace,inc}.0, Composite{(((0.015052822133943787 * i0 * i1 * i2 * i3 * i4 * i5 * i6 * i7 * i8 * i9) / (composite{sqr(sqr(i0))}(i9) * i10)) + ((-0.015052822133943787 * i11 * i12 * i13 * i3 * i4 * i5 * i14 * i15) / (i16 * i9)))}.0, Composite{(-1 + cast{int32}(i0))}.0)
Toposort index: 6537
Inputs types: [TensorType(float64, shape=(988, 1)), TensorType(float64, shape=(None, 1)), TensorType(int32, shape=(None,))]
Inputs shapes: [(988, 1), (1, 1), (1,)]
Inputs strides: [(8, 8), (8, 8), (4,)]
Inputs values: ['not shown', array([[4.78976725e-05]]), array([163], dtype=int32)]
Outputs clients: [[Transpose{axes=[1, 0]}(AdvancedIncSubtensor1{inplace,inc}.0)]]

When I sample instead with `with pytensor.config.change_flags(mode="FAST_COMPILE"):`, I get the following error:

AttributeError: 'Scratchpad' object has no attribute 'ufunc'
Apply node that caused the error: Add(Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Sum{axis=1}.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0, Mul.0)
Toposort index: 5333
Inputs types: [TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), 
TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, 
shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,)), TensorType(float64, shape=(1,))]
Inputs shapes: [(1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,), (1,)]
Inputs strides: [(8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,), (8,)]
Inputs values: [array([-4.2879988]), array([-0.04128957]), array([0.]), array([nan]), array([-0.28460879]), array([0.]), array([18.39713441]), array([17.20057779]), array([0.]), array([3.41462699]), array([3.05119675]), array([0.]), array([-0.01580804]), array([-3.72083804]), array([0.]), array([5.08187692]), array([5.68249252]), array([0.]), array([8.94576271]), array([-0.02708504]), array([0.]), array([11.47833901]), array([2.2894161]), array([0.]), array([10.65238756]), array([7.747105]), array([0.]), array([18.12581245]), array([14.2460399]), array([0.]), array([4.6540399]), array([15.01225914]), array([0.]), array([5.48506862]), array([5.45417867]), array([0.]), array([14.83597904]), array([7.54816756]), array([0.]), array([2.35148759]), array([14.27202046]), array([0.]), array([38.32995848]), array([16.70820967]), array([0.]), array([27.62051401]), array([1.64847679]), array([0.]), array([5.4059814]), array([7.55477289]), array([0.]), array([0.30526211]), array([1.02577146]), array([0.]), array([-6.32408741]), array([-26.03229914]), array([0.]), array([5.19699546]), array([20.07385512]), array([0.]), array([4.59940222]), array([-2.58556774]), array([-0.09323889]), array([-3.89170259]), array([-743.85592861]), array([-1.86300651e-09]), array([2.07716941]), array([-1.49252035]), array([0.]), array([8.77524458]), array([7.23861379]), array([0.]), array([13.11552743]), array([18.16641923]), array([0.]), array([0.54810408]), array([0.31739009]), array([0.02117952]), array([0.67224628]), array([0.40391988]), array([0.02619354]), array([0.58992523]), array([0.3644515]), array([0.01849125]), array([0.37887342]), array([0.16598838]), array([0.00584371]), array([0.30158501]), array([0.18862295]), array([0.01356187]), array([0.6494278]), array([0.38346327]), array([0.02234521]), array([1.68534568]), array([0.88696603]), array([0.07086117]), array([0.77390046]), array([0.34739647]), array([0.01674503]), array([1.77885345]), array([0.83913467]), array([0.06401936]), 
array([0.4941687]), array([0.28892933]), array([0.01249134]), array([2.07984814]), array([1.03581824]), array([0.0826407]), array([0.27489828]), array([0.25016811]), array([0.0096183]), array([1.33299726]), array([0.85570351]), array([0.0593678]), array([1.41072642]), array([0.67346461]), array([0.04574047]), array([1.27320523]), array([0.68186156]), array([0.04087503]), array([0.66081565]), array([0.51876055]), array([0.02815673]), array([0.2222169]), array([0.12593691]), array([0.00390031]), array([0.61517794]), array([0.26803248]), array([0.01094851]), array([0.82312145]), array([0.32634916]), array([0.01344889]), array([0.77830571]), array([0.3559591]), array([0.01633021]), array([1.15792464]), array([0.54904981]), array([0.03475172]), array([0.37430371]), array([0.24900533]), array([0.00999615]), array([0.55528708]), array([0.3139234]), array([0.0147102]), array([1.7423283]), array([0.90380964]), array([0.07212362]), array([1.00128578]), array([0.53608236]), array([0.03074828])]
Outputs clients: [[Mul(Add.0, ExpandDims{axis=0}.0)]]

Is there a different way I can broadcast from resolution by channel to resolution by event?

Thank you!

You need to tell PyMC/PyTensor that the dimension will always be 1 and should broadcast. If the variable comes from pm.Data, you can pass the shape argument with None for dimensions that are allowed to change and 1 for the one that should broadcast.

If it comes from somewhere else, you can use pt.specify_shape, again with a mix of None and 1.

Example:

import pymc as pm

with pm.Model() as m:
    x = pm.Data("x", [1], shape=(1,))
    y = pm.Data("y", [1, 2, 3, 4, 5], shape=(None,))
    z = x + y

pm.draw(z)

If you didn't specify shape=(1,), this would fail, because the shape of x would be allowed to change between calls, while the graph must always mean the same computation (it either always broadcasts or never does).

The same principle applies to advanced indexing, which is stricter now than it used to be.

Thanks for the advice!

I am now running into another issue that confuses me: this fix works when resolution_bych is a fixed array, but I have an extended model where resolution_bych instead comes from fitting some other calibration data. In that case the model samples conditioned on two data sets: the calibration data and the (N, 4) data. Now, when N = 1, I get exactly the same error as before, even though I specified the shape of resolution_bych, which had fixed the issue when resolution_bych held fixed values.

Could you suggest what the issue might be in this case?

Hard to say without seeing more of the actual code. At some point you are losing static shape information, and you need to bring it back; you can use pt.specify_shape for that. To check the static shape of intermediate variables, print x.type.shape