`pm.sample_prior_predictive()` fails with incorrect size info passed to the `rng_fn` function of the RandomVariable

Hi,

We are trying to perform prior predictive sampling with a custom distribution that we built. The code is like this:

with pm.Model() as ddm_pymc:
    v = pm.Uniform("v", lower=-10.0, upper=10.0)
    a = pm.HalfNormal("a", sigma=2.0)
    z = pm.Uniform("z", lower=0.01, upper=0.99)
    t = pm.Uniform("t", lower=0.0, upper=0.6, initval=0.1)

    ddm = DDM("ddm", v=v, a=a, z=z, t=t, observed=dataset.values)
    prior_predictives = pm.sample_prior_predictive(500)

The sampling fails, and when we inspected the parameters passed to RandomVariable.rng_fn(), the size argument was (1000, 2), which is exactly the shape of the observed data, no matter how many prior predictive samples we specify. PyMC doesn’t seem to pass the requested number of draws to rng_fn, which causes the failure.
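
For reference, our RandomVariable looks roughly like this (a simplified placeholder with dummy draws, not the actual HSSM code); this is how we inspected the size argument:

import pytensor
from pytensor.tensor.random.op import RandomVariable


class DDMRV(RandomVariable):
    """Simplified placeholder, not our actual implementation."""

    name = "ddm"
    ndim_supp = 0
    ndims_params = [0, 0, 0, 0]
    dtype = "floatX"

    @classmethod
    def rng_fn(cls, rng, v, a, z, t, size):
        # When called from pm.sample_prior_predictive(500), this still prints
        # (1000, 2) -- the shape of the observed data, not the number of draws.
        print("size passed to rng_fn:", size)
        # Dummy draws, just to illustrate the shapes involved
        return rng.normal(size=size).astype(pytensor.config.floatX)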

I wonder if there is any workaround?

Thanks!

Below is the additional error message:

TypeError: an integer is required
Apply node that caused the error: SSM_RV_rv{0, (0, 0, 0, 0), floatX, True}(RandomGeneratorSharedVariable(<Generator(PCG64) at 0x29A0AE4A0>), [1000    2], 10, v, a, z, t)
Toposort index: 4
Inputs types: [RandomGeneratorType, TensorType(int64, shape=(2,)), TensorType(int64, shape=()), TensorType(float32, shape=()), TensorType(float32, shape=()), TensorType(float32, shape=()), TensorType(float32, shape=())]
Inputs shapes: ['No shapes', (2,), (), (), (), (), ()]
Inputs strides: ['No strides', (8,), (), (), (), (), ()]
Inputs values: [Generator(PCG64) at 0x29A0AE4A0, array([1000,    2]), array(10), array(6.293975, dtype=float32), array(1.4511099, dtype=float32), array(0.6649935, dtype=float32), array(0.04510791, dtype=float32)]
Outputs clients: [['output'], ['output']]

sample_prior_predictive does not request all the draws at once. It requests one draw at a time, each with the size of your RV (which in your case is (1000, 2)). With the 500 draws you requested, you should end up with something like (500, 1000, 2).

You can try to call pm.draw(ddm) to debug more quickly. That should return one draw with the same shape as the observed data if you implemented it correctly.
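
Something like this, with the model above:

draw = pm.draw(ddm)
print(draw.shape)  # should be (1000, 2), matching the observed data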

Also, you may have a multivariate RV (if the entries along that last dimension of length 2 are not just independent RVs following the same distribution), in which case you must tell PyMC/PyTensor that ndim_supp=1. In that case PyMC will ask you to return draws with size=(1000,), not (1000, 2); the shape of the returned draws should still be (1000, 2). You can read more about it here: Distribution Dimensionality — PyMC 5.6.1 documentation
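
Roughly (class and parameter names here are just illustrative, not your actual code), the contract with ndim_supp=1 looks like this:

import pytensor
from pytensor.tensor.random.op import RandomVariable


class DDMRV(RandomVariable):
    name = "ddm"
    ndim_supp = 1  # the trailing dimension of length 2 is support, not batch
    ndims_params = [0, 0, 0, 0]
    dtype = "floatX"

    @classmethod
    def rng_fn(cls, rng, v, a, z, t, size):
        # size now only covers the batch dimensions, e.g. (1000,); the returned
        # array must still include the support dimension, i.e. shape (1000, 2).
        size = () if size is None else tuple(size)
        draws = rng.normal(size=size + (2,))  # stand-in for the real simulator
        return draws.astype(pytensor.config.floatX)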

Thank you, @ricardoV94! I suspect the ndim_supp is the culprit here. We had to define ndim_supp=0 even though the RandomVariable is multivariate, because all its parameters are scalars, and setting ndim_supp=1 would result in errors. I know this is kind of a hack and violates some assumptions in PyTensor, but it has worked well for us until now. Do you know if there are ways to get around this?

Multivariate variables can have scalar parameters (for example, LKJCorr takes only scalar parameters but returns a vector of correlations), so that shouldn’t be an issue.

Here’s the output from trying to run MCMC sampling with the script above after changing ndim_supp from 0 to 1. It seems there is an internal check that expects at least one of the parameters to have at least one dimension, so that the support shape can be inferred from it.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[3], line 7
      4 z = pm.Uniform("z", lower=0.01, upper=0.99)
      5 t = pm.Uniform("t", lower=0.0, upper=0.6, initval=0.1)
----> 7 ddm = DDM("ddm", v=v, a=a, z=z, t=t, observed=dataset.values)
      8 # prior_predictives = pm.sample_prior_predictive(500)
     10 ddm_pymc_trace = pm.sample()

File ~/HSSM/.venv/lib/python3.9/site-packages/pymc/distributions/distribution.py:308, in Distribution.__new__(cls, name, rng, dims, initval, observed, total_size, transform, *args, **kwargs)
    305     elif observed is not None:
    306         kwargs["shape"] = tuple(observed.shape)
--> 308 rv_out = cls.dist(*args, **kwargs)
    310 rv_out = model.register_rv(
    311     rv_out,
    312     name,
   (...)
    317     initval=initval,
    318 )
    320 # add in pretty-printing support

File ~/HSSM/src/hssm/distribution_utils/dist.py:276, in make_distribution.<locals>.SSMDistribution.dist(cls, **kwargs)
    272 dist_params = [
    273     pt.as_tensor_variable(pm.floatX(kwargs[param])) for param in cls.params
    274 ]
    275 other_kwargs = {k: v for k, v in kwargs.items() if k not in cls.params}
--> 276 return super().dist(dist_params, **other_kwargs)

File ~/HSSM/.venv/lib/python3.9/site-packages/pymc/distributions/distribution.py:387, in Distribution.dist(cls, dist_params, shape, **kwargs)
    385     ndim_supp = cls.rv_op(*dist_params, **kwargs).owner.op.ndim_supp
    386 create_size = find_size(shape=shape, size=size, ndim_supp=ndim_supp)
--> 387 rv_out = cls.rv_op(*dist_params, size=create_size, **kwargs)
    389 rv_out.logp = _make_nice_attr_error("rv.logp(x)", "pm.logp(rv, x)")
    390 rv_out.logcdf = _make_nice_attr_error("rv.logcdf(x)", "pm.logcdf(rv, x)")

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/tensor/random/op.py:289, in RandomVariable.__call__(self, size, name, rng, dtype, *args, **kwargs)
    288 def __call__(self, *args, size=None, name=None, rng=None, dtype=None, **kwargs):
--> 289     res = super().__call__(rng, size, dtype, *args, **kwargs)
    291     if name is not None:
    292         res.name = name

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/graph/op.py:295, in Op.__call__(self, *inputs, **kwargs)
    253 r"""Construct an `Apply` node using :meth:`Op.make_node` and return its outputs.
    254 
    255 This method is just a wrapper around :meth:`Op.make_node`.
   (...)
    292 
    293 """
    294 return_list = kwargs.pop("return_list", False)
--> 295 node = self.make_node(*inputs, **kwargs)
    297 if config.compute_test_value != "off":
    298     compute_test_value(node)

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/tensor/random/op.py:334, in RandomVariable.make_node(self, rng, size, dtype, *dist_params)
    329 elif not isinstance(rng.type, RandomType):
    330     raise TypeError(
    331         "The type of rng should be an instance of either RandomGeneratorType or RandomStateType"
    332     )
--> 334 shape = self._infer_shape(size, dist_params)
    335 _, static_shape = infer_static_shape(shape)
    336 dtype = self.dtype or dtype

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/tensor/random/op.py:210, in RandomVariable._infer_shape(self, size, dist_params, param_shapes)
    208         return size
    209     else:
--> 210         supp_shape = self._supp_shape_from_params(
    211             dist_params, param_shapes=param_shapes
    212         )
    213         return tuple(size) + tuple(supp_shape)
    215 # Broadcast the parameters

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/tensor/random/op.py:160, in RandomVariable._supp_shape_from_params(self, dist_params, **kwargs)
    152 def _supp_shape_from_params(self, dist_params, **kwargs):
    153     """Determine the support shape of a `RandomVariable`'s output given its parameters.
    154 
    155     This does *not* consider the extra dimensions added by the `size` parameter
   (...)
    158     Defaults to `param_supp_shape_fn`.
    159     """
--> 160     return default_supp_shape_from_params(self.ndim_supp, dist_params, **kwargs)

File ~/HSSM/.venv/lib/python3.9/site-packages/pytensor/tensor/random/op.py:78, in default_supp_shape_from_params(ndim_supp, dist_params, rep_param_idx, param_shapes)
     76 ref_param = dist_params[rep_param_idx]
     77 if ref_param.ndim < ndim_supp:
---> 78     raise ValueError(
     79         "Reference parameter does not match the "
     80         f"expected dimensions; {ref_param} has less than {ndim_supp} dim(s)."
     81     )
     82 return ref_param.shape[-ndim_supp:]

ValueError: Reference parameter does not match the expected dimensions; v has less than 1 dim(s).

That’s not a very useful error message. By default PyTensor looks for some multivariate parameter to try and guess the support shape. I think we should disable that default and raise an informative message.

You need to override the _supp_shape_from_params method. In your case it’s pretty simple: just return (2,), as in this RV: https://github.com/pymc-labs/pymc-marketing/blob/main/pymc_marketing/clv/distributions.py#L63-L64
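
i.e., something along these lines on your RandomVariable subclass:

from pytensor.tensor.random.op import RandomVariable


class DDMRV(RandomVariable):
    ...  # name, ndim_supp=1, ndims_params, dtype, rng_fn as in your current RV

    def _supp_shape_from_params(self, dist_params, param_shapes=None):
        # Each draw always has a trailing support dimension of length 2
        return (2,)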

This guide is a bit outdated but may also be useful: Implementing a RandomVariable Distribution — PyMC 5.6.1 documentation

Thank you so much, @ricardoV94!! This is exactly what I was looking for. Really appreciate the tip!