Why doesn't pytensor.stack carry over shape information?

I recently came across a bug in a model where I was seeking to pass a stack() of TensorVariables to pymc. My issue boiled down to the following example:

```python
import pytensor.tensor as pt

a = pt.TensorType("float64", (3,))()
b = pt.TensorType("float64", (3,))()
c = pt.stack([a, b])
print(c.type)  # TensorType(float64, (?, ?))
```

I had expected that the shape information stored in the TensorType would be (2, 3). Instead, it seems that pytensor sets the shape information to be unknown/symbolic (hopefully I’m using the correct terminology there). Is there a reason for this?

And, follow-up, is it correct practice to use pt.specify_shape() in the case where I know the fixed shape that the variable is going to take on?

Mainly asking for curiosity’s sake. (This is marked “v5” but the same behavior seems to occur at least in recent versions of aesara as well.) Thanks!


I think @michaelosthege recently “fixed” that in PyTensor.

This is not a bug though, just less desirable behavior: (None, None) is a valid supertype for any matrix, so the graph is still correct, only less informative.

The reason is that static shapes are quite recent and we are still updating Ops to reflect them when possible.

If you are interested, it would be nice to put together a list of Ops that still lack obvious static shape output, and even to start improving them. We can use all the help :slight_smile:

That should be done in our PyTensor GitHub repository.
