Float32 not working with pytensor, worked with aesara

It is supported, but it’s also easy for float64 to sneak in. One place where this happens frequently is with Discrete distributions, whose values default to int64 and are then upcast to float64 when almost any floating-point operation is applied to them (e.g., log).

The same happens with shape operations, since shapes are always represented as int64 by Aesara/PyTensor.

If you can share the model that used to work and now fails (in a way that’s fully reproducible), someone may be able to point out the source of the problem.