Big tensors and good practice

Hi there,

I don’t know whether my question will make sense, but I’m trying to construct a complicated tensor involving a large array and functions like swapaxes, reshape, sum, etc. The problem is that I’d like to allow a case in which that tensor is repeated several times as "sub"tensors (typically: different slices for a sum), and I quickly run into a memory issue. So my question is this: is there a method or good practice to “share” a tensor variable so that it isn’t loaded into memory for each "sub"tensor calculation? Is this the idea behind pytensor.shared?
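To make this concrete, here is a rough sketch of the kind of construction I mean; the array, the shapes, and the slicing pattern are just placeholders for illustration, not my actual code:

```python
import numpy as np
import pytensor
import pytensor.tensor as pt

# Placeholder "large" array -- the real one is much bigger.
big = np.random.rand(1000, 50, 20)

# Naive version: the NumPy array is wrapped as a constant, and the same
# construction is repeated for several slices ("sub"tensors).
pieces = []
for i in range(10):
    sub = pt.constant(big)[i * 100:(i + 1) * 100]  # one slice of the big array
    pieces.append(sub.swapaxes(0, 2).reshape((20, -1)).sum(axis=1))
total = pt.stack(pieces).sum()

f = pytensor.function([], total)
print(f())

# The question: would building the graph from a single shared variable,
# e.g. big_shared = pytensor.shared(big), keep only one copy of the
# array in memory for all of the sub-tensor expressions?
```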

Cheers,
Vian

Can you provide an illustrative example of the naive way?

Never mind, the code was not right to start with. We can delete this topic.