I imagine you probably want just `pt.linalg.solve`, not `tensorsolve`. What are the dimensions of `A` and `BC`?
If you are going to wrap everything into an `Op`, you don't need to use pytensor functions. Basically you have two choices:

- Express everything using only pytensor functions. This means you don't need a custom `Op` anywhere, and you will automatically get access to gradients.
- Wrap your numpy code in a pytensor `Op`. In this case you don't need to use pytensor functions, because everything in the `perform` method is just Python code. You will not have gradients in this case, and will have to use a non-NUTS sampler.

In either case, what you don't want to do is mix pytensor and numpy. Pick one and implement it.
That said, unless you have extremely complex control flow, exotic linear algebra routines, or something else highly specialized, you are better off picking option 1. 99% of the time people think they're in case 2, but actually they're in case 1. It looks like this is probably the case for you. I'd try dropping this object-oriented approach in favor of something more functional, and see if you can achieve the same results by using only pytensor functions everywhere (change all `np.` to `pt.`). Basically, break out the class methods into regular functions. You're not doing anything object-oriented anyway: you just make an `Element`, call a single method, then throw it away. You can get rid of the loop inside `FEpyMCsoln` by writing functions for single inputs, then using `pt.vectorize` to broadcast your function across a batch of inputs (it works the same as `np.vectorize`).