# Sparse Linear System Solving Inside PyMC

Hello all.

I am currently writing up a PyMC model in which, at some point, I need to solve a sparse linear system. The trouble is that while a sparse submodule exists for pytensor, it doesn’t appear to have its own version of scipy.sparse’s spsolve.

As a result, you either have to use pytensor.tensor.slinalg.solve, which doesn’t appear to take advantage of the sparsity at all, or scipy.sparse.linalg.spsolve, which won’t accept a pytensor variable as the right-hand side.

Maybe this is more suited to a pytensor discussion, or maybe I am missing something obvious, but any help would be appreciated.

Hi Daniel,

You’re right that the solve in pytensor.tensor.slinalg doesn’t take advantage of sparsity. Also, you’re never allowed to just pass pytensor symbolic variables into functions from other packages (like scipy).

If you don’t need gradients or compiling to JAX/Numba/PyTorch, it is possible to wrap arbitrary code (such as scipy.sparse.linalg.spsolve) in a pytensor Op so you can use it in a PyMC model. Here’s an example:

import numpy as np
import pytensor
from scipy import sparse
from pytensor.sparse import csc_dmatrix
from pytensor.compile.ops import as_op

@as_op(itypes=[csc_dmatrix, csc_dmatrix], otypes=[csc_dmatrix])
def sparse_solve(a, b):
    return sparse.linalg.spsolve(a, b)

# Example from the spsolve docstring
A = sparse.csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
B = sparse.csc_matrix([[2, 0], [-1, 0], [2, 0]], dtype=float)
x = sparse.linalg.spsolve(A, B)

# Showing the wrapped Op works -- you will NOT do this in a PyMC model
A_pt, B_pt = csc_dmatrix('A'), csc_dmatrix('B') # symbolic inputs
x_pt = sparse_solve(A_pt, B_pt) # symbolic output

Have a look at the graph we get. You can see that both the inputs and the outputs are sparse.

x_pt.dprint(print_type=True)

FromFunctionOp{sparse_solve} [id A] <SparseTensorType{dtype='float64', format='csc', shape=(None, None)}>
├─ SparseVariable{csc,float64} [id B] <SparseTensorType{dtype='float64', format='csc', shape=(None, None)}>
└─ SparseVariable{csc,float64} [id C] <SparseTensorType{dtype='float64', format='csc', shape=(None, None)}>

You can also compile the function with the basic C backend (PyMC does this for you when you call pm.sample; you’ll never do it yourself as a PyMC user):

f = pytensor.function(inputs=[A_pt, B_pt], outputs=x_pt)
np.allclose(f(A, B).todense(), x.todense())  # True

But you can’t compile to alternative backends like jax/numba/pytorch:

f = pytensor.function(inputs=[A_pt, B_pt], outputs=x_pt, mode='JAX')
# NotImplementedError: No JAX conversion for the given `Op`: FromFunctionOp{sparse_solve}

And you also can’t ask for gradients (so you can’t use gradient-based samplers like NUTS):

from pytensor.sparse import dense_from_sparse

pytensor.grad(dense_from_sparse(x_pt).sum(), wrt=A_pt)
# NotImplementedError:

Help is definitely wanted on the sparse submodule – it’s waiting for someone to come along and give it some love. @tcapretto was doing some work with sparse stuff a while back – he might know a specific place to start? For my part, I suggest you open an issue on the pytensor repo. After that, if you’re willing to help implement a sparse solve, we’d be very thankful for the contribution.


Wow, thanks for the fantastic answer (so quickly, too!). A shame to hear that such a thing does not exist yet, as right now I am not sure I possess the knowledge to create it myself. The benefit to my and others’ work might outweigh the cost of spending the time to build it, though…

I’ll keep that in mind. Thanks again.

At least open the issue! But anything you’re passionate about (read: need for your work) is a great place to start contributing to a FOSS project. You can work together with the other devs to figure out broadly what needs to be done (for example finding an implementation in another autograd package we can copy), then the rest is just details.

I’m not sure where it’s best to start. But we should definitely start gathering all the improvements we want for the sparse module, and perhaps we can apply for a NumFOCUS Small Development Grant or something similar? I would be interested in that.
