I am having trouble defining the gradient of a deterministic random variable.

The covariance matrix in my model depends on a parameter tau. I am given a precomputed covariance matrix containing floats, +inf and -inf entries. I want to replace each +inf entry with 0, each -inf entry with tau, and every other entry x with tau - x.
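For concreteness, here is a minimal plain-NumPy sketch of that entrywise mapping (the function name `transform_covm` is just for illustration):

```
import numpy as np

def transform_covm(covm, tau):
    """Map the precomputed matrix entrywise:
    +inf -> 0, -inf -> tau, finite x -> tau - x."""
    out = tau - covm                           # finite entries become tau - x
    out = np.where(covm == np.inf, 0.0, out)   # +inf entries become 0
    out = np.where(covm == -np.inf, tau, out)  # -inf entries become tau
    return out
```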

To use NUTS, I tried to implement this as a Theano Op:

```
import numpy as np
import theano as th
import theano.tensor as tt

class CovMatrixOp(th.gof.Op):
    itypes = [tt.dscalar]  # input: the scalar tau
    otypes = [tt.dmatrix]  # output: the transformed covariance matrix

    def __init__(self, *args, **kwargs):
        super(CovMatrixOp, self).__init__(*args, **kwargs)

    def perform(self, node, inputs, outputs):
        tau = inputs[0]

        def helper(cell):
            if cell == np.inf:
                return 0
            if cell == -np.inf:
                return tau
            return tau - cell

        # covmPrecomputed is the given matrix, defined elsewhere
        outputs[0][0] = np.vectorize(helper)(covmPrecomputed)

    def grad(self, inputs, output_gradients):
        tau = inputs[0]  # not even necessary

        def helper(cell):
            if cell == np.inf:
                return 0
            # both -inf and finite entries have derivative 1 w.r.t. tau
            return 1

        return [np.vectorize(helper)(covmPrecomputed)]
```

In the model context, I then call:

```
covm = CovMatrixOp()(0.1 + tau_)
```

and use this as the covariance matrix of a MvNormal.

However, NUTS fails to initialize, and when I use MAP estimation instead, it reports that the gradient is not available.

I’d be very grateful for any suggestions!