Errors when calling a function when using pymc3

I’m using pymc3. I use a normal likelihood; my parameters are mu1, sigma1, and a. When I use pm.Normal as my likelihood, I need to set mu = Ft(mu1, sigma1, a). The purpose of Ft is to convolve the array ft1 with the array nor_resp; I defined it myself, like this:

from scipy import signal

def Ft(A, sigma2, mu2, a):
    # convolve the two arrays and return the result
    ft = signal.convolve(ft1, nor_resp, mode='full', method='auto')
    return ft
with pm.Model() as final_model:
    amp = pm.Uniform('amp',lower=-1.0,upper=0)
    mu1 = pm.Uniform('mu1', lower=-50,upper=50)
    sigma1 = pm.Uniform('sigma1',lower=0,upper=50)
    mu = Ft(amp, sigma1, mu1, a1)
    output = pm.Deterministic('output',Ft(amp,sigma1,mu1,a1))
    prior = pm.sample_prior_predictive()
    posterior_f = pm.sample(draws = Samples, target_accept = 0.9,chains=4,cores=1)
    posterior_f = pm.sample_posterior_predictive(posterior_f)
    az.plot_trace(posterior_f, var_names = ['amp','mu1','sigma1','a1'])
    only_background=az.summary(posterior_f, var_names = ['amp','mu1','sigma1','a1'])

But I got an error, it seems to come from the convolution in Ft:

signal.convolve(ft1,nor_resp, mode='full',method='auto')

When using convolve, I need to give it two arrays, but when I call Ft, it seems I’m passing it a parameter of type TensorVariable. This might cause the problem.
I don’t know what to do; any advice is helpful. Thanks.

In general, you cannot use numpy or scipy functions together with pytensor tensors. This is especially true for scipy, which is often calling to compiled code under the hood. These don’t know what pytensor symbolic tensors are, and can’t do anything with them.

You need to use an appropriate function from pytensor, or write your own if it doesn’t exist. See here, here, and here for examples/discussion.

Thanks. In my problem, ft1 and nor_resp are two time series; ft1 can be written as ft1(t; sigma, tau, a), and nor_resp(t) is just a function of t. Can I use Theano to accomplish the correct convolution? Sorry, I have little experience in this area.

You can represent arbitrary recursive computation using pytensor.scan. See the docs here for details. Here is a simple rolling average of noisy data, computed using a 1d convolution with a rectangular window function:

import pytensor
import pytensor.tensor as pt

def rolling_average(data, window_size):
    def _conv_1d(start, stop, data, kernel):
        # dot product of one window with the kernel
        return (data[start:stop] * kernel).sum()

    kernel = pt.full((window_size,), 1 / window_size)
    rolling_average, _ = pytensor.scan(
        _conv_1d,
        sequences=[{'input': pt.arange(data.shape[0]), 'taps': [-window_size, 0]}],
        non_sequences=[data, kernel],
    )
    return rolling_average

Data and result: [plot of the noisy series and its rolling average omitted]


For your purposes you might need a function to compute the kernel, but after that things should be roughly similar.

Scan is very general and very powerful; it’s a must-learn if you’re working with time series. Nevertheless there are also some convolution functions still kicking around in the code base from Theano’s neural net roots:

# conv is not imported automatically by default
from pytensor.tensor import conv

kernel = pt.full((window_size,), 1 / window_size)
smooth2 = conv.causal_conv1d(
    data[None, None, :],
    kernel[None, None, :],
    filter_shape=(1, 1, 10),
    input_shape=(1, 1, 100),
).eval()


This is equivalent to np.convolve(data, kernel, mode='full')[:data.shape[0]]. Note that everything needs to be written with 3d tensors now, because these functions were written with batch dims and stacks of convolution filters in mind (think CNNs over sentences).
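The truncation can be illustrated with plain numpy (a standalone sketch with made-up arrays):

```python
import numpy as np

data = np.arange(6.0)     # length 6
kernel = np.ones(3) / 3   # length 3

full = np.convolve(data, kernel, mode='full')  # length 6 + 3 - 1 = 8
causal = full[:data.shape[0]]                  # keep the first len(data) samples

print(full.shape)    # (8,)
print(causal.shape)  # (6,)
```

Each entry of `causal` only depends on data at or before its own index, which is what makes the truncated-full convolution "causal".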

It would be interesting to benchmark the old “optimized” C code (used in the second example) against the modern JAX backend (which the first example could be compiled to), but I leave this as an exercise for the interested reader.


Thanks, but I ran into a new problem. In numpy there are three convolve modes; I use ‘full’ for my convolution, which returns an array of length len(A) + len(B) - 1. But if I use pytensor.tensor.conv, the returned tensor has a different length. I want the same ‘full’ convolve mode in pytensor. For example:

from pytensor.tensor import conv
import numpy as np

The results are:

[ 5  16  34  52  70  88 106  84  49]
[  5  16  34  52  70  88 106]
[  5  16  34  52  70  88 106  84  49]

It seems conv.causal_conv1d returns a tensor with the same length as the input, which is different from np.convolve. I’m wondering whether I chose the wrong convolve mode when using conv.causal_conv1d.
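For reference, the three numpy modes differ only in which part of the overlap they keep; with a length-7 signal and a length-3 kernel (made-up values, just to show the shapes):

```python
import numpy as np

a = np.arange(7.0)
k = np.array([1.0, 2.0, 3.0])

print(np.convolve(a, k, mode='full').shape)   # (9,)  len(a) + len(k) - 1
print(np.convolve(a, k, mode='same').shape)   # (7,)  len(a)
print(np.convolve(a, k, mode='valid').shape)  # (5,)  len(a) - len(k) + 1
```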

There are several convolution functions in pytensor.tensor.conv, but this module is neither well documented nor actively maintained. You will need to do some investigation to see if any of them suit your purpose. If none do, you can use a scan to implement exactly the behavior you want.

We’re very eager for documentation contributions, though. If you do some research and want to contribute some explanations on how each function works in the form of docstrings or examples, please don’t hesitate to open a PR.

On that note, there’s a PyData Sprint for first-time contributions, so you can come meet the community team over the next 2 days and get some pointers on how to contribute. See here and here for our entries on the schedule.

Thanks, I will try to implement this using scan and find out whether those functions work the same as numpy or scipy. I’m not familiar with scan, but I will try.