Errors when calling a function in pymc3

I’m using pymc3 with a normal likelihood; my parameters are mu1, sigma1, and a. When I use pm.Normal as my likelihood, I need to set mu = Ft(amp, sigma1, mu1, a1). The purpose of Ft is to convolve the array ft1 with the array nor_resp. I defined it myself, like this:

def Ft(A, sigma2, mu2, a):
    t_min = 0
    t_max = (t4_s.shape[0] - nor_resp.shape[0] + 1) * (t4_s[1] - t4_s[0])
    num = t4_s.shape[0] - nor_resp.shape[0] + 1
    t = np.linspace(t_min, t_max, num)
    tau = 1 / sigma2**2
    ft1 = A * np.sqrt(tau / (2 * np.pi)) * np.exp(-tau / 2 * (t + a - mu2)**2)
    ft = signal.convolve(ft1, nor_resp, mode='full', method='auto')

    return ft
#%%
with pm.Model() as final_model:
    amp = pm.Uniform('amp', lower=-1.0, upper=0)
    mu1 = pm.Uniform('mu1', lower=-50, upper=50)
    sigma1 = pm.Uniform('sigma1', lower=0, upper=50)
    a1 = pm.Uniform('a1', lower=-10, upper=10)
    y_observed = pm.Normal(
        "y_observed",
        mu=Ft(amp, sigma1, mu1, a1),
        sigma=noise_sig,
        observed=v4_s,
    )
    output = pm.Deterministic('output', Ft(amp, sigma1, mu1, a1))
    prior = pm.sample_prior_predictive()
    posterior_f = pm.sample(draws=Samples, target_accept=0.9, chains=4, cores=1)
    posterior_f = pm.sample_posterior_predictive(posterior_f)
    az.plot_trace(posterior_f, var_names=['amp', 'mu1', 'sigma1', 'a1'])
    only_background = az.summary(posterior_f, var_names=['amp', 'mu1', 'sigma1', 'a1'])

But I got an error; it seems to come from the convolution in Ft:

signal.convolve(ft1,nor_resp, mode='full',method='auto')

When using convolve, I need to give it two arrays, but when I call Ft inside the model it seems I am passing it a parameter of type TensorVariable. This might be causing the problem.
I don’t know what to do; any advice is helpful. Thanks.

In general, you cannot use numpy or scipy functions together with pytensor tensors. This is especially true for scipy, which is often calling compiled code under the hood. These libraries don’t know what pytensor symbolic tensors are, and can’t do anything with them.

You need to use an appropriate function from pytensor, or write your own if it doesn’t exist. See here, here, and here for examples/discussion.

Thanks. In my problem, ft1 and nor_resp are two time series: ft1 can be written as ft1(t; sigma, tau, a), and nor_resp(t) is just a function of t. Can I use Theano to accomplish the correct convolution? Sorry, I have little experience in this area.

You can represent arbitrary recursive computation using pytensor.scan. See the docs here for details. Here is a simple rolling average of noisy data, computed using a 1d convolution with a rectangular window function:

import pytensor
import pytensor.tensor as pt

def rolling_average(data, window_size):
    def _conv_1d(start, stop, data, kernel):
        # dot the kernel with one sliding window of the data
        return (data[start:stop] * kernel).sum()

    kernel = pt.full((window_size,), 1 / window_size)
    rolling_average, _ = pytensor.scan(
        _conv_1d,
        sequences=[{'input': pt.arange(data.shape[0]), 'taps': [-window_size, 0]}],
        non_sequences=[data, kernel],
    )
    return rolling_average

Data and result:

*(plot: noisy data with its rolling average)*

For your purposes you might need a function to compute the kernel, but after that things should be roughly similar.

Scan is very general and very powerful; it’s a must-learn if you’re working with time series. Nevertheless there are also some convolution functions still kicking around in the code base from Theano’s neural net roots:

import pytensor.tensor as pt

# conv is not imported automatically by default
from pytensor.tensor import conv

kernel = pt.full((window_size,), 1 / window_size)
smooth2 = conv.causal_conv1d(
    data[None, None, :],
    kernel[None, None, :],
    filter_shape=(1, 1, 10),
    input_shape=(1, 1, 100),
).eval()
plt.plot(data)
plt.plot(smooth2.squeeze())

Output:
*(plot: data with the smoothed series)*

This is equivalent to np.convolve(data, kernel, mode='full')[:data.shape[0]]. Note that everything needs to be written with 3d tensors now, because these functions were written with batch dims and stacks of convolution filters in mind (think CNNs over sentences).
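The claimed equivalence is easy to check in plain numpy (using a rectangular kernel and random data as stand-ins):

```python
import numpy as np

window_size = 10
rng = np.random.default_rng(0)
data = rng.normal(size=100)
kernel = np.full(window_size, 1 / window_size)

# 'full' convolution truncated to the input length: entry t mixes only
# data[<= t], i.e. a causal rolling average
smooth = np.convolve(data, kernel, mode='full')[:data.shape[0]]

# once the window is fully inside the data, each entry is a plain mean
first_full = smooth[window_size - 1]  # equals data[:window_size].mean()
```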

It would be interesting to benchmark the old “optimized” C code (used in the second example) against the modern JAX backend (which the first example could be compiled to), but I leave this as an exercise for the interested reader.


Thanks. I’m using v3, so it seems I should use Theano to achieve this. What should I do?