Thanks, but I have run into a new problem. In numpy, np.convolve supports three modes; I use 'full', which returns an array of length len(A) + len(B) - 1. But when I use pytensor.tensor.conv, the returned tensor has a different length. I want to reproduce numpy's 'full' convolve mode in pytensor. For example:
from pytensor.tensor import conv
import numpy as np

A = np.array([1, 2, 3, 4, 5, 6, 7])
B = np.array([1, 2, 3, 4, 5, 6, 7, 0, 0])
C = np.array([5, 6, 7])

D = np.convolve(A, C)
E = conv.causal_conv1d(A[None, None, :], C[None, None, :],
                       filter_shape=(None, None, 3), input_shape=(None, None, 9)).squeeze()
F = conv.causal_conv1d(B[None, None, :], C[None, None, :],
                       filter_shape=(None, None, 3)).squeeze()

print(D)
print(E.eval())
print(F.eval())
The results are:
[ 5 16 34 52 70 88 106 84 49]
[ 5 16 34 52 70 88 106]
[ 5 16 34 52 70 88 106 84 49]
It seems conv.causal_conv1d returns a tensor with the same length as the input signal, which is different from np.convolve in 'full' mode. I'm wondering whether I have chosen the wrong convolve mode when using conv.causal_conv1d.
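If I understand the F result correctly, right-padding the signal with len(kernel) - 1 zeros before calling causal_conv1d should reproduce the 'full' output, something like this (full_conv1d is just a helper name I made up, not a pytensor function):

import numpy as np
from pytensor.tensor import conv

def full_conv1d(signal, kernel):
    """Emulate np.convolve(signal, kernel, mode='full') via causal_conv1d."""
    # Right-pad the signal with len(kernel) - 1 zeros so that the causal
    # (same-length-as-input) output covers the full overlap range.
    padded = np.concatenate([signal, np.zeros(len(kernel) - 1, dtype=signal.dtype)])
    out = conv.causal_conv1d(padded[None, None, :], kernel[None, None, :],
                             filter_shape=(None, None, len(kernel)))
    return out.squeeze()

A = np.array([1, 2, 3, 4, 5, 6, 7])
C = np.array([5, 6, 7])
print(np.convolve(A, C))         # [  5  16  34  52  70  88 106  84  49]
print(full_conv1d(A, C).eval())  # same values and same length

Is this the intended way to get 'full' behaviour, or is there a built-in mode I'm missing?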