Convolution with scipy and shared() gives weird results

Hello everyone.

While trying to do 1D convolution in pymc3 (see thread), I tried to use signal.convolve from scipy. If signal.convolve is called with the raw data and a symbolic expression as inputs, it raises an error saying the volume and kernel should have the same dimensionality. But if the data is wrapped in a theano shared variable, the model compiles and we are able to sample.
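For reference, a minimal sketch of the failing call without shared() (the comment about np.asarray coercion is my guess at the mechanism, not something I have verified):

import numpy as np
import pymc3 as pm
from scipy import signal

xtest = np.linspace(0, 10, 100)

with pm.Model():
    A = pm.Normal('A', mu=2.0, sigma=1.0)
    filt_sym = A * xtest  # a theano expression, not a numpy array
    # scipy presumably coerces filt_sym via np.asarray(), which yields
    # a 0-d object array, hence:
    # ValueError: volume and kernel should have the same dimensionality
    signal.convolve(xtest, filt_sym, mode='same')

With the data wrapped in shared(), on the other hand, sampling runs: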

from scipy import signal
from theano import shared
import numpy as np
import pymc3 as pm

# synthetic "observed" data: xtest convolved with the true filter 2*xtest
xtest = np.linspace(0, 10, 100)
filt = 2.0 * xtest
conv = signal.convolve(xtest, filt, mode='same')

with pm.Model() as model:
    xtestS = shared(xtest)
    A = pm.Normal('A', mu=2.0, sigma=1.0)
    filtt = A * xtestS
    convol = signal.convolve(xtestS, filtt, mode='same')

    likelihood = pm.Normal('conv', mu=convol, observed=conv)

    trace = pm.sample(1000)

But the posterior for the parameter A is way off: the true value used to generate the data is A = 2, yet the summary gives

| | mean | sd | mc_error | hpd_2.5 | hpd_97.5 | n_eff | Rhat |
|---|---|---|---|---|---|---|---|
| A | 66.24263 | 14.321884 | 1.430495 | 40.510827 | 78.62074 | 10.183963 | 1.007146 |

I am curious whether anyone has an idea about what is going on here. My initial guess is that scipy's convolution does not understand the structure of the shared variable and only performs the convolution on a single element of the array, or something to that effect.
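One way to probe that guess (a hypothetical diagnostic, not something I have run) would be to inspect what signal.convolve actually returned inside the model block:

# If xtestS was coerced to a 0-d object array by np.asarray(), convol
# will be a numpy object array (or a single theano expression) rather
# than a length-100 symbolic vector.
print(type(convol))
print(getattr(convol, 'ndim', None), getattr(convol, 'dtype', None))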