Thank you for all the help, this was very useful. I am now trying to generalize the function to the case where the extra parameters can be vectors or matrices, for example:
```python
import numpy as np
import aesara
import aesara.tensor as at
import pymc as pm

# `Integrate` is the custom Op defined earlier in this thread

y_obs = 8.3

start = aesara.shared(1.)
stop = aesara.shared(2.)

with pm.Model() as basic_model:
    a = pm.Uniform('a', 1.5, 3.5)
    b = pm.Uniform('b', 4., 6., shape=(3,))

    # Define the function to integrate in plain aesara
    t = at.dscalar('t')
    t.tag.test_value = np.zeros(())
    a_ = at.dscalar('a_')
    a_.tag.test_value = np.ones(()) * 2.
    b_ = at.dvector('b_')
    b_.tag.test_value = np.ones((3,)) * 5.
    func = t**a_ + b_.sum()

    integrate = Integrate(
        func,   # function we're integrating
        t,      # variable we're integrating over
        a_, b_  # other variables
    )

    # Now we plug in the values from the model.
    # The `a_` and `b_` above correspond to the `a` and `b` here.
    mu = integrate(start, stop, a, b)
    y = pm.Normal('y', mu=mu, sigma=0.4, observed=y_obs)

    trace = pm.sample(
        1500,
        tune=500,
        cores=2,
        chains=2,
        return_inferencedata=True
    )
```
When I run this with the Integrate Op as defined earlier in this thread, I get the following error:

```
ValueError: Integrate.grad returned a term with 0 dimensions, but 1 are required.
```
I assume we need to specify somewhere that the gradient with respect to `b` is now 1-d, but where should that happen? Thanks!
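For what it's worth, here is a quick sanity check of the shapes involved, in plain NumPy/SciPy and independent of the Op: since `f(t) = t**a + b.sum()`, the derivative of the integral with respect to each element `b_i` is `∫ ∂f/∂b_i dt = stop - start`, so the gradient the Op has to return for `b` is a length-3 vector, not a scalar (which is presumably what triggers the error above).

```python
import numpy as np
from scipy.integrate import quad

start, stop = 1.0, 2.0
a = 2.0
b = np.ones(3) * 5.0

# Forward pass: integrate f(t) = t**a + b.sum() over [start, stop]
val, _ = quad(lambda t: t**a + b.sum(), start, stop)

# Analytic gradient w.r.t. each b_i: df/db_i = 1, so the integral is stop - start
grad_b = np.full_like(b, stop - start)  # grad_b has shape (3,), matching b
print(val, grad_b)
```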