Hi,
I am currently working on what is becoming quite a convoluted likelihood function, which involves one or two inversions of stacked 3D matrices (in NumPy terms) and a fair amount of matrix manipulation, resulting in some long NUTS-based sampling times. As such, I am looking for opportunities to make my code faster and more efficient.
I was looking through the forum and this thread (Excessively slow evaluation?) struck me as something that might be useful; it suggests compiling functions outside the model context.
Here is a toy example:
import numpy as np
import pymc as pm

def my_func(k_vals, eta_vals, w_shared):
    det_Z_pt = 1 / (((k_vals * w_shared) ** 2) * (1 + eta_vals ** 2))
    return det_Z_pt

with pm.Model() as model:
    # Define the random variables
    k_vals = pm.Normal("k_vals", mu=1000, sigma=100, shape=10)  # Example shape 10
    eta_vals = pm.Normal("eta_vals", mu=0.50, sigma=0.1, shape=10)  # Example shape 10
    w_shared = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])  # Example shared variable

    # Use PyTensor operations within the model
    det_Z = my_func(k_vals, eta_vals, w_shared)

    # Track det_Z as a deterministic (to be used in the likelihood)
    likelihood = pm.Deterministic("det_Z", det_Z)
    # Then compare to observation etc....
In this code, my calculated determinant (det_Z_pt) varies with frequency (w_shared).
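As a sanity check, the uncompiled graph evaluates fine; if I have understood the API correctly, pm.draw can pull concrete samples straight from the symbolic expression:

# Sanity check (my understanding of pm.draw from the docs):
# sample the upstream random variables and return a concrete
# array for det_Z (shape (10,) here)
print(pm.draw(det_Z))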
You will see that the function relies on random variables (priors), which feed into my likelihood function. In the example shown, there is no compiling of functions. When I tried to compile it, I received errors saying it expected arrays rather than variables. I may, however, be doing that wrong…
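For concreteness, this is roughly what I attempted. The input names (k_in, eta_in, w_in) are mine, and I am guessing that pytensor.function is the right entry point for compiling:

import pytensor
import pytensor.tensor as pt

# Plain symbolic inputs, created outside any model context
k_in = pt.vector("k_in")
eta_in = pt.vector("eta_in")
w_in = pt.vector("w_in")

# Compile my_func once into a reusable callable
compiled_func = pytensor.function([k_in, eta_in, w_in], my_func(k_in, eta_in, w_in))

# Calling it with concrete arrays works fine...
compiled_func(np.ones(10), np.full(10, 0.5), np.arange(1.0, 11.0))

# ...but passing the model's random variables instead, e.g.
# compiled_func(k_vals, eta_vals, w_shared), is where I hit the
# "expected an array, got a Variable" style of error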
To that end, if someone could help with or clarify the following, that would be great.
- Is it possible to compile a function that might rely on a mix of prior random variables?
- If so, any tips for incorporating priors, and also shared data etc., would be great (I have sketched what I mean by the shared-data part after this list).
- In the case shown, is it necessary or beneficial to compile my_func, or to define it outside the model context? Chiefly in the sense that it may improve speed.
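For the shared-data part of the second question, this is the pattern I have in mind, using pytensor.shared (a sketch based on my reading of the docs; the shared_w and model_shared names are just mine):

from pytensor import shared

# Wrap the frequency grid in a shared variable so its values can be
# swapped later with shared_w.set_value(...) without rebuilding the graph
shared_w = shared(np.arange(1.0, 11.0), name="w_shared")

with pm.Model() as model_shared:
    k_vals = pm.Normal("k_vals", mu=1000, sigma=100, shape=10)
    eta_vals = pm.Normal("eta_vals", mu=0.50, sigma=0.1, shape=10)
    det_Z = pm.Deterministic("det_Z", my_func(k_vals, eta_vals, shared_w))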
Hope this is sufficiently clear. I am not the most advanced of users!