Incorporating numerical approximations into a model

I am trying to constrain the parameters of a model, defined by the class model below, with just simple sampling for now. However, the model involves Bessel functions of complex order, which have no analytic solution, so numerical approximations are required. I am running into the error TypeError: Variables do not support boolean operations. Since the PyTensor TensorVariable is symbolic, it cannot be evaluated in a boolean context. I was thinking it could somehow be instantiated with concrete values when the actual sampling takes place. Alternatively, if you believe there is a better approach, let me know.

import copy

import numpy as np
from scipy.special import gamma, jv


def cbesselj2(nu, x):  # Arguments are complex arrays
    kMax = 100
    tol = 1e-2
    x = np.array(x).reshape(-1, 1)
    Jnu = np.zeros_like(x, dtype=complex)
    for m in range(len(x)):
        if np.isreal(nu):
            Jnu[m] = jv(nu, x[m])  # scipy handles real orders directly
        elif np.abs(x[m]) < 30:  # abs(), since x may be complex
            # Small-argument power series for complex order
            Jnu[m] = 0.0 + 0j
            for k in range(kMax + 1):
                JnuLast = copy.deepcopy(Jnu[m])
                Jnu[m] += (-0.25 * x[m] * x[m]) ** k / (
                    gamma(k + 1) * gamma(nu + k + 1)
                )
                Rerr = Ierr = None
                if np.real(Jnu[m]) != 0:
                    Rerr = abs(
                        (np.real(Jnu[m]) - np.real(JnuLast)) / np.real(Jnu[m])
                    )
                if np.imag(Jnu[m]) != 0:
                    Ierr = abs(
                        (np.imag(Jnu[m]) - np.imag(JnuLast)) / np.imag(Jnu[m])
                    )
                if Rerr is not None and Ierr is not None:
                    if np.max(Rerr) <= tol and np.max(Ierr) <= tol:
                        break
                elif Rerr is not None and np.max(Rerr) <= tol:
                    break
            Jnu[m] = Jnu[m] * (0.5 * x[m]) ** nu
            if k == kMax:
                print(
                    "Algorithm does not converge in the calculation of the "
                    "Bessel function: maximum iteration number reached!"
                )
        else:
            # Large-argument asymptotic form
            Jnu[m] = np.sqrt(2 / (np.pi * x[m])) * np.cos(
                x[m] - nu * (np.pi / 2) - (np.pi / 4)
            )
    return Jnu.flatten()

class model(object):
    def __init__(self, Vai, h, epsilon, interval):
        self.Vai = Vai
        self.h = h
        # Note: the epsilon argument is overwritten by this grid
        self.epsilon = np.arange(interval, 8 + interval, interval) * 2 * np.pi

    def x0(self):
        return 2 * self.h / self.Vai

    def r_bayes(self):
        # Get the x array; during MCMC this is a 2D array, each sample of
        # which is an array of complex numbers
        x0 = self.x0()
        # Complex-order Bessel approximation; both arguments are complex arrays
        num2 = cbesselj2(1j * x0 * self.epsilon, x0)
        return num2

import pymc as pm

with pm.Model() as bayes_model:
    Vai = pm.Normal("Vai")
    h = pm.Normal("h")
    epsilon = pm.Normal("epsilon")
    model_lysak = model(Vai, h, epsilon, 0.01)
    outcome_azi = pm.Normal(
        "outcome_azi",
        mu=model_lysak.r_bayes(),
        observed=data,  # observed is just to see if it will compile; relies on actual data
    )

You can’t mix numpy/scipy with pytensor (the computational backend for PyMC) – this is the source of your error.

To fix it, you have several options:

  1. Wrap your optimizer into a pytensor Op, see here for example. The cost of this will be that you can’t use NUTS, because your optimizer Op won’t have gradients.

  2. You could also try to re-write your optimizer using only pytensor functions, see here for an example of multivariate Newton’s method with line search.

  3. Final option, write your model in JAX and use jaxopt, then wrap the jax model into pytensor: see here and here for examples doing this.

The jax.scipy.gamma function does not accept complex values, so option 3 doesn't work. I will try suggestion 2, as it seems the most robust but also the most complicated.

Complex numbers are also going to be hard in base pytensor. You will have to treat them as 2d vectors, because you can't get complex gradients by default; see here for some discussion. JAX does handle them, though, so it's surprising to me that their gamma function can't. You might be best off with option 1 as a first pass.
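For illustration, treating complex values as (real, imaginary) pairs just means writing the component arithmetic out by hand. A hypothetical helper for multiplication:

```python
def cmul(a, b):
    """Complex multiply on (real, imag) tuples -- the representation a
    pytensor graph would need if complex dtypes are avoided."""
    ar, ai = a
    br, bi = b
    # (ar + i*ai)(br + i*bi) = (ar*br - ai*bi) + i*(ar*bi + ai*br)
    return (ar * br - ai * bi, ar * bi + ai * br)
```

The same pattern would apply to every complex operation in the series (powers, division, the gamma terms), which is what makes this route laborious.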

Hey Jesse,

The number returned in mu will be real, but every constituent calculation along the way is complex. Just to reiterate: all inputs are real and the outputs (that I am looking at) are real, but the intermediate calculations are complex.

I will attempt to get the black-box model working without the gradient first and then look into the gradient more. Is the gradient taken only of the resulting value, or of all the intermediate values too? I will report back if I have difficulties.

Okay, I got it working, but it's extremely slow: it hasn't finished one iteration in over 2 hours. It keeps stalling at -6.80e14, as seen with the updated script. I'm wondering if I should normalize the model and data arrays as well.

alfven_data.csv (5.8 KB)

Last update: by dividing the model and data by their maximum and normalizing the log-likelihood, it seems to work now. About an iteration a minute, which isn't bad. I've updated the Python script in case anyone else needs to take a look. (8.3 KB)
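A rough sketch of the kind of scaling described above (the helper name and the Gaussian form are assumptions, not the exact attached script):

```python
import numpy as np


def normalized_loglike(pred, data, sigma=1.0):
    """Gaussian log-likelihood after scaling both the model output and the
    data by the data's maximum magnitude (hypothetical helper)."""
    scale = np.max(np.abs(data))
    resid = (np.asarray(data) - np.asarray(pred)) / scale
    return -0.5 * np.sum((resid / sigma) ** 2)
```

Keeping the residuals of order one avoids the enormous log-likelihood magnitudes (like the -6.80e14 above) that can stall the sampler.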

You could try JIT-compiling your cbesselj2 function with numba.njit to speed things up. I assume that's what you're calling inside the black-box Op.