The Op for true_div, real, sqrt and others does not have a C implementation

I am using a very complex function as the likelihood, which makes extensive use of true_div, real, sqrt, etc. I am on a Linux machine in a dedicated conda environment, running PyMC 5 with JAX support, everything installed with conda. What am I missing?

Try to update to the latest version of PyMC. Which one are you using?

I am using version 5.0.2

We fixed something like that recently, but it should already be in 5.0.2. Do you have some code I could run to replicate it? The smaller the better.

Thank you, Ricardo. I am working on simplifying the likelihood function so that it still generates all the warnings, but it is not a straightforward task. Some combinations of functions seem to trigger the errors, some do not. I will update in a few days.

Thank you again

Oh that’s fine, just share it if you can without any of the other model stuff that isn’t needed for the warning.

I am trying to figure out which part of the code produces the warnings. Could it be the sampler choice? I need SMC because it can handle degenerate cases and works well with complex numbers (the other samplers do not accept likelihood mean functions that include complex algebra). In any case, I will provide the example code ASAP.

Calling model.logp() or model.dlogp() should be enough to trigger the warnings.

Here is the minimal model that I found raises the Op warnings. Surprisingly, model.logp does not raise any warning; only sampling does.

from numpy import pi, real, sqrt, sin, cos
import numpy as np
import time
import pytensor.tensor as TT
import pymc as pm

def kx(ki,th,ph):
    return ki*sin(th)*cos(ph)

def ky(ki,th,ph):
    return ki*sin(th)*sin(ph)

def kz(ki,th,ep):
    return ki*sqrt(ep-sin(th)**2)

def kr(ki,th,ph):
    return np.sqrt(kx(ki,th,ph)**2+ky(ki,th,ph)**2)

def c_is(ki,th,ph,ths,phs):
    return (kx(ki,th,ph)*kx(ki,ths,phs)+ky(ki,th,ph)*ky(ki,ths,phs))/(kr(ki,th,ph)*kr(ki,ths,phs))

def beta0_v(ki,th,ep1):
    ph = 0
    a = ep1*kz(ki,th,1)-kz(ki,th,ep1)
    b = ep1*kz(ki,th,1)+kz(ki,th,ep1)
    return a/b

def beta1_v(ki,th,ph,ths,phs,ep0,ep1):
    return -2*1j*kz(ki,th,ep0)*(ki**2-ep1*ki**2)/((ep1*kz(ki,ths,ep0)+kz(ki,ths,ep1))*(ep1*kz(ki,th,ep0)+kz(ki,th,ep1)))*\
        (ep1*kr(ki,th,ph)*kr(ki,ths,phs)/ki**2-kz(ki,th,ep1)*kz(ki,th,ep1)*c_is(ki,th,ph,ths,phs)/ki**2)

def w(s,l,k1,k2):
    return s**2*l**2/(4*np.pi)*np.exp(-0.25*l**2*(k1**2+k2**2))

def T0(ki,th,ep1,za,s):
    ph = 0
    cuenta = 1 + 2*np.cos(2*kz(ki,th,1)*za)*beta0_v(ki,th,ep1)+np.abs(beta0_v(ki,th,ep1))**2
    return cuenta

def T1(rx,ry,ki,th,ep1,s,l):
    ph, phs = 0, 0
    ep0 = 1
    beta1 = beta1_v(ki,th,ph,th,phs,ep0,ep1)
    cuenta = np.abs(beta1)**2*w(s,l,rx-kx(ki,th,ph),ry-ky(ki,th,ph))
    return cuenta

def generadorS0_1capa_nuevo_simplificado2(thi,ep1,s,l,ZA,Wt,J,RX_,RY_):
    landa = 0.19
    k0 = 2*np.pi/landa
    I_0 = T0(k0,thi,ep1,ZA,s)
    aux_1 = Wt*T1(RX_,RY_,k0,thi,ep1,s,l)
    I_1 = J*aux_1.sum()            
    I_t = I_0 + I_1
    s0s = 10*np.log10(I_t)
    return s0s

def patron_simplificado2(ep,s,l):
    landa = 0.19
    k0 = 2*np.pi/landa
    phi = 0
    ph = phi
    s0 = []
    for th_ in thi:
      s0.append(generadorS0_1capa_nuevo_simplificado2(th_,ep,s,l,ZA,Wt,J,RX_,RY_))
    nn = len(thi)
    a = TT.zeros(nn)
    for i in range(nn):
        a = TT.set_subtensor(a[i], s0[i])
    return a

def int_gauss(n,k_lim):
    beta = np.zeros(n,dtype=float)
    for k in range(1,n+1):
        beta[k-1] = 0.5/np.sqrt(1-(2*(k))**(-2))
    m = n+1
    T_low = np.zeros((m,m))
    T_up = np.zeros((m,m))
    T = np.zeros((m,m))
    # define T_low
    for i in range(0,m):
        for j in range(0,m):
            if i==j+1:
                T_low[i,j]=beta[i-1]
    # define T_up
    for i in range(0,m):
        for j in range(0,m):
            if j==i+1:
                T_up[i,j]=beta[i]

    T = T_low + T_up        
    d_,V = np.linalg.eig(T)
    D = np.zeros((m,m))

    for i in range(0,m):
        for j in range(0,m):
            if i==j:
                D[i,j]= k_lim*d_[i]

    W = (2*V[0,:]**2)
    Wt = np.kron(W,W)

    r = k_lim*d_
    X_IG,Y_IG = np.meshgrid(r,r)

    return {'Wt': Wt, 'X_IG': X_IG, 'Y_IG': Y_IG, 'm_gauss': m}

# simulated input set

epsilon = 7
ssup = 0.006 #0.006
lsup = 0.1
ZA = 3.0
discretizacion = 80
thi = np.linspace(62*pi/180,82*pi/180, discretizacion)
entrada = [epsilon,ssup,lsup]
landa = 0.19
k0 = 2*np.pi/landa
n_gauss = 20
k_lim = 1.5*k0
gauss = int_gauss(n_gauss,k_lim)   # compute the Gauss quadrature grid once
m_gauss = gauss['m_gauss']
Wt = gauss['Wt']
J = k_lim**2
RX = gauss['X_IG']
RY = gauss['Y_IG']
RX_ = np.reshape(RX,(1,m_gauss**2))
RY_ = np.reshape(RY,(1,m_gauss**2))
s0_1 = patron_simplificado2(epsilon,ssup,lsup)
noise = 0.5 * np.random.randn(len(thi))

pert = s0_1.eval() + noise
t0 = time.time()
medidos = pert
errores = 1
samples = 100
RANDOM_SEED = 58

ranges = {}
ranges['ep'] = 4, 40
ranges['s'] = 0.001, 0.01
ranges['l'] = 0.06, 0.15

with pm.Model() as model:
    ep = pm.Uniform('ep', ranges['ep'][0], ranges['ep'][1])
    s = pm.Uniform('s', ranges['s'][0], ranges['s'][1])
    l = pm.Uniform('l', ranges['l'][0], ranges['l'][1])

def f(ep=ep, s=s, l=l):
    return patron_simplificado2(ep, s, l)

with model:
    function_pm = pm.Deterministic('s0f', f())
    observations = pm.Normal( "obs",  mu=real(function_pm), sigma=errores, observed=medidos)

with model:
    model.logp()

with model:
    trace = pm.sample_smc(samples, cores=4)

I am sorry for the long model; it is a physically based scattering model and is the simplest one that produces the warnings.

Sorry, model.compile_logp() or model.compile_dlogp() is what should trigger the warnings.
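For example, with the model above, something along these lines (a sketch; the compiled functions take a point dict such as the one returned by model.initial_point()) should surface the warnings without running the sampler:

logp_fn = model.compile_logp()     # compiling may already emit the Op warnings
dlogp_fn = model.compile_dlogp()   # same for the gradient
point = model.initial_point()      # dict of starting values for the free variables
print(logp_fn(point), dlogp_fn(point))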

I just tried it. Compiling only raised a warning for real, while the actual sampling raised warnings for sqrt, real, true_div and the others.

I suggest you try installing PyMC 5.1.1; we had a spurious warning like this that got fixed.

I ran your example and I only get the warning for real, which indeed doesn’t have a C implementation.
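If it helps to isolate it, a standalone sketch along these lines (the warning comes from the elemwise fusion rewrite, so whether it fires can depend on the compilation mode and backend) may reproduce the same message outside of PyMC:

import numpy as np
import pytensor
import pytensor.tensor as pt

x = pt.zvector("x")               # complex-valued input
y = 2.0 * pt.real(pt.exp(x))      # real has no C implementation
f = pytensor.function([x], y)     # compiling may emit the UserWarning
print(f(np.array([1 + 2j, 3 + 4j])))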

Thank you again! I installed PyMC 5.1.1 and, during compilation, got only the warning for real, as you said. However, there is this error during sampling:

Exception in thread Thread-6 (_handle_results):
Traceback (most recent call last):
  File "/home/tele/miniconda3/envs/pymc5/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/home/tele/miniconda3/envs/pymc5/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/tele/miniconda3/envs/pymc5/lib/python3.11/multiprocessing/pool.py", line 579, in _handle_results
    task = get()
           ^^^^^
  File "/home/tele/miniconda3/envs/pymc5/lib/python3.11/multiprocessing/connection.py", line 250, in recv
    return _ForkingPickler.loads(buf.getbuffer())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Timeout.__init__() missing 1 required positional argument: 'lock_file'

What is your operating system? We usually have multiprocessing issues on Windows.

How did you install PyMC?

I am using the latest Ubuntu LTS. I installed PyMC from conda-forge in a separate environment. If you need any other detail, please let me know.

In the meantime, I am running everything with PyMC3 and Theano, but I am very interested in migrating everything to PyMC 5.

Just for sanity, can you check if it works on Python 3.10 instead?

I am sorry, it took me a while to make it work with Python 3.10 (conda install python=3.10 raised a lot of "Found conflicts! Looking for incompatible packages" messages, too many to copy here).

Essentially it raises the same error:

Initializing SMC sampler...
Sampling 4 chains in 4 jobs
/home/fran/miniconda3/envs/PyMC5/lib/python3.11/site-packages/pytensor/tensor/rewriting/elemwise.py:680: UserWarning: Optimization Warning: The Op real does not provide a C implementation. As well as being potentially slow, this also disables loop fusion.
  warn(
/home/fran/miniconda3/envs/PyMC5/lib/python3.11/site-packages/pytensor/tensor/rewriting/elemwise.py:680: UserWarning: Optimization Warning: The Op real does not provide a C implementation. As well as being potentially slow, this also disables loop fusion.
  warn(
/home/fran/miniconda3/envs/PyMC5/lib/python3.11/site-packages/pytensor/tensor/rewriting/elemwise.py:680: UserWarning: Optimization Warning: The Op real does not provide a C implementation. As well as being potentially slow, this also disables loop fusion.
  warn(
/home/fran/miniconda3/envs/PyMC5/lib/python3.11/site-packages/pytensor/tensor/rewriting/elemwise.py:680: UserWarning: Optimization Warning: The Op real does not provide a C implementation. As well as being potentially slow, this also disables loop fusion.
  warn(
Exception in thread Thread-3 (_handle_results):
Traceback (most recent call last):
  File "/home/fran/miniconda3/envs/PyMC5/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/home/fran/miniconda3/envs/PyMC5/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/fran/miniconda3/envs/PyMC5/lib/python3.11/multiprocessing/pool.py", line 579, in _handle_results
    task = get()
           ^^^^^
  File "/home/fran/miniconda3/envs/PyMC5/lib/python3.11/multiprocessing/connection.py", line 250, in recv
    return _ForkingPickler.loads(buf.getbuffer())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Timeout.__init__() missing 1 required positional argument: 'lock_file'

Your traceback shows Python 3.11 is being used.

Also make sure to use the conda-forge channel.
Should be conda install -c conda-forge pymc

You are right of course! I am following these steps:

  1. Create the environment and install PyMC with:
conda create -c conda-forge -n pymc_env "pymc>=4"
conda activate pymc_env
  2. Then install Python 3.10 using:

conda install python=3.10

Is this OK?

I think you want to specify the python version first.

conda create -n pymc_env python=3.10
conda activate pymc_env
conda install -c conda-forge "pymc>=5"
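Once the environment is created, a quick check from inside it (a small sketch) confirms which interpreter and PyMC version are actually being used:

import sys
import pymc
print(sys.version)       # should report 3.10.x
print(pymc.__version__)  # should report 5.1.1 or later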