Suggested workaround for blackbox likelihood example?

Hi there,

Like several others on Discourse in the last few days, I am trying and failing to implement a black box likelihood following the example here. I have implemented an analogous notebook that removes any use of Cython. When creating a DensityDist and passing random variables to the “observed” argument, I get the same error that many other users have reported, something of the form:

MissingInputError: Input 0 of the graph (indices start from 0), 
used to compute Elemwise{exp,no_inplace}(a_log__), 
was not provided and not given a value. Use the Theano flag 
exception_verbosity='high', for more information on this error.
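
For reference, the pattern that triggers it (from the notebook; the full script is further down in this thread) is essentially:

theta = tt.as_tensor_variable([m, c])  # m, c are the model's random variables
pm.DensityDist("likelihood", lambda v: logl(v), observed={"v": theta})

where logl is the custom likelihood Op.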

The answers (particularly from @OriolAbril, who seems to be solving this problem - thank you!) seem to indicate that this used to work but now doesn’t due to an update in ArviZ; other answers on Stack Exchange point to it working on Python <= 3.7. I have a somewhat urgent need for this functionality, so I am hoping to get one of the following:

  1. A set of python, theano, arviz, and pymc3 versions that will allow the notebook (minus Cython) to function properly
  2. A set of installation instructions for the pull request that nominally fixes the problem with the new givens argument. At the moment, I naively run pip install . in the cloned directory, but when I run my code, I receive the error
('Compilation failed (return status=1): /home/daniel/.theano/compiledir_Linux-4.15--generic-x86_64-with-glibc2.10-x86_64-3.8.5-64/tmpv87v705x/mod.cpp: In member function ‘int {anonymous}::__struct_compiled_op_m54e9b53fd5a613c92589a750a7a248ba3380081b3d680b48ff11fd385ae8bc6a::run()’:. /home/daniel/.theano/compiledir_Linux-4.15--generic-x86_64-with-glibc2.10-x86_64-3.8.5-64/tmpv87v705x/mod.cpp:459:60: error: expected primary-expression before ‘{’ token.                              #pragma omp parallel for if(n>={int(config.openmp_elemwise_minsize)}).                                                             ^. At global scope:. cc1plus: warning: unrecognized command line option ‘-Wno-c++11-narrowing’. ', 'FunctionGraph(Elemwise{mul,no_inplace}(<TensorType(float64, (True,))>, TensorConstant{[ 0.000000..06781e-01]}))')

which I don’t understand, but hopefully someone here does. This is on python 3.8.5, theano 1.1.0, and pymc3 version 3.11.0 by way of the pull request. Please let me know if other information would be useful.

Thank you, and I can’t wait until the new functionality is in the stable release!
-Daniel

Try updating to latest arviz and pymc3 and use idata_kwargs={"density_dist_obs": False} in pm.sample.

It looks like you already have the latest pymc3; you may even have the latest ArviZ.
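
You can double check the installed versions with:

import arviz as az
import pymc3 as pm

print("arviz", az.__version__, "pymc3", pm.__version__)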

Thanks, I’ll try that, though I should mention that this error is occurring before the call to pm.sample(), and instead happens at the creation of the DensityDist.

Then this is a new error I have yet to see. Can you create a minimal example to reproduce it?

Hi again,

Here is the heavily simplified subset of the black box likelihood example I am using, which I believe ought to be functionally similar to the original notebook, with the exception of the new idata_kwargs argument in pm.sample:

import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt
import warnings  # needed by the gradients() helper below


# define your super-complicated model that uses loads of external codes
def my_model(theta, x):
    m, c = theta
    return m*x + c

# define your really-complicated likelihood function that uses loads of external codes
def my_loglike(theta, x, data, sigma):

    model = my_model(theta, x)

    return -(0.5/sigma**2)*np.sum((data - model)**2)
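
# (note: this is just a Gaussian log-likelihood up to an additive constant,
# so its gradient with respect to theta is also available in closed form -
# useful for spot-checking the finite-difference helper defined next)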


def gradients(vals, func, releps=1e-3, abseps=None, mineps=1e-9, reltol=1e-3,
              epsscale=0.5):

    grads = np.zeros(len(vals))

    # maximum number of times the gradient can change sign
    flipflopmax = 10.

    # set steps
    if abseps is None:
        if isinstance(releps, float):
            eps = np.abs(vals)*releps
            eps[eps == 0.] = releps  # if any values are zero set eps to releps
            teps = releps*np.ones(len(vals))
        elif isinstance(releps, (list, np.ndarray)):
            if len(releps) != len(vals):
                raise ValueError("Problem with input relative step sizes")
            eps = np.multiply(np.abs(vals), releps)
            eps[eps == 0.] = np.array(releps)[eps == 0.]
            teps = releps
        else:
            raise RuntimeError("Relative step sizes are not a recognised type!")
    else:
        if isinstance(abseps, float):
            eps = abseps*np.ones(len(vals))
        elif isinstance(abseps, (list, np.ndarray)):
            if len(abseps) != len(vals):
                raise ValueError("Problem with input absolute step sizes")
            eps = np.array(abseps)
        else:
            raise RuntimeError("Absolute step sizes are not a recognised type!")
        teps = eps

    # for each value in vals calculate the gradient
    count = 0
    for i in range(len(vals)):
        # initial parameter diffs
        leps = eps[i]
        cureps = teps[i]

        flipflop = 0

        # get central finite difference
        fvals = np.copy(vals)
        bvals = np.copy(vals)

        # central difference
        fvals[i] += 0.5*leps  # change forwards distance to half eps
        bvals[i] -= 0.5*leps  # change backwards distance to half eps
        cdiff = (func(fvals)-func(bvals))/leps

        while 1:
            fvals[i] -= 0.5*leps  # remove old step
            bvals[i] += 0.5*leps

            # change the difference by a factor of two
            cureps *= epsscale
            if cureps < mineps or flipflop > flipflopmax:
                # if no convergence set flat derivative (TODO: check if there is a better thing to do instead)
                warnings.warn("Derivative calculation did not converge: setting flat derivative.")
                grads[count] = 0.
                break
            leps *= epsscale

            # central difference
            fvals[i] += 0.5*leps  # change forwards distance to half eps
            bvals[i] -= 0.5*leps  # change backwards distance to half eps
            cdiffnew = (func(fvals)-func(bvals))/leps

            if cdiffnew == cdiff:
                grads[count] = cdiff
                break

            # check whether previous diff and current diff are the same within reltol
            rat = (cdiff/cdiffnew)
            if np.isfinite(rat) and rat > 0.:
                # gradient has not changed sign
                if np.abs(1.-rat) < reltol:
                    grads[count] = cdiffnew
                    break
                else:
                    cdiff = cdiffnew
                    continue
            else:
                cdiff = cdiffnew
                flipflop += 1
                continue

        count += 1

    return grads
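
# (optional sanity check, not in the original notebook: the gradient of
# sum(v**2) at [1, 2] should come out close to [2, 4])
#     gradients(np.array([1.0, 2.0]), lambda v: np.sum(v**2))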

# define a theano Op for our likelihood function
class LogLikeWithGrad(tt.Op):

    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, loglike, data, x, sigma):
        
        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

        # initialise the gradient Op (below)
        self.logpgrad = LogLikeGrad(self.likelihood, self.data, self.x, self.sigma)

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        (theta,) = inputs  # this will contain my variables

        # call the log-likelihood function
        logl = self.likelihood(theta, self.x, self.data, self.sigma)

        outputs[0][0] = np.array(logl)  # output the log-likelihood

    def grad(self, inputs, g):
        # the method that calculates the gradients - it actually returns the
        # vector-Jacobian product - g[0] is a vector of parameter values
        (theta,) = inputs  # our parameters
        return [g[0] * self.logpgrad(theta)]


class LogLikeGrad(tt.Op):

    """
    This Op will be called with a vector of values and also return a vector of
    values - the gradients in each dimension.
    """

    itypes = [tt.dvector]
    otypes = [tt.dvector]

    def __init__(self, loglike, data, x, sigma):
        """
        Initialise with various things that the function requires. Below
        are the things that are needed in this particular example.

        Parameters
        ----------
        loglike:
            The log-likelihood (or whatever) function we've defined
        data:
            The "observed" data that our log-likelihood function takes in
        x:
            The dependent variable (aka 'x') that our model requires
        sigma:
            The noise standard deviation that out function requires.
        """

        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

    def perform(self, node, inputs, outputs):
        (theta,) = inputs

        # define version of likelihood function to pass to derivative function
        def lnlike(values):
            return self.likelihood(values, self.x, self.data, self.sigma)

        # calculate gradients
        grads = gradients(theta, lnlike)

        outputs[0][0] = grads


# set up our data
N = 10  # number of data points
sigma = 1.0  # standard deviation of noise
x = np.linspace(0.0, 9.0, N)

mtrue = 0.4  # true gradient
ctrue = 3.0  # true y-intercept

truemodel = my_model([mtrue, ctrue], x)

# make data
np.random.seed(716742)  # set random seed, so the data is reproducible each time
data = sigma * np.random.randn(N) + truemodel

ndraws = 3000  # number of draws from the distribution
nburn = 1000  # number of "burn-in points" (which we'll discard)

# create our Op
logl = LogLikeWithGrad(my_loglike, data, x, sigma)

# use PyMC3 to sample from the log-likelihood
with pm.Model() as opmodel:
    # uniform priors on m and c
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)

    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable([m, c])

    # use a DensityDist
    pm.DensityDist("likelihood", lambda v: logl(v), observed={"v": theta})

    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, idata_kwargs={"density_dist_obs": False})


# plot the traces
_ = pm.traceplot(trace, lines={"m": mtrue, "c": ctrue})

# put the chains in an array (for later!)
samples_pymc3_2 = np.vstack((trace["m"], trace["c"])).T

I run this code in a newish environment with python 3.8.5, arviz 0.11.0, and pymc3 3.11.0 (though I installed pymc3 from source, downloaded from what I believe is your current pull request, mentioned in the OP). I get this error traceback, which seems to be a theano compilation error:

Exception                                 Traceback (most recent call last)
<ipython-input-1-42e88888c7e0> in <module>
    279     pm.DensityDist("likelihood", lambda v: logl(v), observed={"v": theta})
    280 
--> 281     trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, idata_kwargs ={'density_dist_obs':False})
    282 
    283 # plot the traces

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/sampling.py in sample(draws, step, init, n_init, start, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, pickle_backend, **kwargs)
    492             # By default, try to use NUTS
    493             _log.info("Auto-assigning NUTS sampler...")
--> 494             start_, step = init_nuts(
    495                 init=init,
    496                 chains=chains,

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/sampling.py in init_nuts(init, chains, n_init, model, random_seed, progressbar, jitter_max_retries, **kwargs)
   2183         raise ValueError(f"Unknown initializer: {init}.")
   2184 
-> 2185     step = pm.NUTS(potential=potential, model=model, **kwargs)
   2186 
   2187     return start, step

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/step_methods/hmc/nuts.py in __init__(self, vars, max_treedepth, early_max_treedepth, **kwargs)
    166         `pm.sample` to the desired number of tuning steps.
    167         """
--> 168         super().__init__(vars, **kwargs)
    169 
    170         self.max_treedepth = max_treedepth

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/step_methods/hmc/base_hmc.py in __init__(self, vars, scaling, step_scale, is_cov, model, blocked, potential, dtype, Emax, target_accept, gamma, k, t0, adapt_step_size, step_rand, **theano_kwargs)
     86         vars = inputvars(vars)
     87 
---> 88         super().__init__(vars, blocked=blocked, model=model, dtype=dtype, **theano_kwargs)
     89 
     90         self.adapt_step_size = adapt_step_size

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/step_methods/arraystep.py in __init__(self, vars, model, blocked, dtype, logp_dlogp_func, **theano_kwargs)
    252 
    253         if logp_dlogp_func is None:
--> 254             func = model.logp_dlogp_function(vars, dtype=dtype, **theano_kwargs)
    255         else:
    256             func = logp_dlogp_func

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/model.py in logp_dlogp_function(self, grad_vars, tempered, **kwargs)
   1001         varnames = [var.name for var in grad_vars]
   1002         extra_vars = [var for var in self.free_RVs if var.name not in varnames]
-> 1003         return ValueGradFunction(costs, grad_vars, extra_vars, **kwargs)
   1004 
   1005     @property

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/pymc3/model.py in __init__(self, costs, grad_vars, extra_vars, dtype, casting, compute_grads, **kwargs)
    690 
    691         if compute_grads:
--> 692             grad = tt.grad(self._cost_joined, self._vars_joined)
    693             grad.name = "__grad"
    694             outputs = [self._cost_joined, grad]

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in grad(cost, wrt, consider_constant, disconnected_inputs, add_names, known_grads, return_disconnected, null_gradients)
    637             assert g.type.dtype in theano.tensor.float_dtypes
    638 
--> 639     rval = _populate_grad_dict(var_to_app_to_idx, grad_dict, wrt, cost_name)
    640 
    641     for i in range(len(rval)):

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in _populate_grad_dict(var_to_app_to_idx, grad_dict, wrt, cost_name)
   1438         return grad_dict[var]
   1439 
-> 1440     rval = [access_grad_cache(elem) for elem in wrt]
   1441 
   1442     return rval

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in <listcomp>(.0)
   1438         return grad_dict[var]
   1439 
-> 1440     rval = [access_grad_cache(elem) for elem in wrt]
   1441 
   1442     return rval

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in access_grad_cache(var)
   1391                     for idx in node_to_idx[node]:
   1392 
-> 1393                         term = access_term_cache(node)[idx]
   1394 
   1395                         if not isinstance(term, Variable):

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in access_term_cache(node)
   1059             inputs = node.inputs
   1060 
-> 1061             output_grads = [access_grad_cache(var) for var in node.outputs]
   1062 
   1063             # list of bools indicating if each output is connected to the cost

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in <listcomp>(.0)
   1059             inputs = node.inputs
   1060 
-> 1061             output_grads = [access_grad_cache(var) for var in node.outputs]
   1062 
   1063             # list of bools indicating if each output is connected to the cost

[... the access_grad_cache / access_term_cache / <listcomp> frames above repeat many more times as Theano recurses through the gradient graph ...]
~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in access_grad_cache(var)
   1391                     for idx in node_to_idx[node]:
   1392 
-> 1393                         term = access_term_cache(node)[idx]
   1394 
   1395                         if not isinstance(term, Variable):

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/gradient.py in access_term_cache(node)
   1218                             )
   1219 
-> 1220                 input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)
   1221 
   1222                 if input_grads is None:

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/tensor/elemwise.py in L_op(self, inp, out, grads)
   2025                 i += 1
   2026         ds_op = DimShuffle(gz.type.broadcastable, new_dims)
-> 2027         gx = Elemwise(scalar.second)(x, ds_op(gz))
   2028         return [gx]
   2029 

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/graph/op.py in __call__(self, *inputs, **kwargs)
    251 
    252         if config.compute_test_value != "off":
--> 253             compute_test_value(node)
    254 
    255         if self.default_output is not None:

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/graph/op.py in compute_test_value(node)
    124 
    125     # Create a thunk that performs the computation
--> 126     thunk = node.op.make_thunk(node, storage_map, compute_map, no_recycling=[])
    127     thunk.inputs = [storage_map[v] for v in node.inputs]
    128     thunk.outputs = [storage_map[v] for v in node.outputs]

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/graph/op.py in make_thunk(self, node, storage_map, compute_map, no_recycling, impl)
    632             )
    633             try:
--> 634                 return self.make_c_thunk(node, storage_map, compute_map, no_recycling)
    635             except (NotImplementedError, MethodNotDefined):
    636                 # We requested the c code, so don't catch the error.

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/graph/op.py in make_c_thunk(self, node, storage_map, compute_map, no_recycling)
    598                 print(f"Disabling C code for {self} due to unsupported float16")
    599                 raise NotImplementedError("float16")
--> 600         outputs = cl.make_thunk(
    601             input_storage=node_input_storage, output_storage=node_output_storage
    602         )

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/basic.py in make_thunk(self, input_storage, output_storage, storage_map)
   1201         """
   1202         init_tasks, tasks = self.get_init_tasks()
-> 1203         cthunk, module, in_storage, out_storage, error_storage = self.__compile__(
   1204             input_storage, output_storage, storage_map
   1205         )

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/basic.py in __compile__(self, input_storage, output_storage, storage_map)
   1136         input_storage = tuple(input_storage)
   1137         output_storage = tuple(output_storage)
-> 1138         thunk, module = self.cthunk_factory(
   1139             error_storage,
   1140             input_storage,

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/basic.py in cthunk_factory(self, error_storage, in_storage, out_storage, storage_map)
   1632             for node in self.node_order:
   1633                 node.op.prepare_node(node, storage_map, None, "c")
-> 1634             module = get_module_cache().module_from_key(key=key, lnk=self)
   1635 
   1636         vars = self.inputs + self.outputs + self.orphans

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/cmodule.py in module_from_key(self, key, lnk)
   1189             try:
   1190                 location = dlimport_workdir(self.dirname)
-> 1191                 module = lnk.compile_cmodule(location)
   1192                 name = module.__file__
   1193                 assert name.startswith(location)

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/basic.py in compile_cmodule(self, location)
   1541             try:
   1542                 _logger.debug(f"LOCATION {location}")
-> 1543                 module = c_compiler.compile_str(
   1544                     module_name=mod.code_hash,
   1545                     src_code=src_code,

~/miniconda3/envs/pyrecent/lib/python3.8/site-packages/theano/link/c/cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols)
   2544             # difficult to read.
   2545             compile_stderr = compile_stderr.replace("\n", ". ")
-> 2546             raise Exception(
   2547                 f"Compilation failed (return status={status}): {compile_stderr}"
   2548             )

Exception: ('Compilation failed (return status=1): /home/daniel/.theano/compiledir_Linux-4.15--generic-x86_64-with-glibc2.10-x86_64-3.8.5-64/tmpkhf7eft_/mod.cpp: In member function ‘int {anonymous}::__struct_compiled_op_m801e114f7d97ea4676e2828bffc999a52eff3fcb82214f5ecb75d655f61a6eb1::run()’:. /home/daniel/.theano/compiledir_Linux-4.15--generic-x86_64-with-glibc2.10-x86_64-3.8.5-64/tmpkhf7eft_/mod.cpp:459:60: error: expected primary-expression before ‘{’ token.                              #pragma omp parallel for if(n>={int(config.openmp_elemwise_minsize)}).                                                             ^. At global scope:. cc1plus: warning: unrecognized command line option ‘-Wno-c++11-narrowing’. ', 'FunctionGraph(Elemwise{second}(<TensorType(float64, vector)>, <TensorType(float64, (True,))>))')

Any idea what that command line option is referring to?

Thanks!

I will try to take a look whenever I have some time. It could be some mistake/typo in the definition of the functions that shows up at compilation time. Have you tried manually calling the functions? There are some examples on how to do so at the end of the notebook.
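
In the meantime, one guess about the compilation error: the generated C code contains a literal {int(config.openmp_elemwise_minsize)} inside the OpenMP pragma, i.e. an uninterpolated placeholder, which points at a code generation issue rather than at your model. As a possible workaround (an assumption on my part, not a confirmed fix), you could try disabling OpenMP in Theano, e.g.

THEANO_FLAGS="openmp=False" python your_script.py

or set openmp = False under [global] in your .theanorc, and see whether the module compiles.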

Hi again,

This current environment seems to throw errors any time theano is used, but in my old environment (before installing the dev pymc3) I am able to call the LogLikeWithGrad directly without errors, as long as I never try to create a DensityDist, using a snippet akin to the one from the notebook:

var = tt.dvector()  # symbolic input vector, as defined in the notebook

test_gradded_op = LogLikeWithGrad(my_loglike, data, x, sigma)
test_gradded_op_grad = tt.grad(test_gradded_op(var), var)
test_gradded_op_grad_func = theano.function([var], test_gradded_op_grad)
grad_vals_2 = test_gradded_op_grad_func([mtrue, ctrue])