Gradient for "black box" likelihood function

Hello everyone, I have been looking at the “black box” likelihood example here:

https://docs.pymc.io/projects/examples/en/latest/case_studies/blackbox_external_likelihood_numpy.html

I have a question about the normal_gradients function, shown below.

def normal_gradients(theta, x, data, sigma):
    """
    Calculate the partial derivatives of a function at a set of values. The
    derivatives are calculated using the central difference, using an iterative
    method to check that the values converge as step size decreases.

    Parameters
    ----------
    theta: array_like
        A set of values, that are passed to a function, at which to calculate
        the gradient of that function
    x, data, sigma:
        Observed variables as we have been using so far


    Returns
    -------
    grads: array_like
        An array of gradients for each non-fixed value.
    """

    grads = np.empty(2)
    aux_vect = data - my_model(theta, x)  # /(2*sigma**2)
    grads[0] = np.sum(aux_vect * x)
    grads[1] = np.sum(aux_vect)

    return grads

I don’t understand how this function computes the gradient of the log-likelihood. Also, the docstring’s mention of an iterative central-difference method no longer seems applicable; I believe it was carried over from an older version of this example that computed the gradient numerically. Could someone please explain how the normal_gradients function works? Thank you!


@OriolAbril Moved question from GitHub to here.

I forgot to update that docstring from the original notebook (the Cython one). Here we don’t “take” the gradient of the log-likelihood numerically; instead we write a separate function, independent of the log-likelihood one, that computes the gradient from its analytical expression (which is often not available, but here it is available and quite short).
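
To spell it out, assuming the straight-line model used in the example, my_model(theta, x) = m*x + c with theta = [m, c], the Gaussian log-likelihood is (up to an additive constant)

    log L(m, c) = -sum_i (d_i - m*x_i - c)^2 / (2*sigma^2)

and differentiating with respect to m and c gives

    d(log L)/dm = sum_i (d_i - m*x_i - c) * x_i / sigma^2
    d(log L)/dc = sum_i (d_i - m*x_i - c) / sigma^2

With aux_vect = data - my_model(theta, x), those two sums are exactly grads[0] = np.sum(aux_vect * x) and grads[1] = np.sum(aux_vect), up to the 1/sigma^2 factor that the commented-out division hints at (with sigma = 1 it makes no difference).

If it helps, here is a small self-contained sketch that checks this analytic gradient against a central-difference approximation of the log-likelihood. The data, parameter values, and helper names are made up for illustration, not taken from the notebook, and I have included the 1/sigma**2 factor explicitly here.

import numpy as np

def my_model(theta, x):
    # straight-line model: theta = [m, c]
    m, c = theta
    return m * x + c

def my_loglike(theta, x, data, sigma):
    # Gaussian log-likelihood, up to an additive constant
    return -0.5 * np.sum((data - my_model(theta, x)) ** 2) / sigma**2

def normal_gradients(theta, x, data, sigma):
    # analytic gradient of my_loglike with respect to [m, c],
    # with the 1/sigma**2 factor written out explicitly
    aux_vect = (data - my_model(theta, x)) / sigma**2
    return np.array([np.sum(aux_vect * x), np.sum(aux_vect)])

# made-up data for the check
rng = np.random.default_rng(42)
x = np.linspace(0.0, 9.0, 50)
sigma = 1.0
theta_true = np.array([0.4, 3.0])
data = my_model(theta_true, x) + rng.normal(scale=sigma, size=x.size)

# central-difference approximation of the gradient at some point theta
theta = np.array([0.3, 2.5])
eps = 1e-6
numeric = np.array([
    (my_loglike(theta + eps * np.eye(2)[i], x, data, sigma)
     - my_loglike(theta - eps * np.eye(2)[i], x, data, sigma)) / (2 * eps)
    for i in range(2)
])

print(numeric)                                   # finite-difference gradient
print(normal_gradients(theta, x, data, sigma))   # analytic gradient

The two printed arrays should agree to roughly 1e-6, which is the usual sanity check before handing an analytic gradient to a black-box likelihood Op.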
