Hello everyone, I have been looking at the “black box” likelihood example, here.
I have a question about the normal_gradients function, shown below.
```python
def normal_gradients(theta, x, data, sigma):
    """
    Calculate the partial derivatives of a function at a set of values. The
    derivatives are calculated using the central difference, using an
    iterative method to check that the values converge as step size
    decreases.

    Parameters
    ----------
    theta: array_like
        A set of values, that are passed to a function, at which to calculate
        the gradient of that function
    x, data, sigma:
        Observed variables as we have been using so far

    Returns
    -------
    grads: array_like
        An array of gradients for each non-fixed value.
    """
    grads = np.empty(2)
    aux_vect = data - my_model(theta, x)  # /(2*sigma**2)
    grads[0] = np.sum(aux_vect * x)
    grads[1] = np.sum(aux_vect)
    return grads
```
I don’t understand how this function takes the gradient of the log-likelihood. Also, I believe the docstring’s description of an iterative central-difference method no longer applies: an older version of this example used an iterative method to calculate the gradient, but the current code does not. Could someone please explain how the normal_gradients function works? Thank you!
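In case it helps to show what I have tried so far: here is a minimal sketch where I check the two returned values against central finite differences of a Gaussian log-likelihood. I am assuming `my_model(theta, x) = theta[0] * x + theta[1]` (a straight line, as elsewhere in the example) and `sigma = 1`, so the commented-out `/(2*sigma**2)` factor does not matter.

```python
import numpy as np

def my_model(theta, x):
    # Assumed form of the model from the example: a straight line m*x + c
    m, c = theta
    return m * x + c

def log_likelihood(theta, x, data, sigma):
    # Gaussian log-likelihood, up to an additive constant
    return -0.5 * np.sum((data - my_model(theta, x)) ** 2) / sigma**2

def normal_gradients(theta, x, data, sigma):
    grads = np.empty(2)
    aux_vect = data - my_model(theta, x)  # /(2*sigma**2)
    grads[0] = np.sum(aux_vect * x)
    grads[1] = np.sum(aux_vect)
    return grads

# Fake observed data under the assumed model
rng = np.random.default_rng(0)
x = np.linspace(0.0, 9.0, 10)
sigma = 1.0
theta = np.array([0.5, 2.0])
data = my_model(theta, x) + rng.normal(scale=sigma, size=x.size)

# Central finite differences of the log-likelihood w.r.t. each parameter
eps = 1e-6
num_grads = np.empty(2)
for i in range(2):
    tp, tm = theta.copy(), theta.copy()
    tp[i] += eps
    tm[i] -= eps
    num_grads[i] = (
        log_likelihood(tp, x, data, sigma)
        - log_likelihood(tm, x, data, sigma)
    ) / (2 * eps)

print(np.allclose(normal_gradients(theta, x, data, sigma), num_grads, rtol=1e-4))
```

With `sigma = 1` the two agree, which makes me think the function returns the analytic gradient of the Gaussian log-likelihood (with the `1/sigma**2` factor dropped), but I would appreciate confirmation.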