Find_MAP when optimizing a Potential

I mean you can get the gradient of the model joint logp with respect to any continuous parameters. Is that what you meant?

Using PyMC v5 (v3 is pretty old already; you should be able to do the same thing, but the API has changed):

import pymc as pm
import numpy as np

N = 100 
observed_sum = N * 0.5

with pm.Model() as model:
    # Define the prior for probability p
    p = pm.Uniform('p', lower=0, upper=1, transform=None)

    # Generate the Bernoulli samples
    samples = pm.Bernoulli('samples', p=p, shape=N)

    # Calculate the sum of the samples
    samples_sum = pm.Deterministic('samples_sum', pm.math.sum(samples))

    # Define the potential to penalize the difference from the observed sum
    potential = pm.Potential('potential', -(samples_sum - observed_sum) ** 2)

p_dlogp = model.compile_dlogp(vars=[p])
p_dlogp({"p": 0.6, "samples": np.repeat([0, 1], 50)})  # array([-41.66666667])

I disabled the automatic transform of p with transform=None. Otherwise you would have p_interval__ as an input instead (which is fine, and probably what you want in an optimizer, but it makes for a less clear example).
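
If the goal is MAP-style optimization of p (as in the thread title), you can hand the compiled functions to a generic optimizer. Here is a minimal sketch, assuming you hold the discrete samples fixed at one configuration; scipy.optimize.minimize and the fixed_samples name are just illustrative choices, not anything prescribed by PyMC:

from scipy.optimize import minimize

# Hypothetical fixed assignment for the discrete variable
fixed_samples = np.repeat([0, 1], 50)

p_logp = model.compile_logp()            # joint logp, Potential included
p_dlogp = model.compile_dlogp(vars=[p])  # gradient with respect to p only

def neg_logp(p_value):
    point = {"p": p_value[0], "samples": fixed_samples}
    return -p_logp(point)

def neg_dlogp(p_value):
    point = {"p": p_value[0], "samples": fixed_samples}
    return -p_dlogp(point)

res = minimize(neg_logp, x0=[0.3], jac=neg_dlogp, bounds=[(1e-6, 1 - 1e-6)])
print(res.x)  # should land near 0.5, where the Bernoulli term peaks

With samples_sum fixed at the observed value, the Potential is constant, so the optimum is driven entirely by the Bernoulli term.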