Inconsistent logp: evaluating the results of find_MAP on a custom likelihood

I am having a hard time interpreting the results of find_MAP().
These should maximize the value of the log likelihood (in my case, a custom one).

But when I plug the results back into my likelihood function, I get a different logp value. What does that mean?

from pymc3 import find_MAP
import numpy as np
import theano.tensor as T
import pymc3 as pm

y = np.random.randn(100)

def g0(beta, theta):
    def g1(value):
        # Gumbel log-density with location beta and scale theta
        scaled = (value - beta) / theta
        logp = -scaled - T.exp(-scaled) - T.log(theta)
        return logp
    return g1

with pm.Model() as model:    
    loc = pm.Normal('loc')
    scale = pm.Uniform('scale', 1., 10.)
    pm.DensityDist('gumbel', g0(loc, scale), observed=y)
mm = find_MAP(model=model)

logp = -4.3857, ||grad|| = 4.2435e-05: 100%|██████████| 25/25 [00:00<00:00, 3058.14it/s]

{'loc': array(0.55714553),
 'scale_interval__': array(-14.6132839),
 'scale': array(1.00000405)}

Now, calling my likelihood function with the output parameters gives me a different logp:

g0(mm['loc'], mm['scale'])(y).eval()

The per-point evaluation looks correct. However, I don't understand why find_MAP() reports logp = -4.3857.

Because find_MAP maximizes the model logp; in your case you forgot to also add the prior logp :slight_smile:
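To illustrate the decomposition numerically, here is a pymc3-free sketch (the parameter values are made up for illustration) showing that the quantity find_MAP maximizes is the likelihood logp plus the prior logp. Note that pymc3 additionally works on transformed parameters (hence the `scale_interval__` entry), which adds a Jacobian term, so the numbers will not match the run above exactly:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
y = rng.randn(100)          # observed data
loc, scale = 0.56, 1.5      # candidate MAP values (illustrative)

# Custom Gumbel log-likelihood, summed over the observations
z = (y - loc) / scale
loglike = np.sum(-z - np.exp(-z) - np.log(scale))

# Prior terms matching the model: loc ~ Normal(0, 1), scale ~ Uniform(1, 10)
logprior = stats.norm.logpdf(loc) + stats.uniform.logpdf(scale, loc=1., scale=9.)

# find_MAP maximizes the joint (model) logp, not the likelihood alone
model_logp = loglike + logprior
print(loglike, model_logp)  # the two values differ by the prior logp
```

Evaluating only the likelihood at the MAP point, as in the question, therefore reproduces `loglike`, not the `model_logp` that the optimizer reports.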


Yes, I figured this out. A bit too late though :slight_smile:
On a side note, I got distracted by the fact that the logp was not deterministic; I was getting different results on each run. Going through the source code, find_MAP seems to use scipy's optimizer.

Yes, and sometimes it can fail to find the global maximum.
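To see the scipy dependence concretely, here is a pymc3-free sketch (the starting points and method are illustrative) that minimizes the negative Gumbel log-likelihood directly with scipy.optimize.minimize. With a fixed starting point the result is deterministic; different starts can land in different optima when the logp is multimodal:

```python
import numpy as np
from scipy import optimize

rng = np.random.RandomState(0)
y = rng.randn(100)

def neg_loglike(params):
    # Negative Gumbel log-likelihood in (loc, log_scale); the log
    # parameterization keeps the scale positive during optimization
    loc, log_scale = params
    scale = np.exp(log_scale)
    z = (y - loc) / scale
    return -np.sum(-z - np.exp(-z) - np.log(scale))

# Two different fixed starting points: each run is deterministic,
# but the two starts need not agree for a multimodal objective
res_a = optimize.minimize(neg_loglike, x0=[0.0, 0.0], method='Nelder-Mead')
res_b = optimize.minimize(neg_loglike, x0=[2.0, 1.0], method='Nelder-Mead')
print(res_a.x, res_b.x)
```

Passing an explicit `start` point to find_MAP should similarly make its result reproducible from run to run.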