While playing around with this more I found an issue: when adding the default_transform=None param I was receiving the correct std, but I was getting back the wrong estimate for p:
import numpy as np
import pymc as pm

with pm.Model() as model:
    # `data` is the 0/1 observation array from earlier in the thread
    p = pm.Uniform('p', 0, 1, default_transform=None)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mle = pm.find_MAP()
    H = pm.find_hessian(mle)
    std = np.sqrt(np.diag(np.linalg.inv(H)))[0]
>>> mle == 0.5
>>> std == 0.16...
Clearly the MLE should not be 0.5; it should be 0.6...
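For reference, here's a quick closed-form sanity check of what those numbers should be (this assumes data is the same 0/1 observation array used in the model above): with a flat Uniform(0, 1) prior the MAP equals the sample proportion, and the quadratic approximation gives std = sqrt(p_hat * (1 - p_hat) / n).

# Closed-form check (assumes `data` is the 0/1 observation array from above):
# with a flat Uniform(0, 1) prior the MAP equals the MLE, i.e. the sample
# proportion, and the quadratic (Laplace) approximation to the posterior
# std is sqrt(p_hat * (1 - p_hat) / n).
n = len(data)
p_hat = data.sum() / n                      # should match the reported 0.6...
std_hat = np.sqrt(p_hat * (1 - p_hat) / n)  # should match the reported 0.16...
print(p_hat, std_hat)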
It appears to be an issue with the uniform prior. If I re-parameterize this as a beta-binom with a Beta prior with alpha=1, beta=1 (which is equivalent to Uniform(0, 1)), I get the exact same flip-flopping behavior: toggling default_transform toggles whether the MLE estimate or the std estimate is correct.
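For reference, a minimal sketch of the Beta(1, 1) version I was toggling:

with pm.Model() as model:
    # Beta(1, 1) is the same distribution as Uniform(0, 1); switch
    # default_transform between None and the default (i.e. drop the
    # argument) to see the MLE and std estimates trade places.
    p = pm.Beta('p', alpha=1, beta=1, default_transform=None)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mle = pm.find_MAP()
    H = pm.find_hessian(mle)
    std = np.sqrt(np.diag(np.linalg.inv(H)))[0]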
However, if I don't choose a uniform prior, I do get the right answer with the beta-binom model:
with pm.Model() as model:
    p = pm.Beta('p', alpha=.6, beta=.3, default_transform=None)
    y = pm.Binomial('y', n=len(data), p=p, observed=data.sum())
    mle = pm.find_MAP()
    H = pm.find_hessian(mle, vars=[p])
    std = np.sqrt(np.diag(np.linalg.inv(H)))
>>> mle == 0.6...
>>> std == 0.173
Very curious.