Dear experts,

I am performing some tests with a fairly basic model, where I define the likelihood of a 1-bin distribution and I use a uniform prior. In addition, I introduce a Gaussian nuisance parameter in the likelihood.

This is implemented as follows:

import pymc3 as pm
import numpy as np

model = pm.Model()

with model:
    truth = pm.Uniform('truth', lower=0., upper=300.)
    gaus = pm.Normal('gaus_syst1', mu=0., sigma=1.0)
    pois = pm.Poisson('poisson', mu=truth*(1 + 0.1*gaus), observed=100)
    trace = pm.sample(10000, tune=1000, nuts_kwargs={'target_accept': 0.95})

print('NP mean = {}'.format(np.mean(trace['gaus_syst1'])))
print('NP rms = {}'.format(np.std(trace['gaus_syst1'])))
print('truth mean = {}'.format(np.mean(trace['truth'])))

I get the following results:

NP mean = -0.10184846101085772

NP rms = 1.0066925641532645

truth mean = 103.01576766052715

My expectation is that the maximum of the posterior probability should be at (truth, gaus) = (100, 0), since with a uniform prior on truth this coincides with the maximum-likelihood point.
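As a quick sanity check of this expectation, here is a sketch (using plain NumPy/SciPy, independent of PyMC3) that maximizes the joint log-posterior of the same model directly; the function name `neg_log_post` and the starting point are my own choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(x):
    # Negative joint log-posterior, up to additive constants:
    # flat prior on truth, N(0, 1) prior on gaus,
    # Poisson likelihood for the single observed count of 100,
    # with rate mu = truth * (1 + 0.1 * gaus) as in the model above.
    truth, gaus = x
    mu = truth * (1.0 + 0.1 * gaus)
    return -(100.0 * np.log(mu) - mu - 0.5 * gaus**2)

# Start the optimizer away from the expected mode.
res = minimize(neg_log_post, x0=[90.0, 0.5])
print(res.x)  # joint mode lands at truth ~ 100, gaus ~ 0
```

So the joint posterior mode is indeed at (100, 0); the numbers I quote above are posterior means from the trace, not the mode.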

So I can't understand why the posterior distribution of the Gaussian nuisance parameter (mean = -0.1) is shifted with respect to its prior (mean = 0).

Am I missing something obvious?