Hello,

I’m trying to find a way to express a prior on the ‘gain’ of an A/B test.

This is the “basic setting”, without a prior:

```python
import pymc3 as pm

visitors = [10, 10]
success = [5, 5]

with pm.Model() as model:
    a = pm.Beta('a', alpha=1, beta=1)
    b = pm.Beta('b', alpha=1, beta=1)
    obs_a = pm.Binomial('obs_a', n=visitors[0], p=a, observed=success[0])
    obs_b = pm.Binomial('obs_b', n=visitors[1], p=b, observed=success[1])
    g = pm.Deterministic('g', (b - a) / a)
    trace = pm.sample(tune=2000)
```

a and b are the click rates, and g is the relative gain; everything works as expected.
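For concreteness, the relative gain is just `(b - a) / a`; a quick sanity check in plain Python, with arbitrary illustration values:

```python
# Relative gain of variant B over variant A, using the same
# definition as in the model: g = (b - a) / a.
a = 0.50  # click rate of A (arbitrary illustration value)
b = 0.60  # click rate of B
g = (b - a) / a
print(g)  # ~0.2, i.e. B converts about 20% better than A
```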

Now, I’m adding a prior to my model. Note: the prior is expressed on g (the gain), not on the click rates.

I’m doing it with a potential and a normal pdf (using the same data).
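The way I understand it, the potential simply adds the normal log-density of g as an extra term to the model’s joint log-probability. A standalone sanity check of that term with SciPy, outside of PyMC (the value of g here is just an arbitrary example):

```python
from scipy.stats import norm

# The term pm.Potential('p', prior.logp(g)) contributes to the model:
# log N(g | mu=0, sd=10), evaluated at the current value of g.
g = 0.2  # an example value of the relative gain
penalty = norm.logpdf(g, loc=0, scale=10)
print(penalty)  # close to the density's log-normalisation constant,
                # since sd=10 makes the prior nearly flat around 0
```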

```python
with pm.Model() as model:
    a = pm.Beta('a', alpha=1, beta=1)
    b = pm.Beta('b', alpha=1, beta=1)
    obs_a = pm.Binomial('obs_a', n=visitors[0], p=a, observed=success[0])
    obs_b = pm.Binomial('obs_b', n=visitors[1], p=b, observed=success[1])
    g = pm.Deterministic('g', (b - a) / a)
    prior = pm.Normal.dist(mu=0, sd=10)
    pot = pm.Potential('p', prior.logp(g))
    trace = pm.sample(tune=2000)
```

Everything works OK:

I’m getting more or less the same posterior; since my prior was very weak, this result makes sense.

Now I will use a stronger prior with a smaller sd value, expecting to obtain a narrower posterior for g.
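To get a feel for how much stronger this prior is, compare the log-density penalty that a gain of g = 0.5 incurs under sd = 10 versus sd = 0.1 (a plain-Python sketch of the normal log-pdf, not PyMC code):

```python
import math

def normal_logpdf(x, mu, sd):
    """Log-density of Normal(mu, sd) at x -- the term the Potential adds."""
    return -math.log(sd * math.sqrt(2 * math.pi)) - 0.5 * ((x - mu) / sd) ** 2

# Penalty relative to g = 0, where the prior density peaks:
weak = normal_logpdf(0.5, 0, 10.0) - normal_logpdf(0.0, 0, 10.0)   # -0.00125
strong = normal_logpdf(0.5, 0, 0.1) - normal_logpdf(0.0, 0, 0.1)   # -12.5
print(weak, strong)
```

So with sd = 0.1, a gain of 0.5 costs 12.5 nats of log-probability instead of essentially nothing, which is why I expect the posterior of g to concentrate near 0.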

```python
with pm.Model() as model:
    a = pm.Beta('a', alpha=1, beta=1)
    b = pm.Beta('b', alpha=1, beta=1)
    obs_a = pm.Binomial('obs_a', n=visitors[0], p=a, observed=success[0])
    obs_b = pm.Binomial('obs_b', n=visitors[1], p=b, observed=success[1])
    g = pm.Deterministic('g', (b - a) / a)
    prior = pm.Normal.dist(mu=0, sd=0.1)  # stronger prior
    pot = pm.Potential('p', prior.logp(g))
    trace = pm.sample(tune=2000)
```

This leads to some diagnostic problems:

```
Sampling 4 chains for 2_000 tune and 1_000 draw iterations (8_000 + 4_000 draws total) took 4 seconds.
There was 1 divergence after tuning. Increase `target_accept` or reparameterize.
There was 1 divergence after tuning. Increase `target_accept` or reparameterize.
The acceptance probability does not match the target. It is 0.7215948572359366, but should be close to 0.8. Try to increase the number of tuning steps.
There were 3 divergences after tuning. Increase `target_accept` or reparameterize.
The acceptance probability does not match the target. It is 0.7091417042624631, but should be close to 0.8. Try to increase the number of tuning steps.
The number of effective samples is smaller than 25% for some parameters.
```

I don’t understand what the problem is, nor its origin.

The posterior of g is narrower, as expected, but it also looks a little less clean than the previous ones, so I guess there is a problem, but I have no clue how to solve it.

I’ve tried more tuning steps (as suggested by the diagnostic message), but I still get the warning “The number of effective samples is smaller than 25% for some parameters.”, and the posterior doesn’t look any better… So I guess the problem is deeper than that…

Any help would be appreciated.

regards,

Hubert.