Hello!
While trying to implement a specific model, I noticed that the Metropolis step seemed to never move. I peeked into the PyMC3 source code to add some prints and saw that delta_logp was always very negative. I'm working with a black-box likelihood via pm.Potential, and judging by my printed log-likelihoods, I shouldn't be getting such low delta_logp values, as far as I understand.
Here is a quick output example:
```
Sequential sampling (2 chains in 1 job)
Metropolis: [law]
Sampling chain 0, 0 divergences: 0%| | 2/1100 [00:00<03:10, 5.77it/s]
returned log-likelihood: [[array(-29.67120048)]]
returned log-likelihood: [[array(-29.65408013)]]
delta_logp: -6.347108208972685
Accepted ? False
returned log-likelihood: [[array(-29.75667088)]]
returned log-likelihood: [[array(-29.65408013)]]
delta_logp: -9.220555196247574
Accepted ? False
Sampling chain 0, 0 divergences: 0%| | 4/1100 [00:00<02:37, 6.94it/s]
returned log-likelihood: [[array(-29.97665752)]]
returned log-likelihood: [[array(-29.65408013)]]
delta_logp: -10.00953635258783
Accepted ? False
returned log-likelihood: [[array(-29.70574403)]]
returned log-likelihood: [[array(-29.65408013)]]
delta_logp: -8.004581846022162
Accepted ? False
Sampling chain 0, 0 divergences: 1%| | 6/1100 [00:00<02:20, 7.76it/s]
returned log-likelihood: [[array(-29.62658475)]]
returned log-likelihood: [[array(-29.65408013)]]
delta_logp: -9.647356751319833
Accepted ? False
```
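To make the discrepancy concrete, here is the likelihood-only difference for the first proposal, with the numbers copied from the output above (the subtraction is mine):

```python
ll_current = -29.65408013   # current log-likelihood, from the output above
ll_proposed = -29.67120048  # proposed log-likelihood, from the output above
print(ll_proposed - ll_current)  # about -0.0171, yet delta_logp was -6.347
```

So roughly -6.33 of that delta_logp is coming from something other than the printed likelihoods, which is the part I don't understand.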
Do these numbers seem fine to you? If so, could you please explain how such small log-likelihood differences can produce such large (negative) delta_logp values?
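For reference, here is my understanding of the Metropolis accept/reject step as a self-contained NumPy sketch (the function and variable names are mine, not PyMC3's): delta_logp is the difference of the *full* model log-probability between the proposed and current points, and a proposal is accepted when log(u) < delta_logp for u ~ Uniform(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(logp_proposed, logp_current):
    """Standard Metropolis accept/reject for a symmetric proposal:
    accept with probability min(1, exp(delta_logp))."""
    delta_logp = logp_proposed - logp_current
    return np.log(rng.uniform()) < delta_logp

# With delta_logp around -8, the acceptance probability is exp(-8) ~ 3e-4,
# so the chain essentially never moves.
print(np.exp(-8.0))
```

If that understanding is right, then with delta_logp consistently in the -6 to -10 range, the behavior I'm seeing (nothing ever accepted) is expected; what I can't explain is why delta_logp is so negative in the first place.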
Thanks a lot in advance.