Behrens' Bayesian Learner Model: How to share parameters between steps?

It looks like the -inf appears at the sampling stage. When I print `bayesian_lerner_model.check_test_point()`, all of the `r_logodds__` variables share the same logp of -1.87.

But when I start sampling, it raises a SamplingError like this:

```
Initial evaluation results:
k                  -0.92
v                -212.25
r0_logodds__       -1.44
r1_logodds__       -1.61
r2_logodds__       -1.60
                   ...  
r104_logodds__     -1.49
r105_logodds__     -1.67
r106_logodds__      -inf
r107_logodds__     -1.35
y                 -77.09
Name: Log-probability of test_point, Length: 111, dtype: float64
```

I don't know why there is a difference between `check_test_point()` and the starting point that sampling actually evaluates.
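For context on how such a mismatch can arise in general: `check_test_point()` evaluates the log-probability at the model's test point, while the sampler may start from a perturbed point (PyMC's default NUTS initialization jitters the start), so the two can disagree. Below is a toy NumPy sketch, not Behrens' model or PyMC itself, showing one numerical mechanism: a log-odds parameter that drifts to an extreme value makes the sigmoid underflow to exactly 1.0 in float64, so a finite likelihood term becomes -inf. The magnitudes are exaggerated for illustration.

```python
import numpy as np

def bernoulli_logp(theta, y):
    """Log-probability of Bernoulli observations y with p = sigmoid(theta).

    theta plays the role of a log-odds-transformed parameter
    (like the r_logodds__ variables in the traceback above).
    """
    p = 1.0 / (1.0 + np.exp(-theta))
    # Select the relevant term per observation: log(p) for successes,
    # log(1 - p) for failures.
    return np.sum(np.where(y == 1, np.log(p), np.log1p(-p)))

y = np.array([1, 1, 0])  # toy data: the single failure forces a log(1-p) term

theta_test = 0.0      # a well-behaved "test point": finite logp
theta_extreme = 38.0  # extreme start: sigmoid(38) rounds to exactly 1.0

print(bernoulli_logp(theta_test, y))     # finite (3 * log(0.5) ≈ -2.079)
print(bernoulli_logp(theta_extreme, y))  # -inf: log1p(-1.0)
```

So a single variable at -inf in the initial evaluation, while `check_test_point()` looks fine, points at the starting point itself rather than the model structure.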