Hi all,
While debugging my program I ran into something I don't understand: I can clearly see that the random variables are changing between draws, but the pm.Deterministic stays the same.
For example, the following code:
```python
import aesara as ae
import aesara.tensor as tt
import pymc as pm

with pm.Model() as model:
    D = tt.as_tensor_variable(5, dtype="int64")
    μ = pm.HalfCauchy('μ', 5.0, initval=1.0)
    σ = pm.HalfCauchy('σ', 6.0, initval=2.0)
    # Lognormal component parameterized by its own mean μ and std σ
    # (moment-matching to the parameters of the underlying normal)
    σlog = tt.sqrt(tt.log((σ / μ)**2 + 1))
    μlog = tt.log(μ) - σlog**2 / 2
    comp = pm.Lognormal.dist(mu=μlog, sigma=σlog)
    # CDF of the component evaluated at t + 0.5 for t = 0 … D
    p, _ = ae.scan(lambda t: tt.exp(pm.logcdf(comp, t + 0.5)),
                   sequences=tt.arange(0, D + 1))
    pm.Deterministic('p', p)
    trace = pm.sample(3, step=pm.Metropolis(), tune=0, chains=1,
                      progressbar=False, init='advi')
```
and then inspecting the trace:

```python
display(trace.posterior['μ'][0].values)
display(trace.posterior['σ'][0].values)
trace.posterior['p'][0].values
```

produces the output:

```
array([1.26028111, 0.42585018, 0.53549947])
array([2.        , 2.77220024, 2.4475724 ])
array([[0.2183644 , 0.51650629, 0.66358024, 0.7495661 , 0.80524404,
        0.84378139],
       [0.2183644 , 0.51650629, 0.66358024, 0.7495661 , 0.80524404,
        0.84378139],
       [0.2183644 , 0.51650629, 0.66358024, 0.7495661 , 0.80524404,
        0.84378139]])
```
As you can see, μ and σ are changing, but p stays the same across all draws. In my opinion p should be updated as well, since the lognormal component it is computed from changes with μ and σ.
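To double-check my expectation outside of PyMC, here is a pure-NumPy sketch of the same p computation (my own reimplementation, writing the lognormal CDF out with math.erf, and taking two of the (μ, σ) pairs from the trace output above); the resulting vectors clearly differ, which is why I would expect the Deterministic to differ between draws too:

```python
import math

import numpy as np

def p_vector(mu, sigma, D=5):
    """Discretized lognormal CDF at t + 0.5 for t = 0..D, where the
    lognormal is moment-matched to have mean mu and std sigma."""
    s_log = math.sqrt(math.log((sigma / mu) ** 2 + 1))
    m_log = math.log(mu) - s_log ** 2 / 2

    def cdf(x):
        # Lognormal CDF via the error function
        return 0.5 * (1 + math.erf((math.log(x) - m_log) / (s_log * math.sqrt(2))))

    return np.array([cdf(t + 0.5) for t in range(D + 1)])

# (μ, σ) pairs taken from the first two posterior draws shown above
p0 = p_vector(1.26028111, 2.0)
p1 = p_vector(0.42585018, 2.77220024)
print(np.allclose(p0, p1))  # → False (the two vectors differ)
```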
A second, minor observation: the initval arguments are only respected the first time I run the code; on subsequent runs their values seem to be ignored. I wonder whether that is the intended behavior.
Thank you for your help; any advice would be very welcome, as I have already spent quite a lot of time debugging this.
-Andrei