Inference from logp values



I was wondering if anyone could clarify the significance of the logp values obtained for a model's test point.

for RV in dir_model.basic_RVs:
    print(RV.name, RV.logp(dir_model.test_point))

When I print the model's test point, I find all-zero arrays. The logp values are negative; for a few variables they are large, and for others very small. What conclusions can I draw from them?



The logp of the test_point does not have much meaning on its own; it is mostly useful for checking whether any returned logp is non-finite (inf, NaN), which would indicate a problem with the model specification or starting values. You don't use these values directly, because they are not necessarily normalized, so a single value cannot be interpreted in isolation. They become interpretable only when you evaluate the log-probability at multiple points in the parameter space and compare them - that's how optimizers and samplers work.
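To illustrate why a single logp value is uninformative but differences between values are meaningful, here is a minimal sketch (not PyMC code, just a hand-written unnormalized log-density of a standard normal - the normalizing constant is deliberately dropped, as it would cancel anyway):

```python
import math

def logp(x):
    # Unnormalized log-density of a standard normal:
    # the true log-density is -x**2/2 - log(sqrt(2*pi)),
    # but we drop the constant, as samplers effectively do.
    return -x**2 / 2

# The absolute values depend on the dropped constant,
# so neither number means anything in isolation:
a = logp(0.0)   # 0.0
b = logp(2.0)   # -2.0

# The DIFFERENCE is meaningful: exp(a - b) is the density
# ratio between the two points, and the dropped constant
# cancels out. Here x=0 is about 7.4x more probable than x=2.
ratio = math.exp(a - b)
print(ratio)
```

The same logic explains why comparing logp across points (as MCMC acceptance steps do) is valid even when the values themselves look arbitrarily large or small.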