My model has an RV skill whose prior is a beta distribution:
skill = pm.Beta('skill', 2.0, 5.0)
The trace collects values for both skill and for its transformation skill_logodds__.
Unfortunately, my model exhibits some divergences. While investigating them, I examine the chain warnings, which collect only the transformed value skill_logodds__. Fortunately, I can map a log-odds value back to the domain of skill with scipy’s expit.
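For concreteness, this is how I recover a skill value from a log-odds value taken out of a chain warning (a minimal sketch; the numeric value is just a placeholder):

from scipy.special import expit

# log-odds value pulled from a divergence warning (placeholder number)
skill_logodds = -1.3
# expit inverts the log-odds transform, so this lands back in (0, 1), skill's support
skill_value = expit(skill_logodds)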
So far so good.
My model also has an RV winnable whose prior is a truncated normal:
winnable = pm.TruncatedNormal('winnable', mu=winnable_mu, sigma=winnable_sigma, lower=0, upper=opportunities)
The trace collects values for both winnable and for its transformation winnable_interval__. But as with skill, the chain warnings collect only winnable_interval__.
How can I transform a value from the interval (as collected) back to the domain of the original RV?
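My best guess, based on my (possibly wrong) understanding of the interval transform for a finite (lower, upper) range, is a shifted and scaled expit, but I’d appreciate confirmation:

from scipy.special import expit

def interval_backward(y, lower, upper):
    # my guess at the inverse of the interval transform:
    # squash the unconstrained value into (0, 1), then rescale to (lower, upper)
    return lower + (upper - lower) * expit(y)

# placeholder value from a chain warning; 10 stands in for this model's opportunities
winnable_value = interval_backward(-0.7, lower=0, upper=10)

Is that the right inverse, or does the interval transform do something different?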