So a quick follow-up on that: as expected it returns an asymmetric Student-T, which is good, but the two branches aren't normalized to the same peak density:
```python
import pymc3 as pm
import theano.tensor as tt

modeled = 0
observed = -0.001

# pick the wide scale (20) below the mode, the narrow scale (1) above it
sigma = tt.switch(tt.gt(observed, modeled), 1., 20.)
print(pm.StudentT.dist(2, mu=modeled, sigma=sigma).logp(observed).eval())

observed = 0.001
sigma = tt.switch(tt.gt(observed, modeled), 1., 20.)
print(pm.StudentT.dist(2, mu=modeled, sigma=sigma).logp(observed).eval())
```
Returns:

```
-4.0354533
-1.0397215
```
I would have assumed the resulting distribution has to be continuous at the mode, but maybe pymc3 does something under the hood I don't understand.
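For what it's worth, the gap between the two values looks like it is exactly the log of the ratio of the two scales: a Student-T density at its mode scales as 1/sigma, so the sigma=20 branch sits log(20) ≈ 2.996 nats below the sigma=1 branch, and stitching the two halves together at mu leaves a jump of that size. A minimal sketch checking this with scipy.stats.t (assuming scipy's df/loc/scale parameterization matches pm.StudentT's nu/mu/sigma, which I believe it does):

```python
import numpy as np
from scipy import stats

# logp just below the mode with the wide scale, and just above with the narrow one
left = stats.t.logpdf(-0.001, df=2, loc=0.0, scale=20.0)
right = stats.t.logpdf(0.001, df=2, loc=0.0, scale=1.0)

print(left, right)                # matches the pymc3 values above
print(right - left, np.log(20))  # the jump is log(sigma_left / sigma_right)
```

So pymc3 isn't doing anything extra under the hood: each branch is a properly normalized Student-T on its own, and the switch just evaluates one or the other, which is why the spliced density is discontinuous unless the branches are rescaled to share a common peak (as in a two-piece/split distribution).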