Likelihood evaluation and DensityDist


I defined a custom likelihood function that I want to use in a model via DensityDist. This likelihood is defined up to a constant, which I think is fine given the way PyMC3 works. However, I have difficulty understanding the results I get when evaluating the likelihood. Here is the corresponding code:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

N = 10

def custom_logp(t, d):
    # Returns a closure over t and d: the (unnormalized) log-likelihood
    # of x is -(1/d) * sum(x[t:t+d]**2)
    def ll_f(x):
        return - (1/d) * tt.sum(tt.pow(x[t: t + d], 2))
    return ll_f

model = pm.Model()

with model:
    t = pm.DiscreteUniform('t', lower=0, upper=N-1)
    d = pm.DiscreteUniform('d', lower=1, upper=N-t)

    X_obs = pm.DensityDist('X_obs', custom_logp(t, d), observed=np.zeros(N))

I thought the following code would print the same result four times. Instead, I get one value for the first and second queries, and a different one for the third and fourth.

print(model.logp({'t': 1, 'd': 1}))
print(model.logp({'t': 1, 'd': 2}))
print(model.logp({'t': 5, 'd': 1}))
print(model.logp({'t': 5, 'd': 2}))

I would therefore like to know how exactly the likelihood evaluation works. I could not find any explanation of how the “up to a constant” definition of the logp relates to evaluation at different parameter values.

Thank you in advance for your answers!

They are not the same because d depends on t (d = pm.DiscreteUniform('d', lower=1, upper=N-t)), so changing t also changes the logp of d.
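To make this concrete, here is a hand computation of the four values (a sketch in plain numpy, not using PyMC3 itself; `model_logp` is a hypothetical helper mirroring what `model.logp` sums). With `observed=np.zeros(N)` the custom likelihood term is always 0, so the total reduces to the priors: logp(t) = -log(10) for DiscreteUniform(0, 9), and logp(d | t) = -log(N - t), since d ranges over N - t values. The total therefore depends on t but not on d:

```python
import numpy as np

N = 10

def model_logp(t, d, x=np.zeros(N)):
    # Prior on t: DiscreteUniform(0, N-1) -> N equally likely values
    logp_t = -np.log(N)
    # Prior on d: DiscreteUniform(1, N-t) -> N-t equally likely values,
    # so this term changes when t changes
    logp_d = -np.log(N - t)
    # Custom likelihood: -(1/d) * sum(x[t:t+d]**2); zero when x is all zeros
    logp_x = -(1 / d) * np.sum(x[t:t + d] ** 2)
    return logp_t + logp_d + logp_x

print(model_logp(1, 1))  # same as model_logp(1, 2)
print(model_logp(5, 1))  # same as model_logp(5, 2), but differs from t=1
```

This reproduces the observed pattern: the first two queries agree, the last two agree, and the two pairs differ because -log(N - t) is -log(9) at t=1 but -log(5) at t=5.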

You can use a custom logp that is defined “up to a constant”, but the built-in RVs in PyMC3 retain all their normalizing constants (unlike in Stan, where y ~ normal(mu, sigma) drops the constants and you have to write target += normal_lpdf(y | mu, sigma) to keep them).
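As an illustration of the difference (a sketch computing the densities by hand with numpy, rather than through either library): the value PyMC3 reports for a standard normal at y = 0 includes the -0.5 * log(2π) normalizing constant, whereas a Stan sampling statement like y ~ normal(0, 1) contributes only the parameter-dependent part:

```python
import numpy as np

y, mu, sigma = 0.0, 0.0, 1.0

# Full log-density, constants included (what PyMC3's Normal logp evaluates to)
full = -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2

# Up-to-a-constant version (what Stan's y ~ normal(mu, sigma) contributes):
# the -0.5*log(2*pi*sigma^2) term is dropped
unnormalized = -0.5 * ((y - mu) / sigma) ** 2

print(full)          # includes the -0.5*log(2*pi) constant
print(unnormalized)  # 0.0 at y == mu
```

Dropped constants never affect sampling (they cancel in the acceptance ratio), but they do change the absolute numbers you see when you print logp values, which matters if you compare them across models.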

Thank you for your answer!