I tried to come up with a solution myself. Opinions, suggestions, and, more importantly, criticism are welcome.
Assume that the function f(x_i, \bar{\theta}) is just a linear model, such as
f(x, \bar{\theta}) = f(x, \theta_0, \theta_1) = \theta_0\cdot x + \theta_1
Here is the code:
import numpy as np
from scipy import special
import pymc as pm
import aesara.tensor as tt

print(f"Running on PyMC v{pm.__version__}")

# Generate some synthetic data (not real data, just to test the code)
dof = np.random.randint(low=1, high=20, size=10)  # low=1: dof=0 makes loggamma(dof/2) diverge
x = np.linspace(0, 10, 10)
slope = 2
intercept = 8
y = x * slope + intercept + np.random.rand(10)

# Chi-squared log-likelihood of the ratio r = y / f(x, theta0, theta1)
def log_likelihood(theta0, theta1, x, y, dof):
    r = y / (theta0 * x + theta1)
    return -r / 2 + (dof - 2) / 2 * tt.log(r) - dof / 2 * np.log(2) - special.loggamma(dof / 2)

with pm.Model() as model:
    # Priors (the exponential distribution is an arbitrary choice)
    theta0 = pm.Exponential('theta0', lam=1)
    theta1 = pm.Exponential('theta1', lam=1)
    # Potential adds the given log-probability term to the model
    like = pm.Potential('like', log_likelihood(theta0, theta1, x, y, dof))
    idata = pm.sample(2000, tune=2000, return_inferencedata=True)
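A quick way to check the hand-written log-density, independent of PyMC, is to evaluate the same formula on plain NumPy arrays and compare it with scipy.stats.chi2.logpdf:

```python
import numpy as np
from scipy import stats, special

def chi2_logpdf(r, dof):
    # Same formula as in log_likelihood above, on plain NumPy values
    return -r / 2 + (dof - 2) / 2 * np.log(r) - dof / 2 * np.log(2) - special.loggamma(dof / 2)

r = np.array([0.5, 1.0, 3.0, 7.5])
dof = np.array([1, 2, 5, 10])
print(np.allclose(chi2_logpdf(r, dof), stats.chi2.logpdf(r, dof)))  # True
```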
The data are not real; my aim here is not to recover the true parameters, just to test that the code runs.
I wrote the log-likelihood of the chi-squared distribution myself and used Potential to include it in the model. Is it correct that I need to put the log-likelihood in Potential? The documentation does not say much. As far as I understand, Potential should behave like DensityDist, but with a different syntax.