I am working on the Conditional Autoregressive Range (CARR) model, defined as follows:
\begin{aligned}
R_t &= \lambda_t \varepsilon_t \\
\lambda_t &= \omega + \sum_{i=1}^{q} \alpha_i R_{t-i} + \sum_{j=1}^{p} \beta_j \lambda_{t-j} \\
\varepsilon_t &\sim \text{iid } f(\cdot).
\end{aligned}
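(For intuition, here is a minimal numpy sketch that simulates a CARR(1, 1) process under this definition; the parameter values are illustrative assumptions, not estimates.)

# Minimal simulation sketch of a CARR(1, 1) with unit-mean Weibull errors.
# Parameter values are illustrative assumptions only.
import numpy as np
from scipy.special import gamma as gamma_fn

np.random.seed(0)
T, omega, alpha1, beta1, theta = 1000, 0.1, 0.2, 0.7, 2.0
R, lam = np.zeros(T), np.zeros(T)
lam[0] = R[0] = omega / (1 - alpha1 - beta1)  # start at the unconditional mean
for t in range(1, T):
    lam[t] = omega + alpha1 * R[t - 1] + beta1 * lam[t - 1]
    eps = np.random.weibull(theta) / gamma_fn(1 + 1 / theta)  # E[eps] = 1
    R[t] = lam[t] * eps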
This model is similar to the GARCH model, but since R_t is always positive, the error term follows a Weibull distribution (which generalizes the exponential) rather than a normal distribution. The log likelihood under the Weibull distribution is:
\begin{aligned}
L(\alpha_i, \beta_j, \theta ; R_1, R_2, \ldots, R_T) = \sum_{t=1}^{T} \left[ \ln\!\left(\frac{\theta}{R_t}\right) + \theta \ln\!\left(\frac{\Gamma(1+1/\theta) R_t}{\lambda_t}\right) - \left(\frac{\Gamma(1+1/\theta) R_t}{\lambda_t}\right)^{\theta} \right].
\end{aligned}
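For completeness, this expression follows from the density of R_t = \lambda_t \varepsilon_t when \varepsilon_t is Weibull with shape \theta and unit mean, so its scale is 1/\Gamma(1+1/\theta):

f(R_t \mid \lambda_t) = \frac{\theta}{R_t}\left(\frac{\Gamma(1+1/\theta)\, R_t}{\lambda_t}\right)^{\theta} \exp\!\left[-\left(\frac{\Gamma(1+1/\theta)\, R_t}{\lambda_t}\right)^{\theta}\right];

taking \ln f and summing over t gives the log likelihood above.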
In this log likelihood function, I need to calculate \Gamma(1+1/\theta). I implemented the model in PyMC3 (with an additional exogenous series M entering \lambda_t with coefficients \gamma), as shown below:
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from theano.tensor import gammaln

# R: observed ranges, M: exogenous series, p, q, l: lag orders, T = len(R)
with pm.Model() as model:
    # Priors for the parameters
    alpha = pm.Beta("alpha", alpha=1, beta=10, shape=q, testval=0.1)
    beta = pm.Beta("beta", alpha=1, beta=10, shape=p, testval=0.1)
    gamma = pm.Gamma("gamma", alpha=2, beta=0.5, shape=l)
    omega = pm.HalfCauchy("omega", beta=1.0, testval=0.2)
    theta = pm.Gamma("theta", alpha=2, beta=0.5, testval=1.0)

    # Stationarity constraint: sum(alpha) + sum(beta) < 1 (two sums so it also works when p != q)
    pm.Potential("constraint", tt.switch(tt.sum(alpha) + tt.sum(beta) < 1, 0, -np.inf))

    def logp(R, M):
        # Build the conditional mean recursion for lambda_t
        lambda_t = tt.zeros(R.shape[0])
        for t in range(max(p, q, l), T):
            lambda_t = tt.set_subtensor(
                lambda_t[t],
                omega
                + tt.sum(alpha * R[t - q:t][::-1])
                + tt.sum(beta * lambda_t[t - p:t][::-1])
                + tt.sum(gamma * M[t - l:t][::-1]),
            )
        # Drop the burn-in observations used to start the recursion
        R = R[max(p, q, l):]
        lambda_t = lambda_t[max(p, q, l):]
        # Gamma(1 + 1/theta), computed as exp(gammaln(.)) for stability
        g_theta = tt.exp(gammaln(1 + 1 / theta))
        # Weibull log likelihood from the formula above
        return tt.sum(
            pm.math.log(theta / R)
            + theta * pm.math.log(g_theta * R / lambda_t)
            - (g_theta * R / lambda_t) ** theta
        )

    weibull_likelihood = pm.Potential("weibull_likelihood", logp(R, M))
    # Alternative I also tried:
    # weibull_likelihood = pm.DensityDist('weibull_likelihood', logp=logp,
    #                                     observed={'R': R, 'M': M})
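Before sampling, one check I can run to locate the NaN is to evaluate the log probability of each term at the test point (a debugging sketch, using the model context defined above):

with model:
    # Per-variable log probabilities at the test point; a NaN or -inf row
    # points at the offending term.
    print(model.check_test_point())
    # Total model log probability, which also includes the Potential terms.
    print(model.logp(model.test_point))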
However, sampling fails with the following error:
Auto-assigning NUTS sampler...
Initializing NUTS using advi...
|----------------------------------------| 0.00% [0/200000 00:00<?]
Traceback (most recent call last):
File "/home/.pycharm_helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
File "/home/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/projects/Gekko/src/g_carr_weibull.py", line 113, in <module>
trace = pm.sample(draws=2000, tune=1000, target_accept=0.9, max_treedepth=20, return_inferencedata=True, init='advi')
File "/home/.local/lib/python3.9/site-packages/deprecat/classic.py", line 215, in wrapper_function
return wrapped_(*args_, **kwargs_)
File "/home/.local/lib/python3.9/site-packages/pymc3/sampling.py", line 512, in sample
start_, step = init_nuts(
File "/home/.local/lib/python3.9/site-packages/pymc3/sampling.py", line 2158, in init_nuts
approx = pm.fit(
File "/home/.local/lib/python3.9/site-packages/pymc3/variational/inference.py", line 832, in fit
return inference.fit(n, **kwargs)
File "/home/.local/lib/python3.9/site-packages/pymc3/variational/inference.py", line 150, in fit
state = self._iterate_with_loss(0, n, step_func, progress, callbacks)
File "/home/.local/lib/python3.9/site-packages/pymc3/variational/inference.py", line 238, in _iterate_with_loss
raise FloatingPointError("\n".join(errmsg))
FloatingPointError: NaN occurred in optimization.
The current approximation of RV `beta_logodds__`.ravel()[0] is NaN.
The current approximation of RV `alpha_logodds__`.ravel()[0] is NaN.
The current approximation of RV `gamma_log__`.ravel()[0] is NaN.
The current approximation of RV `gamma_log__`.ravel()[1] is NaN.
The current approximation of RV `omega_log__`.ravel()[0] is NaN.
The current approximation of RV `theta_log__`.ravel()[0] is NaN.
Try tracking this parameter: http://docs.pymc.io/notebooks/variational_api_quickstart.html#Tracking-parameters
I found that if I fix \alpha_1 at 1, everything works fine; the errors above appear only when \alpha_1 is given a prior distribution. By "fixing" I mean replacing the Beta prior with a constant, along these lines (a sketch):
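alpha = tt.as_tensor_variable(np.ones(q))  # alpha_1 held fixed at 1 instead of a Beta prior

Can anyone help me identify the issue?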