Okay, I implemented this with a nonzero mean, but I would love to know whether I can write the scan function so it doesn't return the full matrix (which forces me to take the diagonal):
I compute simulated data as follows:
```python
import numpy as np

N = 25
omega = 0.5
alpha_1 = 0.8
beta_1 = 0.15
mu = 1.5

sig = [1.0]
y = [np.random.normal(mu, 1.0)]
for i in range(1, N):
    sig.append(
        np.sqrt(
            omega + alpha_1 * np.square(y[-1]) + beta_1 * np.square(sig[-1])
        )
    )
    y.append(np.random.normal(mu, sig[-1]))
sig = np.array(sig)
y = np.array(y)
```
which draws data y from a normal distribution with mean mu and heteroscedastic standard deviation sig.
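Written out, the recursion being simulated is (with $\sigma_t$ the standard deviation at step $t$):

$$
y_t \sim \mathcal{N}(\mu,\ \sigma_t^2), \qquad
\sigma_t^2 = \omega + \alpha_1\, y_{t-1}^2 + \beta_1\, \sigma_{t-1}^2, \qquad \sigma_0 = 1
$$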
The model I use (with scan) to recover the parameters is as follows:
```python
import pymc as pm
import pytensor
import pytensor.tensor as at

with pm.Model() as garch_nzm:
    # GARCH std
    mu = pm.Normal('mu', 1.0, 0.1)
    alpha0 = pm.Uniform('alpha0', 0, 1)
    alpha1 = pm.Uniform('alpha1', 0, 1)
    beta1 = pm.Uniform('beta1', 0, (1 - alpha1))

    obs_mean = y

    def step(PREV_Y, PREV_SIGMA, OMEGA, ALPHA1, BETA1):
        NEW_SIGMA = at.sqrt(
            OMEGA + ALPHA1 * at.square(PREV_Y) + BETA1 * at.square(PREV_SIGMA)
        )
        return NEW_SIGMA

    garch_all, updates = pytensor.scan(
        fn=step,
        sequences=[obs_mean],
        outputs_info=[at.ones_like(obs_mean, dtype=np.float64)],
        non_sequences=[alpha0, alpha1, beta1],
        strict=True,
    )
    garch_std = pm.Deterministic("garch_std", at.diag(garch_all))

    lik = pm.Normal(
        "kr_lik", mu=mu * at.ones_like(y), sigma=garch_std, observed=y
    )
    idata = pm.sample()
```
which doesn't recover parameter values particularly close to the true ones, although it does seem to recover the data.
I also worry about the memory use of the scan, since the vector-valued carry makes it build a full N×N matrix. Any suggestions? @ricardoV94 ?
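For what it's worth, here is a pure-NumPy mock-up (hypothetical names, not the PyMC model itself) of what I believe the scan is computing: since the initial carry is a constant vector and each y[t] is a scalar that broadcasts, every component of the carry evolves identically, so the diagonal of the N×N output is just the plain scalar recursion.

```python
import numpy as np

N = 25
rng = np.random.default_rng(0)
omega, alpha1, beta1 = 0.5, 0.8, 0.15
y = rng.normal(1.5, 1.0, size=N)

# Vector carry (mimics outputs_info=at.ones_like(y)): each step maps a
# length-N carry to a length-N row, so stacking the rows gives N x N.
carry = np.ones(N)
rows = []
for y_t in y:
    carry = np.sqrt(omega + alpha1 * y_t**2 + beta1 * carry**2)
    rows.append(carry)
garch_all = np.stack(rows)

# Scalar carry: identical recursion with a scalar initial state, so the
# output is a length-N vector and no diag() is needed afterwards.
s = 1.0
garch_std = []
for y_t in y:
    s = np.sqrt(omega + alpha1 * y_t**2 + beta1 * s**2)
    garch_std.append(s)
garch_std = np.array(garch_std)

# The diagonal of the matrix version matches the scalar recursion.
assert np.allclose(np.diag(garch_all), garch_std)
```

So if a scalar initial state can be passed to outputs_info, the scan should be able to return the length-N vector directly instead of the matrix.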
