Hi all,
I’m trying to build a model that uses two `GaussianRandomWalk` distributions:

- One to model a time-varying `log_sigma`
- Another to model a latent state (`predictors_noise`) whose innovation scale (`sigma`) is defined as `softplus(log_sigma)`
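In other words, the intended generative process (writing $\ell_t$ for `log_sigma` and $x_{t,n}$ for `predictors_noise`) is:

$$
\begin{aligned}
\ell_t &= \ell_{t-1} + \eta_t, & \eta_t &\sim \mathcal{N}(0,\, 0.1^2) \\
\sigma_t &= \operatorname{softplus}(\ell_t) \\
x_{t,n} &= x_{t-1,n} + \varepsilon_{t,n}, & \varepsilon_{t,n} &\sim \mathcal{N}(0,\, \sigma_t^2)
\end{aligned}
$$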
However, when I try to use the output of one `GaussianRandomWalk` as the `sigma` for another, I get shape errors, even though the dimensions seem to align as expected. Below is a minimal example that reproduces the issue:
```python
import pymc as pm
import pytensor.tensor as pt
import numpy as np

T = 100
N = 2

predictors_noise_0 = np.zeros(N)
initval_array = np.zeros((T, N))
initval_array[0, :] = predictors_noise_0

with pm.Model() as model:
    log_sigma = pm.GaussianRandomWalk("log_sigma", mu=0.0, sigma=0.1, shape=T)
    sigma = pt.softplus(log_sigma)  # ensures positivity
    sigma_matrix = pt.repeat(sigma[:, None], N, axis=1)  # (T,) -> (T, N)
    predictors_noise = pm.GaussianRandomWalk(
        "predictors_noise", mu=0.0, sigma=sigma_matrix, shape=(T, N)
    )
    idata = pm.sample(
        draws=100, tune=100, chains=2, cores=2,
        initvals={"predictors_noise": initval_array},
    )
```
When `log_sigma` has `shape=T`:

```
ValueError: Alloc static input type and target shape are incompatible: Matrix(float32, shape=(100, 2)) vs (1, 100)
```

When `log_sigma` has `shape=T - 1`:

```
ValueError: Alloc static input type and target shape are incompatible: Matrix(float32, shape=(99, 2)) vs (1, 100)
```
Am I doing something wrong here? Is there a workaround or a recommended way to achieve this kind of structure, where the innovation scale `sigma_t` is itself driven by a random walk?
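For what it's worth, the closest alternative I can think of is to build the second walk by hand as a non-centered cumulative sum, scaling unit-Normal innovations by `sigma` before accumulating. This is just a sketch of the idea (untested, and it bypasses `GaussianRandomWalk` entirely):

```python
with pm.Model() as manual_model:
    log_sigma = pm.GaussianRandomWalk("log_sigma", mu=0.0, sigma=0.1, shape=T)
    sigma = pt.softplus(log_sigma)  # (T,) time-varying innovation scale

    # unit-Normal innovations, scaled per time step, then accumulated over time
    innovations = pm.Normal("innovations", mu=0.0, sigma=1.0, shape=(T, N))
    predictors_noise = pm.Deterministic(
        "predictors_noise", pt.cumsum(innovations * sigma[:, None], axis=0)
    )
```

Is something like that the recommended pattern, or is there a way to make the built-in distribution accept a time-varying `sigma` directly?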
Thank you