I have a model with four intermediate parameters: `mu_intercept`, `mu_slope`, `psi_intercept`, and `psi_slope`. Each of these is modelled as a sum of several factors; so, for example, `mu_intercept ~ baseline_mu_intercept + interaction_mu_intercept`, where `baseline_mu_intercept` etc. all need to be estimated.
So I have four intermediate parameters, each with a baseline (four baselines) and each with an interaction factor (four interactions). All eight parameters get normal priors, which I define in two lines using named dimensions:
```python
baseline = pm.Normal("baseline", mu=np.array([1., 2., 0., 0.]), sigma=0.5, dims="param")
interaction = pm.Normal("interaction", mu=0, sigma=0.5, dims=("factor", "param"))
```

with `coords={"param": ["mu_intercept", "mu_slope", "psi_intercept", "psi_slope"]}` (and some `factor` dimension).
Now I want to define my actual intermediate parameters:
```python
# "mu_intercept": 0, "mu_slope": 1, "psi_intercept": 2, "psi_slope": 3
mu_intercept = baseline[0] + interaction[:, 0]
mu_slope = baseline[1] + interaction[:, 1]
psi_intercept = baseline[2] + interaction[:, 2]
psi_slope = baseline[3] + interaction[:, 3]
```
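The closest workaround I've found is to derive the integer indices from the same list I pass to `coords`, so at least the hard-coded `0..3` can't silently drift out of sync with the labels. A minimal sketch (assuming `param_names` is the identical list used for the `"param"` coord; the PyMC lines are shown as comments since the indexing itself is plain Python):

```python
# Hypothetical sketch: build a name -> index lookup from the coord labels,
# instead of hard-coding integer positions.
param_names = ["mu_intercept", "mu_slope", "psi_intercept", "psi_slope"]
idx = {name: i for i, name in enumerate(param_names)}

# Then, inside the model, instead of baseline[0] + interaction[:, 0]:
# mu_intercept = baseline[idx["mu_intercept"]] + interaction[:, idx["mu_intercept"]]
# psi_slope    = baseline[idx["psi_slope"]]    + interaction[:, idx["psi_slope"]]
```

This still indexes by position under the hood, so it doesn't answer the question of whether label-based indexing is possible; it just keeps the mapping in one place.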
I couldn't find a way to rely on named dimensions here, because `baseline` and `interaction` are PyTensor variables, which don't seem to support named dimensions (named dimensions, as I understand it, come from xarray).
Is that correct? Is there no way to leverage named dimensions when defining intermediate variables, so that something like this works?

```python
mu_intercept = baseline["mu_intercept"] + interaction[:, "mu_intercept"]
```
If that's indeed not possible, it seems to put quite a big constraint on the usefulness of named dimensions.