I have the following model:

```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt

# sho (2500x4 data array), state, n_states, n_sho_vars and y are defined earlier
with pm.Model() as model:
    lam = pm.Normal(name="lam", mu=1, sigma=1)
    alpha = pm.Beta(name="alpha", alpha=1.5, beta=1.5)
    beta = pm.Beta(name="beta", alpha=1.5, beta=1.5)

    # Adjust sho depending on sign: positive values are raised to alpha,
    # negative values are scaled by -lam with their magnitude raised to beta
    sho_w = tt.switch(tt.ge(sho, 0), sho ** alpha, (-1 * lam) * ((-1 * sho) ** beta))

    # Standardize
    sho_w_s = (sho_w - sho_w.mean()) / sho_w.std()

    # State-specific coefficients
    B = pm.Normal(name="B", mu=0, sigma=0.5, shape=(n_states, n_sho_vars))
    B_s = B[state, np.arange(0, n_sho_vars)]

    # Model error
    eps = pm.InverseGamma(name="eps", alpha=9.0, beta=4.0)

    # Model mean
    y_hat = pm.Deterministic(name="y_hat", var=pm.math.sum(sho_w_s * B_s, axis=1))

    # Model likelihood
    y_like = pm.Normal(name="y_like", mu=y_hat, sigma=eps, observed=y)
```
I am particularly interested in the values of `lam`, `alpha`, and `beta`.
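For reference, I sample and inspect the posteriors roughly like this (a minimal sketch; my actual settings may differ):

```python
import arviz as az

with model:
    trace = pm.sample(2000, tune=2000, return_inferencedata=True)

# posterior summaries for the parameters of interest
print(az.summary(trace, var_names=["lam", "alpha", "beta"]))
```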
I use `tt.switch` to elementwise transform the 2500x4 variable `sho` depending on whether its value is positive or negative: positive values are raised to the power of `alpha`, while negative values are multiplied by `-lam` and raised (in absolute value) to the power of `beta`.
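To make the intended transform concrete, here is a small NumPy sketch with made-up values (the actual `alpha`, `beta` and `lam` are of course learned):

```python
import numpy as np

sho_ex = np.array([2.0, -3.0, 0.5, -0.25])  # made-up example values
alpha_ex, beta_ex, lam_ex = 0.8, 0.6, 1.2   # hypothetical parameter draws

pos = sho_ex >= 0
sho_w_ex = np.empty_like(sho_ex)
sho_w_ex[pos] = sho_ex[pos] ** alpha_ex                 # positive branch
sho_w_ex[~pos] = -lam_ex * (-sho_ex[~pos]) ** beta_ex   # negative ("else") branch
print(sho_w_ex)
```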
However, the results for the “else” part of `tt.switch` (i.e. the case where `sho` is negative) are very strange: `lam` is roughly zero and the posterior for `beta` is roughly equal to its prior, so it seems to me that the algorithm fully ignores the “else” branch.
The negative and positive values in the `sho` array are roughly evenly split.
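A quick way to verify that split (assuming `sho` is a NumPy array):

```python
frac_neg = (sho < 0).mean()  # fraction of negative entries
print(frac_neg)              # roughly 0.5 for my data
```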
Moreover, I am not sure whether the standardization statement standardizes the array column-wise or over the whole array.
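To clarify what I mean, in NumPy terms (and I assume Theano's `mean`/`std` follow the same axis semantics) the two options would be:

```python
import numpy as np

x = np.random.randn(2500, 4)

# over the whole array (what mean()/std() without an axis argument compute)
x_s_all = (x - x.mean()) / x.std()

# column-wise, which is what I intend
x_s_col = (x - x.mean(axis=0)) / x.std(axis=0)
```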
I am using PyMC3 3.11.4.
Thanks in advance