Memory mapping `arviz.InferenceData`

It is equivalent to the 2nd case, but instead of a single array with that shape you have multiple arrays.

In your case, alpha, beta, omega, sigma and tau have shape (chain, draw); epsilon_alpha_g, alpha_g, epsilon_beta_g and beta_b have shape (chain, draw, group); and mu has shape (chain, draw, n_observations). Thus, your posterior is equivalent to an array of floats with shape (chain, draw, 5 + 4 * n_groups + n_observations). The log prior will not contain the deterministics, but the variables it does contain have the same shape as in the posterior. Therefore, in your case (unless you drop all deterministics) it wouldn't quite double the memory needed; it would be equivalent to an array of shape (chain, draw, 5 + 2 * n_groups).
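To make that bookkeeping concrete, here is a minimal sketch of counting how many floats a group stores per (chain, draw) pair. It assumes an `InferenceData` object named `idata` (a placeholder, not something from your code) and relies only on the fact that each group is an xarray Dataset:

```python
import numpy as np

def per_draw_size(group):
    """Floats stored per (chain, draw) pair, summed over all variables in a group."""
    total = 0
    for da in group.data_vars.values():
        # Every variable shares the (chain, draw) dims; the remaining dims
        # form its "event" shape (e.g. group or n_observations).
        event_sizes = [size for dim, size in da.sizes.items() if dim not in ("chain", "draw")]
        total += int(np.prod(event_sizes))  # np.prod([]) == 1, so scalars count as 1
    return total

# With 5 scalar variables, 4 group-shaped ones and one observation-shaped
# deterministic, this returns 5 + 4 * n_groups + n_observations:
# per_draw_size(idata.posterior)
```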

You only have a single observed variable, y_obs, so your log_likelihood and posterior_predictive groups will each be a single array with shape (chain, draw, n_observations), and you'd be in the case where the posterior needs more memory than the log_likelihood group. Once you stop storing mu, though, this is inverted.
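Here is a rough, self-contained sketch of that comparison using toy data (the group sizes and draw counts are made up for illustration; the variable names mirror your model):

```python
import numpy as np
import arviz as az

# Toy stand-in for the real model: scalar vars, a group-shaped var,
# an observation-shaped deterministic, and a matching log_likelihood group.
rng = np.random.default_rng(0)
chains, draws, n_groups, n_obs = 4, 1000, 10, 500
idata = az.from_dict(
    posterior={
        "alpha": rng.normal(size=(chains, draws)),
        "sigma": rng.normal(size=(chains, draws)),
        "alpha_g": rng.normal(size=(chains, draws, n_groups)),
        "mu": rng.normal(size=(chains, draws, n_obs)),
    },
    log_likelihood={"y_obs": rng.normal(size=(chains, draws, n_obs))},
)

def group_mb(ds):
    """Total size of one InferenceData group (an xarray Dataset) in megabytes."""
    return sum(da.nbytes for da in ds.data_vars.values()) / 1e6

print(f"posterior:        {group_mb(idata.posterior):.1f} MB")
print(f"log_likelihood:   {group_mb(idata.log_likelihood):.1f} MB")

# Dropping the deterministic mu removes the (chain, draw, n_observations)
# block from the posterior, which is what flips the comparison:
posterior_no_mu = idata.posterior.drop_vars("mu")
print(f"posterior w/o mu: {group_mb(posterior_no_mu):.1f} MB")
```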
