P_waic of az.waic does not change proportionally with number of trials

I have a simple inference example: multiple coin flips for multiple agents. For reproducibility, I'm using p = 11% as the probability of heads for all agents:

import pymc as pm
import arviz as az
import numpy as np

num_trials = 500
num_agents = 60
true_p_heads = np.array([0.11]*num_agents)

observed_heads = np.random.binomial(n=num_trials, p=true_p_heads)  # Simulated observed heads

with pm.Model() as hierarchical_model:
    # Hyperpriors for the group parameters
    alpha = pm.Gamma('alpha', alpha=1, beta=1)
    beta = pm.Gamma('beta', alpha=1, beta=1)
    
    # Individual participant probabilities of heads
    p_heads = pm.Beta('p_heads', alpha=alpha, beta=beta, shape=num_agents)
    
    # Likelihood of observed data
    observations = pm.Binomial('observations', n=num_trials, p=p_heads, observed=observed_heads, shape=num_agents)
    
    # Posterior sampling
    trace = pm.sample(chains=4, draws=2_000, tune=500)
    pm.compute_log_likelihood(trace)
    
az.summary(trace)
az.waic(trace)

No matter how often I repeat the inference, p_waic is around 20 for num_trials=50, around 27 for num_trials=500, and around 30 for num_trials=5000. This is weird, since according to the book Bayesian Data Analysis by Gelman et al., p_waic is the result of a sum over all trials,

p_{\text{waic}} = \sum_{i=1}^{n} \operatorname{var}_{\text{post}}\left(\log p(y_i \mid \theta)\right),

and should therefore be (more or less) proportional to the number of trials, if I understand correctly. I do get the warning "There has been a warning during the calculation. Please check the results.", but since it's not very specific I have no idea what it's trying to tell me, or whether that is in fact the reason for the discrepancy.

Am I missing something?

I’m using
pymc 5.10.0
arviz 0.16.1

Thank you.

Maybe @OriolAbril can help here

A sum over n terms doesn't need to be proportional to n at all. Take for example

\sum_{i=1}^{n} \frac{1}{n} = 1,

which is constant for every n.

p_waic is in fact similar to this: for simple models it can be interpreted as the number of parameters, so it is also basically constant in n.
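You can check this numerically without MCMC. The sketch below is my own illustration, not your hierarchical model: it gives each agent an independent conjugate Beta(1, 1) posterior (so exact posterior draws are available) and uses a hand-rolled `p_waic` helper implementing the sum-of-posterior-variances definition from BDA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_waic(log_lik):
    # BDA definition: sum over observations of the posterior
    # variance of the pointwise log-likelihood.
    # log_lik has shape (draws, observations).
    return log_lik.var(axis=0, ddof=1).sum()

num_agents = 60
results = {}
for num_trials in (50, 500, 5000):
    y = rng.binomial(num_trials, 0.11, size=num_agents)
    # Beta(1, 1) prior + Binomial likelihood is conjugate, so we can
    # draw exact posterior samples for each agent instead of sampling.
    p_draws = stats.beta.rvs(1 + y, 1 + num_trials - y,
                             size=(4000, num_agents), random_state=rng)
    log_lik = stats.binom.logpmf(y, num_trials, p_draws)
    results[num_trials] = p_waic(log_lik)
    print(num_trials, round(results[num_trials], 1))
```

In this sketch there is one free parameter per agent but only one observed count per parameter, so p_waic stays in the same ballpark across a 100x change in num_trials instead of scaling with it.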

In more complex models like hierarchical ones (which is your case) the interpretation can be extended to the effective number of parameters. Thus, it will be something smaller than the number of parameters in the model, and it will depend on both the model itself and the data. In your case the number of parameters is num_agents + 2, but the values of p_heads are not independent of one another, so they don't "count" as num_agents parameters but as fewer.

How much fewer is exactly what p_waic quantifies, but it is generally hard to visualize. An example that might help: suppose you had

mu = pm.Normal("mu", 0, 1)
sigma = pm.HalfNormal("sigma", 1)
theta = pm.Normal("theta", mu, sigma, dims="group")

If, for whatever reason, the posterior converged to sigma ~ 0, then theta wouldn't really be a parameter anymore but a constant, instead of contributing +number_of_groups as it would in an unpooled model.
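The two extremes can be sketched with the same conjugate shortcut (again my own illustration, not a PyMC model): complete pooling (the sigma ~ 0 case, one shared parameter) versus no pooling (one free parameter per group). A partially pooled model sits somewhere in between:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
num_agents, num_trials = 60, 500
y = rng.binomial(num_trials, 0.11, size=num_agents)

def p_waic(log_lik):
    # Sum over observations of the posterior variance of the
    # pointwise log-likelihood.
    return log_lik.var(axis=0, ddof=1).sum()

# Complete pooling: a single shared p (the sigma ~ 0 extreme).
p_shared = stats.beta.rvs(1 + y.sum(), 1 + num_agents * num_trials - y.sum(),
                          size=(4000, 1), random_state=rng)
pooled = p_waic(stats.binom.logpmf(y, num_trials, p_shared))

# No pooling: an independent p per agent (the sigma -> infinity extreme).
p_indep = stats.beta.rvs(1 + y, 1 + num_trials - y,
                         size=(4000, num_agents), random_state=rng)
unpooled = p_waic(stats.binom.logpmf(y, num_trials, p_indep))

print(round(pooled, 1), round(unpooled, 1))  # pooled is far smaller
```

Your hierarchical model's p_waic lands between these two numbers, and it drifts toward the unpooled end as num_trials grows (more data per agent means less shrinkage), which is consistent with the 20, 27, 30 pattern you observed.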
