Hierarchical mixture model with point-mass at zero

Hello everyone,

I am working on a meta-analysis of treatment effects that is similar in structure to the eight schools problem.

After my previous attempts, it seems to me that the distribution that best fits my data is a two-component mixture: one point-mass at (or around) zero and one normal distribution centered at (or around) zero. It is equivalent to equation (11) here.

To make things more concrete, below is a toy example where:
- each school's estimated treatment effect is the vector y
- each school's standard error of the effect estimate is the vector sigma
- each school's true treatment effect is drawn from an equal-weight two-component mixture: a point-mass at zero and a normal N(0, 1)
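In notation (with \(\delta_0\) the point-mass at zero, \(\tau\) the slab scale, and \(w\) the mixture weight; the symbol names are my own), the generative model above is:

```latex
\theta_j \sim w\,\delta_0 + (1 - w)\,\mathcal{N}(0, \tau^2),
\qquad
y_j \mid \theta_j \sim \mathcal{N}(\theta_j, \sigma_j^2)
```

In the toy example, \(w = 0.5\) and \(\tau = 1\), and the point-mass is approximated by a normal with scale 1e-10.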

import numpy as np
import pymc as pm
import arviz as az

n_draw = 50  # number of schools (arbitrary for the toy example)

# Simulate data: each true effect comes from a 50/50 mixture of a
# (near) point-mass at zero and a standard normal.
sigma = np.full((n_draw,), 1.0)
mu_draws = pm.draw(
    pm.NormalMixture.dist(w=[0.50, 0.50], mu=[0, 0], sigma=[1e-10, 1]),
    draws=n_draw,
)
y = np.random.normal(loc=mu_draws, scale=sigma)

with pm.Model() as model:
    σ = pm.HalfNormal("σ", sigma=np.array([1e-10, 1]), shape=2)
    w = pm.Dirichlet("w", a=np.ones(2))
    θ = pm.NormalMixture("θ", w=w, mu=np.zeros(2), sigma=σ, shape=n_draw)
    obs = pm.Normal("obs", mu=θ, sigma=sigma, observed=y)
    prior = pm.sample_prior_predictive(samples=10000)
    trace = pm.sample(tune=1000)
    posterior_predictive = pm.sample_posterior_predictive(trace)

On my toy example, I hit all the usual suspects: r-hat issues; “Initial evaluation of model at starting point failed!” if I add transform=pm.distributions.transforms.ordered; and, in my real example, zero probability mass allocated to the point-mass-at-zero mixture component.

So it seems to me that there is something high-level that I am getting totally wrong in the model formulation and/or the sampling.

Any help and feedback would thus be really appreciated!

Thanks a lot, and have a wonderful day.


You might find this thread of some use.


Thanks, that’s very helpful.