In the doc above, the second part is a Poisson mixture, and the last cell plots the mixture components, each a Poisson distribution. But you are right: the result is plotted by forward simulation using numpy, as we don't have an easy function to reconstruct mixture components yet. Contributions are always welcome if you think the docs are unclear or a feature is missing.
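For context, that forward-simulation approach can be sketched in plain numpy: draw a component index per sample according to the weights, then draw from that component. The weights and component parameters here are illustrative, chosen to match the toy model further down:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
w = np.array([0.5, 0.5])                        # mixture weights

# Pick a component index for each draw, then sample from that component
which = rng.choice(2, size=n, p=w)
draws = np.where(which == 0,
                 rng.normal(0., 1., size=n),    # Normal(0, 1) component
                 rng.uniform(-5., 5., size=n))  # Uniform(-5, 5) component

# A density histogram of `draws` then approximates the mixture density:
# plt.hist(draws, bins=100, density=True)
```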
For now, the workaround to plot each component is a bit involved. Say you have a model like the one below:
import numpy as np
import matplotlib.pyplot as plt
import theano
import theano.tensor as tt
import pymc3 as pm

sample = np.hstack([np.random.randn(100), np.random.rand(100)])
with pm.Model() as m:
    mu = pm.Normal('mu')
    sd = pm.HalfNormal('sd', 1)
    condition = pm.Normal.dist(mu, sd)
    rn = pm.Uniform.dist(-5., 5.)
    sampling_dist = pm.Mixture(
        "sampling_dist",
        w=[0.5, 0.5],
        comp_dists=[condition, rn],
        observed=sample)
Now the logp of sampling_dist is not really what you want, as it is conditioned on the observations: it takes the free parameters mu and sd as input and outputs the sum of the logp of the observations conditioned on that input.
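To see the distinction concretely, here is what that conditioned logp computes, sketched with scipy rather than the actual PyMC3 internals (the data and parameter values are illustrative, mirroring the toy model above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = np.hstack([rng.normal(size=100), rng.uniform(size=100)])

def observed_logp(mu, sd):
    # Per-observation mixture density, summed in log space: the result is
    # one scalar as a function of the free parameters, not a curve over x.
    dens = (0.5 * stats.norm.pdf(sample, loc=mu, scale=sd)
            + 0.5 * stats.uniform.pdf(sample, loc=-5., scale=10.))
    return np.log(dens).sum()

total = observed_logp(0., 1.)  # a single number, nothing you can plot vs. x
```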
So you need the raw logp function to evaluate on a grid of points:
x = np.linspace(-10, 10, 100)
py_ = tt.sum(
    sampling_dist.distribution.w * tt.exp(
        sampling_dist.distribution._comp_logp(x)),
    axis=1)
However, you cannot display py_ directly in most cases, as it is a tensor; and evaluating it using .eval()
will give you an error, as you need to provide the inputs (in this case mu and sd). So you need to compile it into a theano function:
f = theano.function(m.free_RVs, py_)
To check what inputs it accepts, do:
m.free_RVs
# [mu, sd_log__]
So to evaluate the function, you provide the required inputs in that order (note that sd is on the log scale):
py = f(0., np.log(1.)) # mu=0 and sd=1
plt.plot(x, py);
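As a sanity check (and to plot the weighted components individually), you can compute the same curve analytically with scipy, since this toy model's components have closed-form densities. The parameter values below are illustrative, matching the evaluation above with mu=0 and sd=1:

```python
import numpy as np
from scipy import stats

x = np.linspace(-10, 10, 100)
w = np.array([0.5, 0.5])

# Each column is one component's density on the grid
comp_pdfs = np.stack([stats.norm.pdf(x, loc=0., scale=1.),
                      stats.uniform.pdf(x, loc=-5., scale=10.)], axis=1)
mix_pdf = comp_pdfs @ w   # weighted sum over components

# plt.plot(x, comp_pdfs * w)      # the weighted components
# plt.plot(x, mix_pdf, 'k--')     # their sum, the mixture density
```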
Hopefully this clears it up a bit for you.