Dirichlet with random shape


I’m toying with a Normal mixture model. The weights are Dirichlet:

import pymc3 as pm
import numpy as np
import theano.tensor as tt

W = np.array([0.35, 0.4, 0.25])

with pm.Model() as model:
    w = pm.Dirichlet('w', np.ones_like(W))
    mu = pm.Normal('mu', 0., 10., shape=W.size)
    tau = pm.Gamma('tau', 1., 1., shape=W.size)
    x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)  # x is my data

I am trying to change the code a bit so that, instead of fixing the shape of the Dirichlet, the shape is a random uniform variable, like this:

k = pm.DiscreteUniform('k', lower=2, upper=5)
c = tt.ones_like(k)
w = pm.Dirichlet('w', a=c)

but it is not working. The error is:

TypeError: Expected int elements in shape

I’d appreciate any help on how to go about this.
Many thanks

We do not allow dynamic shapes in a model, so currently this is impossible to do.
One potential workaround is to use a fixed-shape alpha whose shape equals the maximum (5 in this case), and dynamically assign zeros to some of the elements. But again, inference for these kinds of models is also very tricky.
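To sketch the masking idea outside of PyMC3 — in plain NumPy, with `K_MAX`, `k`, and the renormalization step as illustrative assumptions rather than anything PyMC3 does for you:

```python
import numpy as np

rng = np.random.default_rng(0)

K_MAX = 5                          # fixed maximum number of components
k = 3                              # number of "active" components

# Draw a full-length weight vector, then zero out the inactive tail.
w_full = rng.dirichlet(np.ones(K_MAX))
mask = np.arange(K_MAX) < k        # [True, True, True, False, False]
w_masked = np.where(mask, w_full, 0.0)

# The surviving weights must be renormalized to sum to 1.
w_active = w_masked / w_masked.sum()

print(w_active)                    # last two entries are exactly 0
print(w_active.sum())              # 1.0 up to floating point
```

In the actual model the mask would be a random variable rather than a fixed boolean array, which is what makes inference tricky.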

Thanks for the response.
Following your suggestion to dynamically assign zeros I have this attempt:

with pm.Model() as model:
    k = 5
    arr = [1. for j in range(k)]  # Dirichlet concentration parameters
    w = pm.Dirichlet('w', a=tt.stack([arr[i] for i in range(k)]), shape=k)
    du = pm.DiscreteUniform('du', lower=0, upper=1, shape=k)
    mu = pm.Normal('mu', 0., 10., shape=k)
    tau = pm.Gamma('tau', 1., 1., shape=k)
    x_obs = pm.NormalMixture('x_obs', w, np.multiply(mu, du), tau=np.multiply(tau, du), observed=x)

Basically, what I’m doing is multiplying the elements of w, mu, and tau by 0 or 1; the 0s and 1s are those of the DiscreteUniform.

It’s not working. The error is (showing only the last fragment):

/usr/local/lib/python2.7/dist-packages/pymc3/step_methods/hmc/base_hmc.py in astep(self=<pymc3.step_methods.hmc.nuts.NUTS object>, q0=array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
   0., 0.]))
144                 self.iter_count,
145                 None,
146                 None,
147             )
148             self._warnings.append(warning)
--> 149             raise SamplingError("Bad initial energy")
151         adapt_step = self.tune and self.adapt_step_size
152         step_size = self.step_adapt.current(adapt_step)
153         self.step_size = step_size

SamplingError: Bad initial energy

I think the DiscreteUniform, in particular lower=0, is generating the error.

If I change the lower bound to something bigger than 0, it doesn’t throw the error, but after sampling all the du’s are 1, defeating the purpose.
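Computing the Normal log-density by hand seems to confirm this (`normal_logp` below is my own helper for illustration, not PyMC3 internals): a component whose precision has been multiplied by 0 contributes -inf to the model logp, which would explain the bad initial energy.

```python
import numpy as np

def normal_logp(x, mu, tau):
    # Log-density of a Normal parameterized by precision tau:
    # 0.5 * log(tau / (2*pi)) - tau * (x - mu)**2 / 2
    with np.errstate(divide="ignore"):
        return 0.5 * np.log(tau / (2 * np.pi)) - tau * (x - mu) ** 2 / 2

print(normal_logp(0.0, 0.0, 1.0))  # finite, about -0.919
print(normal_logp(0.0, 0.0, 0.0))  # -inf: tau = 0 kills the log-density
```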

How can I implement the idea correctly?

I have something that seemingly answers my previous question:

with pm.Model() as model:
    k = 5
    w = pm.Dirichlet('w', a=tt.stack([0.1, 0.1, 0.1, 0.1, 0.1]), shape=k)
    du = pm.DiscreteUniform('du', lower=0, upper=1, shape=k)
    idx_arr = []
    for i in range(k):
        if du[i]:
            idx_arr.append(i)

    mu = pm.Normal('mu', 0., 10., shape=k)
    tau = pm.Gamma('tau', 1., 1., shape=k)
    x_obs = pm.NormalMixture('x_obs', tt.stack([w[h] for h in idx_arr]),
                             tt.stack([mu[h] for h in idx_arr]),
                             tau=tt.stack([tau[h] for h in idx_arr]),
                             observed=x)

with model:
    trace = pm.sample(3000, n_init=10000, tune=500, random_seed=SEED)[1000:]

With respect to the first response, does this implement the suggestion to “dynamically assign zeros”?
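One thing I’m unsure about with this indexing approach: the selected entries of w no longer sum to 1, so they may need renormalizing before being passed as mixture weights. With a fixed, hypothetical selection (`idx_arr = [0, 2, 4]` below stands in for whatever du picks) this is easy to check in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

k = 5
w = rng.dirichlet(0.1 * np.ones(k))   # full weight vector, sums to 1
idx_arr = [0, 2, 4]                   # hypothetical indices picked by du

w_subset = w[idx_arr]
print(w_subset.sum())                 # generally < 1

w_subset = w_subset / w_subset.sum()  # renormalize
print(w_subset.sum())                 # 1.0 up to floating point
```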

Did this work for you @arraval - or did you find something else that worked? I’m facing the same issue…