Hello, everyone.
I was trying to run some models to perform a kind of reconciliation over multiple levels. The idea is to have a base level (a.k.a. lvl3) that is Poisson distributed with a given mean (lvl3). This base level is then aggregated using the S matrix to generate the observed data of the next level up. The idea comes from this paper: https://arxiv.org/pdf/2207.09322.pdf. The implementation follows:
```python
import numpy as np
import pymc as pm

lvl3 = np.array([0, 1, 1, 0, 1, 1, 2, 1])
lvl2 = np.array([1, 1, 2, 3])
lvl1 = np.array([4, 6])
lvl0 = np.array([10])

S_2 = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 1, 1]])
S_1 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1]])
S_0 = np.array([1, 1])

with pm.Model() as model:
    l3 = pm.Poisson('l3', mu=lvl3, shape=lvl3.shape)
    l2 = pm.Poisson('l2', mu=lvl2, shape=lvl2.shape, observed=S_2 @ l3)
    l1 = pm.Poisson('l1', mu=lvl1, shape=lvl1.shape, observed=S_1 @ l2)
    Y = pm.Poisson('Y', mu=lvl0[0], observed=S_0 @ l1)
    trace = pm.sample(draws=5000, tune=3000)
```

```python
>>> trace.posterior.l3.mean(['chain', 'draw']).values
array([0.     , 1.0997 , 1.078  , 0.     , 0.9671 , 1.0333 , 2.16785, 1.0635 ])
```
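As a quick sanity check on the summing matrices (plain NumPy, using the arrays above), each S maps one level onto the level above it by summing groups of children:

```python
import numpy as np

lvl3 = np.array([0, 1, 1, 0, 1, 1, 2, 1])
lvl2 = np.array([1, 1, 2, 3])
lvl1 = np.array([4, 6])
lvl0 = np.array([10])

S_2 = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 1, 1]])
S_0 = np.array([1, 1])

# S_2 sums adjacent pairs of the base level, reproducing lvl2:
print(S_2 @ lvl3)   # [1 1 2 3]
# S_0 sums the two lvl1 series into the total:
print(S_0 @ lvl1)   # 10
```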
Even though I can sample (and the results are fairly reasonable), it takes ages when we scale the base level to, say, one thousand items. So my first question is: is there a way to sample discrete models faster? I noticed that the external samplers do not support discrete models.
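As an aside, for the scaled-up case the summing matrices don't have to be written by hand. A construction sketch with `np.kron`, assuming (as above) that every node aggregates exactly two children:

```python
import numpy as np

# Hand-written version from above:
S_2 = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 1, 1]])

# With a fixed fan-out of 2, kron(I, ones) builds the same matrix:
S_2_kron = np.kron(np.eye(4), np.ones(2))
assert np.array_equal(S_2, S_2_kron)

# And it scales, e.g. 500 parents over a 1000-item base level:
S_big = np.kron(np.eye(500), np.ones(2))
print(S_big.shape)  # (500, 1000)
```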
A second approach that I tried was to use the following model:
```python
lvl3 = np.array([0, 1, 1, 0, 1, 1, 2, 1])
lvl2 = np.array([1, 1, 2, 3])
lvl1 = np.array([4, 6])
lvl0 = np.array([10])

S_2 = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 1, 1]])
S_1 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1]])
S_0 = np.array([1, 1])

with pm.Model() as model:
    l3 = pm.Poisson("l3", mu=lvl3, shape=lvl3.shape)
    l2 = pm.Potential("l2", pm.logp(pm.Poisson.dist(mu=lvl2, shape=lvl2.shape), S_2 @ l3))
    l1 = pm.Potential("l1", pm.logp(pm.Poisson.dist(mu=lvl1, shape=lvl1.shape), S_1 @ l2))
    Y = pm.Potential("Y", pm.logp(pm.Poisson.dist(mu=lvl0[0]), S_0 @ l1))
    posterior = pm.sample(draws=5000, tune=3000)
```
I also tried using an exponential distribution instead of a Poisson on l3 (mainly to check the speed); sampling became almost immediate using numpyro in parallel, but the results were odd. To check whether there was some caveat with this pm.Potential implementation, I tried running it with only Poissons, and it failed. The error follows:
```
SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'l3': array([0, 1, 1, 0, 1, 1, 2, 1])}

Logp initial evaluation results:
{'l3': -6.31, 'l2': -4.8, 'l1': -inf, 'Y': -inf}
You can call `model.debug()` for more details.
```
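My guess at what produces the -inf entries: pm.logp returns the log-probability terms themselves, so l2 here is a vector of (negative) logp values rather than counts, and S_1 @ l2 then feeds negative values into the next Poisson logp, which is -inf outside the non-negative integers. A minimal illustration with scipy (scipy.stats.poisson standing in for pm.Poisson.dist) even reproduces the reported 'l2': -4.8:

```python
import numpy as np
from scipy import stats

# The "l2" Potential evaluates Poisson logp terms, which are all negative:
logp_l2 = stats.poisson.logpmf([1, 1, 2, 3], mu=[1, 1, 2, 3])
print(logp_l2.sum())            # ~ -4.80, matching the error message

S_1 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1]])
value = S_1 @ logp_l2           # negative values, not counts

# Poisson logpmf is -inf outside the non-negative integers:
print(stats.poisson.logpmf(value, mu=[4, 6]))   # [-inf -inf]
```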
Then, my second question: is there an alternative approach that speeds up sampling while still giving a reasonable outcome?