# Solving for multiple Poisson sources

I’m trying to infer the 2D locations and strengths of multiple Poisson point sources from many observations. I’m not fully familiar with the standard terminology, but I’ve looked into mixture models and Poisson regression for this. Here is a quick MWE.

I create a ‘true’ detector path and set of sources:

``````python
import numpy as np

nposes = 100  # number of measurements
s = np.arange(nposes)
x = 6.0 * np.cos(2*np.pi * 0.07 * s)  # detector x positions
y = 6.0 * np.sin(2*np.pi * 0.05 * s)  # detector y positions
t = np.ones(nposes) * 0.25  # measurement time per pose
x0_true = np.array([0.1, 3.5])  # source x positions
y0_true = np.array([-2.5, -4.5])  # source y positions
A_true = np.array([100, 120])  # source strengths (uCi)
eta_true = 0.20  # detector efficiency
counts_mean = forward_project(A_true, t, x, y, x0_true, y0_true, eta_true)  # Poisson mean counts
counts_obs = np.random.poisson(counts_mean)  # observed data
``````

Here, the `forward_project` function is just the inverse-square (1/r²) falloff combined with detector efficiency, measurement time, and a unit conversion:

``````python
def forward_project(A, t, x, y, x0, y0, eta):
    '''Forward project counts from sources at x0, y0 to detectors at x, y.'''
    t = t[:, None]
    dx = x[:, None] - x0[None, :]  # (nposes, nsources)
    dy = y[:, None] - y0[None, :]
    r2 = dx**2 + dy**2
    c = A * 37e3 * eta * t / (4*np.pi*r2)  # 37e3 Bq / 1 uCi
    return c.sum(axis=1)  # total counts per pose
``````
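As a quick sanity check on the forward model (with hypothetical numbers, just to confirm the broadcasting and the inverse-square falloff):

``````python
import numpy as np

def forward_project(A, t, x, y, x0, y0, eta):
    '''Copy of the forward model above, for a self-contained check.'''
    t = t[:, None]
    dx = x[:, None] - x0[None, :]
    dy = y[:, None] - y0[None, :]
    r2 = dx**2 + dy**2
    c = A * 37e3 * eta * t / (4*np.pi*r2)
    return c.sum(axis=1)

# one source at the origin, two poses at r=1 and r=2
A = np.array([100.0])
t = np.ones(2) * 0.25
x = np.array([1.0, 2.0])
y = np.zeros(2)
mu = forward_project(A, t, x, y, np.array([0.0]), np.array([0.0]), 0.20)
print(mu[0] / mu[1])  # inverse-square: ratio is 4.0
``````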

As a new user I apparently can’t embed the counts-and-trajectory figure, but it’s easy to confirm that this produces a Lissajous pattern in x and y, and a counts-vs-pose plot with a handful of spikes in the O(10k)-count range.
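For anyone reproducing this, a minimal matplotlib sketch of the two plots described above (the filename and stand-in counts are arbitrary; in practice use `counts_obs` from the MWE):

``````python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, just for saving to file
import matplotlib.pyplot as plt

nposes = 100
s = np.arange(nposes)
x = 6.0 * np.cos(2*np.pi * 0.07 * s)
y = 6.0 * np.sin(2*np.pi * 0.05 * s)
# stand-in for counts_obs from the MWE above
rng = np.random.default_rng(0)
counts_obs = rng.poisson(100, size=nposes)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(x, y, '.-')     # Lissajous detector trajectory
ax1.set_xlabel('x'); ax1.set_ylabel('y')
ax2.plot(s, counts_obs)  # counts vs pose
ax2.set_xlabel('pose'); ax2.set_ylabel('counts')
fig.savefig('trajectory_counts.png')
``````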

Then I define my model:

``````python
import pymc3 as pm
import pymc3.distributions.transforms as tr

with pm.Model() as model:
    # priors, ordered to avoid label switching (see notes below)
    x0 = pm.Uniform(
        'x0', -10, 10, shape=x0_true.shape, transform=tr.ordered, testval=np.array([0, 1])
    )
    y0 = pm.Uniform(
        'y0', -10, 10, shape=y0_true.shape, transform=tr.ordered, testval=np.array([0, 1])
    )
    A = pm.TruncatedNormal(
        'A', mu=110, sigma=30, lower=0, upper=200, shape=A_true.shape,
        transform=tr.ordered, testval=np.array([105, 115])
    )

    # deterministic forward model
    mu = forward_project(A, t, x, y, x0, y0, eta_true)

    # Poisson likelihood for n counts given mean vector mu
    n = pm.Poisson('n', mu=mu, observed=counts_obs)
``````

and sample it like so:

``````python
with model:
    trace = pm.sample(
        20000, tune=10000, cores=2, target_accept=0.90,
        random_seed=999, init="adapt_diag",
    )
``````

But the traceplot results are pretty terrible: the inferred `x0`, `y0`, and `A` are all incorrect.

I can confirm that this works perfectly well when I use only one source in both the true and inferred models, e.g., just the second source. Again, I can’t embed the plot, but the results are good, though perhaps slightly biased, and the MCMC forward-projected counts agree quite well with the observed counts.
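The counts agreement can also be quantified without plots, e.g. with standardized Pearson residuals of the observed counts against the forward projection. A sketch with stand-in numbers (in practice `mu_post` would be the posterior-mean forward projection):

``````python
import numpy as np

# stand-in values for illustration only
rng = np.random.default_rng(1)
mu_post = np.full(100, 50.0)
counts_obs = rng.poisson(mu_post)

# Pearson residuals: approximately N(0, 1) if the Poisson model fits
resid = (counts_obs - mu_post) / np.sqrt(mu_post)
print(resid.mean(), resid.std())  # should be near 0 and 1
``````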

So it seems to be the addition of the second source that is ruining the results. Does anyone have any tips on how to improve things?

A few other notes:

• this is very much a work in progress, so I’m open to changing priors, efficiency tips, etc
• I have played around with varying the number of samples, to not much avail
• I’ve used the `tr.ordered` transform to avoid the ‘label-switching’ behaviour I observed earlier. This also required `init="adapt_diag"` to work
• the `testval` values are arbitrary but are required by `tr.ordered`
• The `A` prior is a `TruncatedNormal` as I earlier had some issues combining a `Normal` prior with a non-negative `Poisson` likelihood
• I also tried the long way of defining multiple scalar values, one for each source, e.g. `A_a`, `A_b`, and same for the coordinates, to avoid any shape issues. This also had similar convergence issues.
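To illustrate why the ordering matters: the likelihood is invariant under permuting the sources, so without an ordering constraint the posterior has (at least) two symmetric modes. A quick numpy check, re-using the forward model from above:

``````python
import numpy as np

def forward_project(A, t, x, y, x0, y0, eta):
    '''Copy of the forward model above.'''
    t = t[:, None]
    dx = x[:, None] - x0[None, :]
    dy = y[:, None] - y0[None, :]
    r2 = dx**2 + dy**2
    c = A * 37e3 * eta * t / (4*np.pi*r2)
    return c.sum(axis=1)

s = np.arange(100)
x = 6.0 * np.cos(2*np.pi * 0.07 * s)
y = 6.0 * np.sin(2*np.pi * 0.05 * s)
t = np.ones(100) * 0.25

# original ordering vs sources swapped: identical mean counts,
# hence identical Poisson likelihood -> symmetric posterior modes
mu_ab = forward_project(np.array([100, 120]), t, x, y,
                        np.array([0.1, 3.5]), np.array([-2.5, -4.5]), 0.20)
mu_ba = forward_project(np.array([120, 100]), t, x, y,
                        np.array([3.5, 0.1]), np.array([-4.5, -2.5]), 0.20)
print(np.allclose(mu_ab, mu_ba))  # True
``````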

Results are substantially improved by applying `tr.ordered` to only one variable, e.g. `x0`. However, in the future I may want to model, say, a line of sources that all share the same `x0`. Is there a more general way to order my variables?