Bayesian approach for positioning

Dear All,

I’ve been struggling to replicate the results from Bayesian Indoor Positioning Systems. To do so, I tried the approach previously discussed in Model multivariate normal with separate means, dimension mismatch error, since it is well known that the measurement vector in the Received Signal Strength (RSS) case follows a multivariate normal distribution with mean -\beta\log\left[ \sqrt{\left( x - x_i \right)^2 + \left( y - y_i \right)^2} \right] and covariance matrix \textrm{diag}\left( \sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2 \right), where \beta is the path-loss exponent (an environment-specific parameter), (x, y) is the unknown target location, and (x_i, y_i) is the known location of the i-th anchor.

The original reference by D. Madigan uses WinBUGS, and its corresponding plate notation is


where (X, Y) is the target position, D_i is the distance between the target and the i-th anchor node, b_{i0} and b_{i1} are hyperparameters, and \tau_i is the corresponding variance.

My code excerpt considering four anchors column-wise follows:

import numpy as np
import pymc3 as pm
import theano.tensor as tt
from theano import shared

with pm.Model() as pos_model:
    α = pm.Normal('α', mu=0, sd=100)
    β = pm.Normal('β', mu=0, sd=100)
    μ0 = α - β*tt.log(shared(d_hat[:, 0]))
    μ1 = α - β*tt.log(shared(d_hat[:, 1]))
    μ2 = α - β*tt.log(shared(d_hat[:, 2]))
    μ3 = α - β*tt.log(shared(d_hat[:, 3]))
    σ = pm.HalfCauchy('sigma', 1, shape=4)
    # build the diagonal covariance from σ so the noise scales are actually estimated
    # (np.identity(4) would fix the variances at 1 and leave σ unused)
    cov = tt.diag(σ**2)
    μ = tt.stack([μ0, μ1, μ2, μ3]).T
    joint_obs = pm.MvNormal('joint', mu=μ, cov=cov, observed=ρ_hat)

with pos_model:
    trace = pm.sample()

Please note that d_hat and ρ_hat correspond to simulated values and are used as the observed data.
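For context, d_hat and ρ_hat could be simulated along these lines; the anchor layout, true target position, and path-loss parameter values below are all made-up assumptions for illustration, and the sign convention matches the α - β·log(d) mean used in the code above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed setup: 4 anchors at the corners of a 100 m x 100 m area,
# a true target at (30, 70), and made-up path-loss parameters
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([30.0, 70.0])
alpha_true, beta_true, sigma_true = -40.0, 20.0, 2.0

n_obs = 200
# True target-to-anchor distances, tiled column-wise as in the model code
d_true = np.linalg.norm(anchors - target, axis=1)
d_hat = np.tile(d_true, (n_obs, 1))            # shape (n_obs, 4)

# RSS readings: log-distance path-loss mean plus Gaussian noise
ρ_hat = (alpha_true - beta_true * np.log(d_hat)
         + rng.normal(0.0, sigma_true, size=d_hat.shape))
```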

Even though this code works and allows me to estimate α and β, I can’t find a way of estimating the position of the unknown target node, namely (x, y). I’ve tried to use Deterministic, Potential and Stochastic classes (as in Observing functions of random variables in PyMC), but couldn’t work out the multivariate case.

Any suggestion is really appreciated.

What do you mean by unknown target node? I am not clear on where the observed (x, y) are modelled in `pos_model`.

Thanks a lot for promptly replying.

Sorry, it’s unclear, but that is exactly the point: I couldn’t connect the model to the implementation. I’ll try to elaborate. Actually, I intend to do something like this

X \thicksim \textsf{Uniform}(0, L)
Y \thicksim \textsf{Uniform}(0, W)
D_i = \sqrt{\left( X - x_i \right)^2 + \left( Y - y_i \right)^2}
\mu_i = \alpha + \beta \log(D_i)
\alpha \thicksim \textsf{Normal}(0, 100)
\beta \thicksim \textsf{Normal}(0, 100)
\sigma^2 \thicksim \textsf{HalfNormal}(10)
\rho_i \thicksim \textsf{Normal}(\mu_i, \sigma^2)

So that, inside the sampling algorithm, the updates over iterations would be (I guess)

\displaystyle X_{(k + 1)} \thicksim \Pr(X)\prod_{i=1}^N \Pr \left( \rho_i \vert X, Y_{(k)}, \sigma^2_{(k)}, \alpha_{(k)}, \beta_{(k)} \right)
\displaystyle Y_{(k + 1)} \thicksim \Pr(Y)\prod_{i=1}^N \Pr \left( \rho_i \vert Y, X_{(k + 1)}, \sigma^2_{(k)}, \alpha_{(k)}, \beta_{(k)} \right)

Then, the corresponding conditional densities (up to normalizing constants) would be written as
\displaystyle X_{(k + 1)} \thicksim \exp\left\{-\frac{\sum_{i=1}^N \left[ \rho_i - \alpha_{(k)} - \beta_{(k)} \log(D_i)\right]^2}{2 \sigma^2_{(k)}}\right\}, \text{with } D_i = \sqrt{\left( X - x_i \right)^2 + \left( Y_{(k)} - y_i \right)^2}
\displaystyle Y_{(k + 1)} \thicksim \exp\left\{-\frac{\sum_{i=1}^N \left[ \rho_i - \alpha_{(k)} - \beta_{(k)} \log(D_i)\right]^2}{2 \sigma^2_{(k)}}\right\}, \text{with } D_i = \sqrt{\left( X_{(k + 1)} - x_i \right)^2 + \left( Y - y_i \right)^2}
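This Metropolis-within-Gibbs scheme for (X, Y) can be sketched in plain NumPy. Everything below is an assumption made up for illustration: the anchors, the true position, and the current values of α, β, σ, which are held fixed for the sweep:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed anchors, area bounds, and current parameter values for this sweep
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
alpha_k, beta_k, sigma_k = -40.0, -20.0, 2.0
L = W = 100.0

# One simulated RSS reading per anchor from an assumed true position (35, 60)
true_pos = np.array([35.0, 60.0])
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rho = alpha_k + beta_k * np.log(d_true) + rng.normal(0.0, sigma_k, 4)

def log_cond(x, y):
    """Log of the (X, Y) conditional density, up to an additive constant."""
    d = np.sqrt((x - anchors[:, 0])**2 + (y - anchors[:, 1])**2)
    resid = rho - alpha_k - beta_k * np.log(d)
    return -0.5 * np.sum(resid**2) / sigma_k**2

def gibbs_sweep(x, y, step=5.0, iters=5000):
    """Alternate random-walk Metropolis updates of X | Y and Y | X."""
    xs, ys = np.empty(iters), np.empty(iters)
    lp = log_cond(x, y)
    for k in range(iters):
        # X | Y update, rejecting proposals outside the Uniform(0, L) support
        x_prop = x + rng.normal(0.0, step)
        if 0.0 <= x_prop <= L:
            lp_prop = log_cond(x_prop, y)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = x_prop, lp_prop
        # Y | X update, using the freshly updated X
        y_prop = y + rng.normal(0.0, step)
        if 0.0 <= y_prop <= W:
            lp_prop = log_cond(x, y_prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                y, lp = y_prop, lp_prop
        xs[k], ys[k] = x, y
    return xs, ys

xs, ys = gibbs_sweep(50.0, 50.0)
# posterior-mean estimate after discarding the first half as burn-in
x_est, y_est = xs[2500:].mean(), ys[2500:].mean()
```

Starting from the prior mean (50, 50), the chain should drift toward the true position, which is one way to check that the likelihood, and not just the Uniform prior, is driving the estimate.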

I tried to implement that as a multilevel model (as discussed in A Primer on Bayesian Methods for Multilevel Modeling), but it seems to be stuck at the priors, because I always get (50, 50) as the estimate of (X, Y) when considering L = 100 m and W = 100 m.

Hmm, you seem to be describing some kind of Bayesian filter, which we don’t currently have any examples of. You might find https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python helpful.

Hi,

Thanks a lot for your quick response, and for pointing me in the right direction.