Thank you for the reply.
In my code I have two sensors, where S2 is the weak sensor and S1 is the strong sensor. Since I am working with 1D data, my setup is the same case as the Kalman filter in 1 dimension.
So, I created the Bayesian fusion to follow the steps below:
- Get the first data of S2 and obtain the trace of the priors ('Designing the Bayesian PDF for initial Sensor 2:').
- Use the prior trace to create a predictive prior based on the next data of S2 ('Designing the Bayesian PDF for predictive Sensor 2:').
- Update the distribution by feeding the predictive prior from the previous step into S1, and obtain the corrected posterior distribution ('Designing the Bayesian PDF for correction Sensor 1:').
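In closed form, the update step above is just the product of two 1D Gaussians (precision-weighted mean), which is what the 1D Kalman correction computes. A minimal NumPy sketch, with assumed example values for the two sensors (the function name and numbers are illustrative, not from my actual code):

```python
import numpy as np

def gaussian_fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two 1D Gaussian estimates: precision-weighted mean, combined variance."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

# Assumed values: S2 (weak, high variance) supplies the prior,
# S1 (strong, low variance) supplies the measurement.
prior_mean, prior_var = 10.0, 4.0   # from Sensor 2
meas_mean, meas_var = 12.0, 1.0     # from Sensor 1

post_mean, post_var = gaussian_fuse(prior_mean, prior_var, meas_mean, meas_var)
print(post_mean, post_var)  # posterior mean 11.6, variance 0.8
```

The posterior ends up closer to the strong sensor and has lower variance than either input, which is the behaviour I expect the PyMC3 version to reproduce.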
I created this code in the same style as the Kalman filter in 1 dimension, and adapted the approach to fit the Bayesian package (PyMC3).
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm  # mlab.normpdf was removed from matplotlib

mean = mean0
var = var0
plt.figure(figsize=(fw, 5))
for m in range(len(positions)):
    # Predict
    var, mean = predict(var, mean, varMove, distances[m])
    # print('mean: %.2f\tvar: %.2f' % (mean, var))
    # norm.pdf expects the standard deviation, not the variance
    plt.plot(x, norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Prediction)' % (m + 1))
    # Correct
    var, mean = correct(var, mean, varSensor, positions[m])
    print('After correction: mean= %.2f\tvar= %.2f' % (mean, var))
    plt.plot(x, norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Correction)' % (m + 1))
plt.ylim(0, 0.1)
plt.xlim(-20, 120)
plt.legend()
```
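For completeness, the `predict` and `correct` helpers the loop calls are not shown above; in the 1D Kalman formulation they usually look like the sketch below (argument names match the loop, but treat this as an assumption about my helpers rather than their exact code):

```python
def predict(var, mean, varMove, displacement):
    """Motion step: shift the mean, inflate the variance by the process noise."""
    mean = mean + displacement
    var = var + varMove
    return var, mean

def correct(var, mean, varSensor, measurement):
    """Measurement step: blend prediction and measurement via the Kalman gain."""
    K = var / (var + varSensor)          # gain in [0, 1]
    mean = mean + K * (measurement - mean)
    var = (1.0 - K) * var                # variance always shrinks
    return var, mean

var, mean = predict(2.0, 0.0, 1.0, 5.0)   # -> var 3.0, mean 5.0
var, mean = correct(var, mean, 1.0, 6.0)  # -> var 0.75, mean 5.75
print(mean, var)
```

Note both helpers return `(var, mean)` in that order, matching how the loop unpacks them.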
Based on your experience with PyMC3, could any of the reasons I mentioned affect my results in this attempt at Bayesian fusion?