I’m not sure that “Bayesian genetic algorithms” is a real term, or even an established concept in research. For a while now, genetic algorithms have been my go-to heuristic optimization technique: they’re easy to implement, produce output quickly, and generally give good (or at least not bad) decision recommendations. Concurrently, I’ve been using probabilistic programming to estimate things like per-channel marketing budget allocations, inferring parameters such as time-to-effect and saturation.
Previously, I would use MLE results and linear programming to arrive at optimal decisions; however, I’ve found that this neglects the variance of those parameter estimates. By using GAs with a “stochastic reward function” (a reward function whose parameters are sampled from the posterior chain on each evaluation), I’m able to account for that variance, and I’ve been generally pleased with the results.
When using genetic algorithms with MLE estimates, the algorithm generally converges and stays put, since several consecutive steps away from a local optimum would be needed to reach another local (or the global) optimum. A stochastic reward function, however, keeps the algorithm “jumping” across iterations (in my experience). Somewhat like MCMC sampling, after a warm-up period the rewards found in each iteration seem to bounce around a certain value, and this gives a probabilistic estimate of the maximum expected reward.
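To make the idea concrete, here’s a minimal numpy-only sketch of what I mean. Everything in it is an assumption for illustration: the saturating response curve, the parameter values standing in for a real MCMC trace, and the simple elitist mutation-only GA. The key piece is that `stochastic_reward` scores an allocation under a *single* random posterior draw, so repeated evaluations of the same allocation disagree, which is what keeps the population jumping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "posterior chain": draws of each channel's saturation-curve
# parameters. In practice these would come from your PPL's trace; the
# response curve r(x) = alpha * (1 - exp(-lam * x)) is my own assumption.
n_draws, n_channels = 500, 3
alpha = np.clip(rng.normal([2.0, 1.5, 1.0], 0.3, (n_draws, n_channels)), 0.1, None)
lam = np.clip(rng.normal([0.04, 0.08, 0.02], 0.01, (n_draws, n_channels)), 1e-3, None)

TOTAL_BUDGET = 100.0

def stochastic_reward(alloc):
    """Score an allocation under ONE posterior draw, so repeated calls
    on the same allocation return different values."""
    i = rng.integers(n_draws)
    return float(np.sum(alpha[i] * (1.0 - np.exp(-lam[i] * alloc))))

def random_alloc():
    # Random point on the budget simplex.
    return rng.dirichlet(np.ones(n_channels)) * TOTAL_BUDGET

def mutate(alloc, scale=5.0):
    # Perturb, clip to non-negative, renormalize back onto the budget.
    child = np.clip(alloc + rng.normal(0.0, scale, n_channels), 0.0, None)
    s = child.sum()
    return child * TOTAL_BUDGET / s if s > 0 else random_alloc()

# Elitist GA loop: re-score everyone each generation (scores are noisy),
# keep the top 10, refill the population by mutating random elites.
pop = [random_alloc() for _ in range(30)]
best_per_gen = []
for gen in range(300):
    scored = sorted(pop, key=stochastic_reward, reverse=True)
    best_per_gen.append(stochastic_reward(scored[0]))
    elites = scored[:10]
    pop = elites + [mutate(elites[rng.integers(10)]) for _ in range(20)]

# Discard a warm-up period, then summarize the "bouncing" rewards to get
# a probabilistic estimate of the maximum expected reward.
post_warmup = np.array(best_per_gen[100:])
print(f"max expected reward ~ {post_warmup.mean():.2f} +/- {post_warmup.std():.2f}")
```

Because selection itself is noisy, the post-warm-up best-of-generation rewards never settle on a single number; treating their mean and spread as the estimate (rather than the single best score ever seen) is what I mean by a probabilistic estimate of the maximum.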
Is anyone else doing this? Or something similar?