I will soon have some time on my hands; I thought about rolling my own framework based on Rust, but on reflection I am not sure it would bring anything new (beyond perhaps clarifying my own thoughts about what a good probabilistic language should look like), and it would be an unnecessary duplication of effort. Instead, I am just going to throw out a few things I would have liked to have when I was using PyMC3, and maybe some of these you would like me to investigate and try to integrate into the library:
- Iterative sampling, à la deep learning. I may have missed an existing feature, but I have always found it frustrating that inference has to run as one monolithic call. I have a little experience with deep learning, and the ability to plug in TensorBoard and get metrics while training is just so useful. There may be a good reason why we don't want that here, but being able to draw samples iteratively, and monitor them as they come, would be a big plus for me.
- Being able to restart inference from a checkpoint. Again, I might have missed something.
- Continual learning, i.e. updating an already-fitted posterior as new data arrive, rather than refitting from scratch. I know this is a tough one, but isn't this the big claim of Bayesian statistics?
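To make the first two items concrete: here is a toy, pure-Python sketch (not PyMC3's actual API; the `metropolis` function and its `state` checkpoint format are entirely hypothetical) of what a generator-based sampler could look like. Because each draw is yielded one at a time, you can log metrics to TensorBoard mid-run, and because the sampler state is a plain dict, you can pickle it and resume later.

```python
import math
import random

def log_prob(x):
    # Standard normal log-density, up to an additive constant.
    return -0.5 * x * x

def metropolis(log_prob, start, scale=1.0, seed=0, state=None):
    """Infinite generator of Metropolis draws.

    `state` is a hypothetical checkpoint: pass the dict yielded
    alongside a draw to resume exactly where a previous run stopped.
    """
    rng = random.Random(seed)
    x = start
    if state is not None:
        rng.setstate(state["rng"])
        x = state["x"]
    while True:
        proposal = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        # Yield the draw plus a checkpoint of the full sampler state.
        yield x, {"x": x, "rng": rng.getstate()}

# First run: 1000 draws, keeping the last checkpoint.
sampler = metropolis(log_prob, start=0.0)
draws, ckpt = [], None
for _ in range(1000):
    x, ckpt = next(sampler)
    draws.append(x)

# Second run: resume from the checkpoint instead of restarting cold.
resumed = metropolis(log_prob, start=0.0, state=ckpt)
for _ in range(1000):
    x, _ = next(resumed)
    draws.append(x)
```

This is only a sketch of the *shape* of the API I have in mind; a real implementation would checkpoint the step-method internals (mass matrix, step size, tuning state), not just the chain position.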
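The continual-learning point is easiest to see in the conjugate case, where "yesterday's posterior is today's prior" is literally a two-line update. A minimal Beta-Binomial sketch (the `update` helper is mine, just for illustration):

```python
def update(alpha, beta, successes, failures):
    # Conjugate Beta-Binomial update:
    # Beta(a, b) prior + Binomial data -> Beta(a + s, b + f) posterior.
    return alpha + successes, beta + failures

# Day 1: flat Beta(1, 1) prior, observe 7 successes out of 10.
alpha, beta = update(1.0, 1.0, successes=7, failures=3)

# Day 2: yesterday's posterior is today's prior; observe 5 out of 10 more.
alpha, beta = update(alpha, beta, successes=5, failures=5)

posterior_mean = alpha / (alpha + beta)  # 13 / 22
```

The hard part, of course, is doing this outside conjugate families, where the posterior only exists as samples; but even an approximate version (e.g. fitting a parametric density to the samples and using it as the next prior) would be valuable.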
Let me know if there is anything you find particularly interesting or would like to see integrated into the library. I'm also happy to help with something else.