How does Turing.jl compare to PyMC3?

I haven’t used Julia much and can’t speak to speed comparisons at all; I’m also not very interested in speed comparisons between libraries. But I want to note that at ArviZ we are also working on interoperability. It is already possible to run inference in Turing, save the results as NetCDF, and analyze them in Python, or the other way around if someone wanted to do that.
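As a sketch of the Python half of that round trip: in the real workflow the NetCDF file would be written from Julia (e.g. by Turing via ArviZ.jl), but here a small `InferenceData` is built from NumPy samples so the example is self-contained. The file name `posterior.nc` is just an illustration.

```python
import numpy as np
import arviz as az

# Fake posterior standing in for Turing output: 4 chains x 500 draws
# of a scalar parameter "mu".
rng = np.random.default_rng(0)
idata = az.from_dict(posterior={"mu": rng.normal(size=(4, 500))})

# Save to NetCDF (what the Julia side would produce)...
idata.to_netcdf("posterior.nc")

# ...and load it back in Python for analysis with the usual ArviZ tools.
idata2 = az.from_netcdf("posterior.nc")
summary = az.summary(idata2, var_names=["mu"])
```

From here `idata2` behaves like any other `InferenceData`, so plotting and diagnostics work regardless of which PPL produced the samples.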

And that is not all: thanks to the already excellent ArviZ-PyMC3 integration, you can run inference with Turing, Stan, or any other PPL and then use PyMC3 to sample from the posterior predictive using the posterior samples stored in an arbitrary `InferenceData`. You only need the variable names to match between the `InferenceData` and the PyMC3 model.

I also want to note the ability to use named dimensions to specify PyMC3 models (which will become even more flexible in v4), something no other PPL allows as far as I know. For me it’s a huge advantage: it helps me when writing a model for the first time, when explaining my model to other people, and when understanding other people’s models, or even my own, after some months or years.
