I noticed @ferrine & @fonnesbeck pushed to the new pymc4_prototypes
repository, but (very much understandably) the documentation is quite sparse right now.
Just wanted to start this thread to jumpstart the production of docs that can help others get started contributing to the new backend.
Also wanted to make sure we/contributors don’t get emotionally attached to a particular backend implementation! (Especially with all of the effort that will go into implementing each backend.)
What objective metrics are we going to evaluate against that everybody can agree upon? I’m happy to pull in a set of performance-profiling benchmarks (most likely condensed versions of the examples gallery, plus my own recipes and material from others’ blogs, e.g. @AustinRochford’s).
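For concreteness, here’s a minimal sketch of the kind of timing harness I have in mind, using today’s PyMC3 as the reference baseline. The `run_benchmark` function and the toy regression model are just placeholders, not an agreed-upon suite:

```python
import time

import numpy as np
import pymc3 as pm


def run_benchmark(n=1000, draws=1000, tune=1000):
    """Time NUTS sampling end to end on a toy linear regression (hypothetical harness)."""
    rng = np.random.RandomState(42)
    x = rng.normal(size=n)
    y = 2.0 * x + 0.5 + rng.normal(scale=0.3, size=n)

    # Wall time deliberately includes graph compilation, since compile
    # overhead differs meaningfully between candidate backends.
    start = time.perf_counter()
    with pm.Model():
        intercept = pm.Normal("intercept", mu=0.0, sd=10.0)
        slope = pm.Normal("slope", mu=0.0, sd=10.0)
        sigma = pm.HalfNormal("sigma", sd=1.0)
        pm.Normal("y_obs", mu=intercept + slope * x, sd=sigma, observed=y)
        pm.sample(draws=draws, tune=tune, chains=2, progressbar=False)
    return time.perf_counter() - start


if __name__ == "__main__":
    print("wall time: {:.1f} s".format(run_benchmark()))
```

The same model would then be re-expressed in each prototype backend so the wall times are comparable.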
Hey Eric,
We created that repository to share experimental code that will help us decide on a backend. We are currently only in the exploratory phase of that decision, and we hope the wider community will help guide it. I don’t anticipate that anything in that repository will end up in the pymc4 code base; if anything does, I expect it would be moved over only once we were confident we were onto something.
So, if anyone is keen to help us explore, feel free to submit PRs to the prototypes repository.
BTW, I think a set of explicit criteria is a good idea. Several considerations have been mentioned among the dev team, but we have not yet developed them into guidance for making a decision. These seem to include:
- performance
- framework features (e.g. fancy indexing; see the sketch after this list)
- ease of use for developers
- ease of use for end users
- compatibility with current PyMC model specification semantics
- long-term viability of the framework
Please add to/edit this list, as there are sure to be things I’ve forgotten. It’s important for us to consider the full range of criteria, as it’s easy to focus on one or two of them and lose sight of the big picture.
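As a concrete illustration of the framework-features criterion: by “fancy indexing” I mean NumPy-style integer-array indexing, which hierarchical models lean on heavily and which the candidate backends support to varying degrees. The example below is plain NumPy, purely for illustration:

```python
import numpy as np

# One group-level parameter per group, and a group code per observation.
group_means = np.array([0.5, 1.2, -0.3])
group_idx = np.array([0, 0, 2, 1, 2, 1])

# Fancy indexing: indexing an array with an integer array broadcasts
# group-level parameters out to per-observation values.
mu = group_means[group_idx]
print(mu)  # [ 0.5  0.5 -0.3  1.2 -0.3  1.2]
```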