https://groups.google.com/forum/#!topic/theano-users/7Poq8BZutbY
FYI time to plan for the future.
Hi all - I'll be missing the meeting tomorrow; I have a work meeting at the same time. My thoughts are aligned with TF - due to developer mindshare. I'm also OK with the notion of using another backend or multiple backends, to allow us to handle open-source deprecation risk. cc @ferrine @colcarroll @twiecki @fonnesbeck
I am not really familiar with TensorFlow or PyTorch, so I can't say which one would be a good replacement for Theano. But I have a question: could autograd be an alternative to Theano? One of the pain points for PyMC3 users is having to deal with Theano (errors, scan, integrating external functions, etc.); using autograd should reduce these problems because everything could be written in plain NumPy. Numba could also help to accelerate things if needed.
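To illustrate what I mean (just a sketch, not PyMC3 code): autograd can differentiate a log-density written in ordinary NumPy, which is exactly the kind of quantity a gradient-based sampler needs.

```python
# Minimal sketch: autograd differentiates plain-NumPy code.
import autograd.numpy as np   # thin wrapper around NumPy
from autograd import grad

def logp(mu, data):
    # Gaussian log-likelihood with unit variance, written as ordinary NumPy
    return -0.5 * np.sum((data - mu) ** 2)

data = np.array([0.5, 1.2, -0.3])
dlogp_dmu = grad(logp)       # gradient with respect to the first argument
print(dlogp_dmu(0.0, data))  # what NUTS/HMC would consume
```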
Hi,
I just wanted to come and say that this is an interesting option, considering that Numba has GPU support. Could this then also use dask/xarray to optimize calculations?
I think that TensorFlow is probably the right long-term answer, although we may need to work with them to improve it in some ways first.
There is the MXNet option too, which we could think about. It is supported by Amazon.
Except for Numba, autograd + GPU is what PyTorch offers, all similar to working with NumPy. I have limited experience with deep learning in PyTorch, but it's definitely very close to the NumPy experience, and it also has the possibility to add external functions. I don't know how much the PyMC3 guts would have to be changed in this case.
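As a rough illustration of that NumPy-like feel (a sketch against the current PyTorch API, not PyMC3 code):

```python
# Sketch: the same log-density as above, with gradients via PyTorch autograd.
import torch

data = torch.tensor([0.5, 1.2, -0.3])
mu = torch.tensor(0.0, requires_grad=True)

logp = -0.5 * torch.sum((data - mu) ** 2)  # reads like the NumPy version
logp.backward()                            # reverse-mode autodiff
print(mu.grad)                             # gradient with respect to mu

# Tensors can be moved to the GPU with .cuda() on suitable hardware.
```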
It looks like Numba and autograd don't play well together: https://github.com/HIPS/autograd/issues/47
I would put in a vote for PyTorch because:
I've not touched the Theano guts of PyMC3 before, so my opinion should be taken with a grain of salt. That said, I have implemented simple DL algorithms with autograd, and I think the ability to use ordinary flow control (conditionals and loops) in autograd-like frameworks (PyTorch, Chainer) makes them very attractive for implementing complicated algorithms.
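Here is the kind of thing I mean, as a toy sketch (autograd shown, but PyTorch and Chainer behave the same way): ordinary Python loops and conditionals are differentiated as-is, where Theano would need scan/ifelse.

```python
# Sketch: autograd differentiates through plain Python control flow.
import autograd.numpy as np
from autograd import grad

def iterate(x, n_steps=5):
    # a plain Python loop and conditional, no symbolic scan needed
    for _ in range(n_steps):
        if x > 1.0:
            x = np.log(x)
        else:
            x = x * x + 0.1
    return x

print(grad(iterate)(2.0))
```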
Hi all - first time posting here, but I've been using PyMC3 for a lot of my thesis work and would really like to help out.
This could be a blessing in disguise - I advocate quite a lot for PyMC3 in my research group, but the need to write ops to integrate code already written in NumPy/SciPy is a big hurdle.
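For context, this is roughly the boilerplate I mean - a sketch using Theano's as_op decorator to wrap existing NumPy/SciPy code (the my_loglike function below is made up for illustration):

```python
# Sketch: wrapping an arbitrary NumPy/SciPy function as a Theano op.
import numpy as np
import theano.tensor as tt
from theano.compile.ops import as_op

@as_op(itypes=[tt.dvector], otypes=[tt.dscalar])
def my_loglike(x):
    # imagine this calls existing scipy-based code instead
    return np.asarray(-0.5 * np.sum(x ** 2))
```

And even then the wrapped op has no gradient, so NUTS can't use it; a full solution means writing a custom theano.Op with a grad() method.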
Is there a list of potential replacement options and a "spec sheet" that they need to fulfil anywhere? As I see it we currently have:
Happy to do some work evaluating these possibilities, but I'm not sure what a "minimum viable sampler" would be.
Dear all,
We recently opened a new GitHub repository for experimenting with new backends.
If you have an idea or a code snippet, please send a PR!
Theano is open source. I hope (actually, I'm very confident) that someone else will continue to develop and maintain it.
Btw, in the choice of a new backend for pymc3, I wonder whether speed issues are being considered.
@mcavallaro Don't worry, we will consider speed.
For now we are mostly trying to properly understand the different frameworks, and build some prototypes, so that we can make an informed decision.
Certainly speed is a big factor, although I personally wouldn't rank it at the top, mainly because it can change once we decide on something. TensorFlow, for example, is pretty slow, but when XLA is merged that might change.
autograd is not sufficiently performant to use as the computational backend for PyMC. It's mostly useful for prototyping. From what I hear, PyTorch was influenced by autograd, however.
I wrote a question with some issues I encountered in MXNet on their forum: https://discuss.mxnet.io/t/moving-pymc3-from-theano-to-mxnet/86/1
I should also add this point: