Google Colab is a Jupyter-notebook-based cloud service that provides free GPU access. I'm wondering if any of you has managed to get a working PyMC3 environment with GPU support. I'm struggling to get all the dependencies right (CUDA, pygpu, etc.). I have a tricky model, and I would like to see if the GPU speeds things up a bit.
Do you have a Colab notebook showing what you have already tried?
Here’s the ipynb notebook: Theano_with_GPU.txt (401.0 KB) (please change the extension to .ipynb; otherwise I could not upload the file).
It is sort of working now. The Theano test shows that the GPU is in use, but sampling is much slower than with the CPU alone (about 270 draws/s on the GPU versus 1297 draws/s on the CPU). pygpu still complains that cuDNN is not set up correctly, so maybe that’s why it is so slow. Maybe you can start from this example. I also noticed that PyMC3 works with only one chain when the GPU is in use.
I started from the assumption that Colab fulfilled all the CUDA requirements. At first I tried to patch each error that pygpu raised, hunting for the missing files in the system directories, but more errors kept appearing, so I ended up reinstalling CUDA from scratch. The one thing still missing is a cudnn.h file that I cannot find.
Regardless of the Colab setup: as far as I know (and have read), most PyMC3 models generally do not see an increase in sampling speed when using a GPU instead of a CPU.
My experience is that, when using the GPU, I need to set `cores=1` when sampling.
I also tried Colab, unsuccessfully. It seems that it doesn’t support Theano, possibly because we can’t set up the Theano flags.
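For what it’s worth, one way to set Theano flags in a notebook is through the environment, before Theano is first imported — a sketch (the flag values here are just examples):

```python
import os

# Theano reads THEANO_FLAGS once, at import time, so the variable must be
# set *before* the first `import theano` anywhere in the notebook.
os.environ["THEANO_FLAGS"] = "device=cuda,floatX=float32"

# import theano  # after this import, theano.config.device should report 'cuda'
```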