I have two machines, one with Linux and one with Windows. I want to use the GPU for running my PyMC3 models, but I couldn't find good documentation explaining how to do that. Can you let me know how?
Thanks!
You need a proper Theano installation with GPU support first (see the Theano docs about that). Then you can move parts of the computation to the GPU by using foo.transfer, as described here. Right now you cannot run NUTS itself on the GPU, however, so this is usually only worth it if you have a large dataset (use a shared variable on the GPU) and some expensive operations (e.g. dense matrix related stuff).
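For reference, a minimal sketch of the shared-variable approach might look like this (assuming the gpuarray backend is installed and `THEANO_FLAGS="device=cuda0,floatX=float32"` is set before import; the variable names and the toy regression-style model are just for illustration):

```python
# Sketch: keep a large dataset in a Theano shared variable so the compiled
# logp graph (parts of which Theano can place on the GPU) does not copy the
# data from host memory on every evaluation.
# Assumes THEANO_FLAGS="device=cuda0,floatX=float32" is set in the environment.
import numpy as np
import theano
import pymc3 as pm

data = np.random.randn(1_000_000).astype('float32')
X = theano.shared(data)  # with device=cuda*, float32 shared variables live on the GPU

with pm.Model():
    mu = pm.Normal('mu', mu=0.0, sd=1.0)
    sigma = pm.HalfNormal('sigma', sd=1.0)
    pm.Normal('obs', mu=mu, sd=sigma, observed=X)
    trace = pm.sample()  # NUTS itself still runs on the CPU; only Theano ops move to the GPU
```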
Great, thanks! Is there any sampler that we can run on the GPU, or are all the others like NUTS?
None of them run on the GPU right now.
Thanks, and one last question: is the plan for PyMC4 to make running on the GPU more user friendly?
Yes, it should get better for PyMC4. It is not at all obvious which models will run faster on the GPU, however. I might be wrong about this, but I would expect a lot of models to be faster on a CPU even when we have really good GPU support. You need a lot of data before you can keep the high number of cores busy.
What kind of model do you have?
My model is similar to a regression model. I was thinking the sampler might run faster on a GPU, but I'm not sure. Maybe the GPU is only better for huge datasets.