Long sampling time | sample from batches?

Does it make sense to pm.sample from batches of the data and afterward concatenate the traces?

There’s the functionality of pm.Minibatch (illustrated here). But it would likely be recommended only in specialized circumstances (e.g., if you’ve already ditched MCMC and are using VI). There are lots of ways to try to speed up a model without resorting to this sort of approach: you can try a different backend (e.g., JAX), move things to a GPU, etc. A sketch of the Minibatch route is below.
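As a minimal sketch of what the Minibatch + VI route looks like (the dataset, batch size, and priors here are made up for illustration):

```python
import numpy as np
import pymc as pm

# Hypothetical dataset: 100k draws from a normal with unknown mean/sd.
data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=100_000)

with pm.Model():
    mu = pm.Normal("mu", 0, 10)
    sigma = pm.HalfNormal("sigma", 5)
    # Minibatch streams random subsets of the data; total_size rescales the
    # likelihood so the approximation targets the full-dataset posterior.
    batch = pm.Minibatch(data, batch_size=500)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=batch, total_size=len(data))
    # Minibatching pairs with VI (pm.fit), not with pm.sample/MCMC.
    approx = pm.fit(n=10_000, method="advi")
    idata = approx.sample(1_000)
```

If you want to stay with NUTS, switching backends can be as simple as `pm.sample(nuts_sampler="numpyro")` in recent PyMC versions (requires NumPyro/JAX installed), which also makes GPU sampling straightforward.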

In addition to what @cluhmann said:
Nobody can stop you from doing that, and it might even be useful in some special circumstances. But you are then no longer getting draws from the posterior. Think about the uncertainty of a parameter that is informed by every entry in the dataset: if you sample from batches, you end up with much higher uncertainty in each of those subsets than you would get with the combined dataset. The sketch below illustrates this.
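To make that concrete, here is a toy sketch (hypothetical data; a normal mean with known sigma, where the posterior sd of `mu` shrinks like 1/sqrt(n)):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=10_000)

def posterior_sd_of_mu(y):
    """Posterior sd of the mean, with sigma assumed known."""
    with pm.Model():
        mu = pm.Normal("mu", 0, 10)
        pm.Normal("obs", mu=mu, sigma=2.0, observed=y)
        idata = pm.sample(progressbar=False)
    return float(idata.posterior["mu"].std())

print("full data:", posterior_sd_of_mu(data))          # ~ 2/sqrt(10_000) = 0.02
print("one batch:", posterior_sd_of_mu(data[:1_000]))  # ~ 2/sqrt(1_000) ≈ 0.063
```

Concatenating traces from ten such batches still leaves you with the batch-level uncertainty (≈0.063), not the full-data uncertainty (≈0.02).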
If I have performance issues, the first thing I do (way before thinking about GPUs) is usually to look at the model a bit and figure out whether something could be sped up by reparameterizing. Often that can give you huge speedups. As a quick indication of whether that might help, look at the tree size in the sample stats: the number of gradient evaluations per draw grows like 2 ^ tree_depth (which is roughly what the tree size reports), so if your tree size is large, you have to evaluate a lot of gradients, and improving the parametrization often lowers that number. The snippet below shows where to look.
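For example (assuming PyMC's default NUTS sampler; the stat names `tree_size` and `depth` come from recent PyMC versions and may differ in yours):

```python
with model:  # whatever model you are diagnosing
    idata = pm.sample()

# Each leapfrog step costs one gradient evaluation, and NUTS doubles the
# trajectory at each depth level, so steps per draw grow like 2 ** depth.
print(idata.sample_stats["tree_size"].mean())  # mean leapfrog steps per draw
print(idata.sample_stats["depth"].max())       # maximum tree depth reached
```

If the mean tree size is in the hundreds, a reparameterization (e.g., a non-centered parameterization of hierarchical terms) is usually the first thing to try.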