Value of many cores?

Howdy,

I use Spark a lot for distributed computing over many nodes/cores. I was wondering if there would be any advantage to distributing the sampling process over a Spark cluster.

In this thread there’s a discussion about how to re-use the same tuning for multiple chains. It’s not clear to me whether having access to many cores really has any practical advantage here: is running many short chains equivalent to running a few long chains? Are there other ways to parallelize a model over many cores that would reap tangible benefits?

Thoughts?

Hi Jared,

You always need tuning (you might get by with less of it if you start from a better mass matrix), so tuning is a fixed overhead that every chain pays. But once tuning is complete, the chains act as a multiplier: you get the same total number of samples whether you run one chain with 1000 post-tuning draws or 10 chains with 100 post-tuning draws each.
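To make that concrete, the trade-off looks roughly like this with pm.sample (the model is just a toy stand-in):

```python
import numpy as np
import pymc3 as pm

with pm.Model() as model:  # toy stand-in model
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.randn(100))

    # One chain, 1000 post-tuning draws: the tuning cost is paid once.
    trace_long = pm.sample(draws=1000, tune=1000, chains=1)

    # Ten chains, 100 post-tuning draws each: the same 1000 draws in total,
    # but every chain repeats its own 1000-step tuning phase.
    trace_wide = pm.sample(draws=100, tune=1000, chains=10, cores=10)
```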

Thanks Thomas - the thread I linked shows a way to do the tuning once and pass the results to multiple chains, which reduces that overhead.
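For anyone landing here later, the idea looks roughly like this. This isn’t the exact code from that thread, and it leans on PyMC3 internals (QuadPotentialDiag, NUTS’s potential argument, pm.trace_cov) that may change between versions, so treat it as a sketch:

```python
import numpy as np
import pymc3 as pm
from pymc3.step_methods.hmc.quadpotential import QuadPotentialDiag

with pm.Model() as model:  # toy stand-in model
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.randn(100))

    # Pay the tuning cost once, on a single chain.
    tuned = pm.sample(draws=200, tune=1000, chains=1)

    # Estimate a diagonal mass matrix from the tuned draws.
    cov = pm.trace_cov(tuned, model=model)
    potential = QuadPotentialDiag(np.diag(cov))

    # Fan out to many chains with no further tuning.
    step = pm.NUTS(potential=potential)
    trace = pm.sample(draws=100, tune=0, chains=10, step=step)
```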

Oh cool, I hadn’t seen that; it’s much easier than I thought. I suppose you lose a bit of robustness if all the chains start from the exact same init. Perhaps one could first sample N tuning chains and then sample N*10 sampling chains. Then again, if you already have the cores, why let N-1 of them sit idle while one is tuning?
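Something like this, maybe (same caveats as the sketch above, and pooling the tuning chains is just one plausible way to combine them):

```python
import numpy as np
import pymc3 as pm
from pymc3.step_methods.hmc.quadpotential import QuadPotentialDiag

N = 4  # number of cores available

with pm.Model() as model:  # same toy stand-in model as above
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.randn(100))

    # Tune N chains in parallel so no core sits idle.
    tuned = pm.sample(draws=200, tune=1000, chains=N, cores=N)

    # Pooling the tuning chains for the mass-matrix estimate should be
    # a bit more robust than relying on a single chain's tuning.
    cov = pm.trace_cov(tuned, model=model)
    step = pm.NUTS(potential=QuadPotentialDiag(np.diag(cov)))

    # Fan out to N*10 short sampling chains with no further tuning.
    trace = pm.sample(draws=100, tune=0, chains=N * 10, cores=N, step=step)
```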