About the speed of sampling

I saw some people asking about speed. Can the iteration rate be taken as an indication of whether the sampling is efficient or not? I thought that if the model is complex, it should also take longer to sample (do I understand that right?). Or, when I see the rate drop to a certain level, say 5 it/s, should I start looking at my model for modifications?
[screenshot: progress bar showing the iteration rate slowing down during sampling]

Another question: 'If the model is well parameterised, then the sampler should be able to explore the posterior faster' — is that a fair statement?
Thanks a lot. :slight_smile:

In terms of speed, a more appropriate index is the number of effective samples per second. You can sample very fast with Metropolis, because the random-walk kernel is cheap to evaluate; however, that doesn't mean you get many effective samples. In fact, in high dimensions a random walk is almost certainly not going to do well.
As a comparison, the kernel used in NUTS takes advantage of the geometry of the posterior and thus samples it much more efficiently (see e.g. "We were measuring the speed of Stan incorrectly—it’s faster than we thought in some cases due to antithetical sampling" | Statistical Modeling, Causal Inference, and Social Science).
So don't read the iteration rate at face value as a measure of how fast you are sampling: it is the effective sample size that matters.
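
To make that concrete, here is a minimal sketch (assuming PyMC and ArviZ, with a toy model of my own; nothing here is from your model) of timing a run and dividing the effective sample size by the wall-clock time:

```python
import time

import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
y = rng.normal(loc=1.0, scale=2.0, size=100)  # toy data, for illustration only

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu, sigma, observed=y)

    start = time.time()
    idata = pm.sample(draws=1000, tune=1000, chains=4, progressbar=False)
    elapsed = time.time() - start

# Effective samples per second, per parameter: the number that actually
# tells you how fast the sampler is producing usable draws.
print(az.ess(idata) / elapsed)
```

Two samplers with very different it/s can end up with similar (or reversed) numbers here, which is why the raw iteration rate is misleading.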

It is also not necessarily the case that a more complex model takes longer to sample. For example, if you put a hyperprior on the parameters of a large linear regression, sampling can sometimes be faster, because the additional information from the hyperprior tells the sampler where the posterior mass is likely to concentrate. A sketch of this idea is below.
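
A hypothetical illustration (model names, sizes, and priors are all made up; assuming PyMC): the same regression with and without a hyperprior on the coefficient scale. Timing `pm.sample` on both will often show the second one being no slower, and sometimes faster, despite having one more parameter.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, p = 500, 50
X = rng.normal(size=(n, p))
y = X @ rng.normal(scale=0.5, size=p) + rng.normal(size=n)

# Wide fixed priors: the sampler has to discover the coefficient scale itself.
with pm.Model() as flat_model:
    beta = pm.Normal("beta", 0.0, 10.0, shape=p)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", pm.math.dot(X, beta), sigma, observed=y)

# Hyperprior on the scale: tau shrinks the coefficients toward a shared
# scale, which informs the sampler where the posterior mass concentrates.
with pm.Model() as hier_model:
    tau = pm.HalfNormal("tau", 1.0)
    beta = pm.Normal("beta", 0.0, tau, shape=p)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", pm.math.dot(X, beta), sigma, observed=y)

with hier_model:
    idata = pm.sample(draws=1000, tune=1000)  # compare it/s and ESS with flat_model
```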

You usually see a speed-up after tuning, because the sampler's parameters (step size, mass matrix) have been adapted to the posterior by then. The slowdown in the picture you posted happens during tuning, so it is probably fine.
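
If you want to see the adaptation yourself, here is a small sketch (assuming PyMC with the NUTS sampler; the toy model is made up) that keeps the warmup draws so you can compare the step size during tuning with the adapted value used afterwards:

```python
import pymc as pm

with pm.Model():
    x = pm.Normal("x", 0.0, 1.0)
    idata = pm.sample(draws=500, tune=500, chains=2, discard_tuned_samples=False)

# During warmup the step size is still being adapted (and the iteration
# rate moves around); after tuning it settles at the adapted value.
print(idata.warmup_sample_stats["step_size"].mean(dim="draw").values)
print(idata.sample_stats["step_size"].mean(dim="draw").values)
```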

I think it is generally the case; Andrew Gelman calls it the folk theorem of statistical computing.