Hi folks!

To compute sampling efficiency in ESS/sec, I’ve been using the `"sampling_time"` attribute in InferenceData alongside ArviZ’s ESS calculation for the posterior samples (tuning excluded). Essentially,

```python
az.ess(idata.posterior) / idata.posterior.attrs["sampling_time"]
```

However, I’ve realized that the `"sampling_time"` attribute encompasses both the tuning and non-tuning steps of the MCMC run. Sometimes this isn’t the most helpful metric, because the tuning phases of the runs being compared aren’t always on an equal footing: different runs may use different numbers of tuning steps, or different model parameterizations or initialization strategies may yield faster sampling during tuning. In these cases, I’d like to estimate sampling efficiency around the “typical set” without being concerned with how the sampler got there.
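To make the gap between the two metrics concrete, here’s a toy sketch. All the numbers below (timings and ESS) are made up for illustration, not from a real run:

```python
# Toy illustration with hypothetical numbers: the two ESS/sec denominators.
# Suppose a run with 1000 tuning steps and 1000 retained draws.
total_time = 12.0                          # seconds, as reported by "sampling_time" (tuning + draws)
tuning_time = 7.0                          # hypothetical: time spent in tuning alone
posterior_time = total_time - tuning_time  # time spent on the retained draws only

ess = 800.0  # hypothetical bulk ESS of the retained draws

ess_per_sec_total = ess / total_time          # the metric I'm currently computing
ess_per_sec_posterior = ess / posterior_time  # the metric I'd like

print(round(ess_per_sec_total, 2))      # 66.67
print(round(ess_per_sec_posterior, 2))  # 160.0
```

With these numbers the two metrics differ by more than a factor of two, which is why I’d like access to the non-tuning time on its own.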

Is there currently a way in PyMC to calculate ESS/sec where the denominator is the sampling time for *non-tuning steps only*? More broadly, I’m curious whether my logic makes sense to those with more expertise, and what the best-practices recommendation would be. To be clear, I understand that using the sampling time of the *whole* MCMC run (tuning and non-tuning) as the denominator is a better comparison metric in many cases; my thought is just that it can be misleading in others.

Thanks in advance!