Strategies for speeding up power analyses

In terms of power analyses for Bayesian approaches, from my very limited experience NHST is operationally quick and pretty clear-cut for decision-making and figuring out timelines in industry. I have yet to have someone show me a better approach apart from bandit algorithms (though I’d love to see one).

I’ve also found it’s difficult to find the “right” priors due to seasonality and co-interventions, so pairing an NHST approach with a Bayesian model feels like a slightly more conservative choice that matches my experience level.

I’m not sure what your exact use case is, but NHST doesn’t sound like it’s a good fit for what you’re looking for. Usually, if you’re having difficulty setting priors, the best response is to use a hierarchical model. A hierarchical model will learn its priors from the data. If hierarchical priors don’t help, and your priors still have a lot of influence on the outcome, this usually indicates that you don’t have enough data; NHST won’t help with that. What exactly are you trying to model here, and can I take a look at your model?
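For concreteness, here is a minimal sketch of the kind of hierarchical model I mean, written in PyMC. The data, variable names, and priors below are placeholders for illustration, not a prescription for your model: per-segment effect estimates are partially pooled toward an overall mean, and the hyperparameters that control the pooling are learned from the data.

```python
import numpy as np
import pymc as pm

# Hypothetical inputs: effect estimates and standard errors for several
# segments (e.g. weeks or regions). Substitute your own data here.
effect_est = np.array([0.12, -0.03, 0.20, 0.05, 0.08])
effect_se = np.array([0.06, 0.07, 0.05, 0.08, 0.06])

with pm.Model() as hierarchical:
    # Hyperpriors: the overall mean effect and the between-segment spread
    # are estimated from the data instead of being fixed by hand.
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    tau = pm.HalfNormal("tau", sigma=1.0)

    # Per-segment true effects, partially pooled toward the overall mean.
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(effect_est))

    # Likelihood: each observed estimate is noisy around its true effect.
    pm.Normal("obs", mu=theta, sigma=effect_se, observed=effect_est)

    idata = pm.sample(2000, tune=1000, target_accept=0.9)
```

Because the hyperparameters are inferred jointly with the segment effects, weakly informative hyperpriors usually matter far less than hand-picked per-segment priors would.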

As for being clear-cut, this is a major problem with NHST, not an advantage. With NHST, everything gets rounded off to either “reject the null hypothesis” or “fail to reject the null hypothesis.” This dichotomization is bad practice: p-values of .049 and .051 are not meaningfully different from each other, yet they lead to opposite decisions.

With all that said, here are Bayesian summary statistics that might be useful for what you’re looking to do.

  1. Signal-to-noise ratio: the expectation of the squared effect size, divided by the variance of your estimate. It tells you how much of the variance in your estimates will come from the estimates genuinely improving over time, and how much will come from random chance. For instance, a signal-to-noise ratio of 3:1 indicates that 25% of the variance in your estimates will be explained by random chance (see the first sketch after this list).
  2. Pick an effect size you consider “meaningful,” in the sense that you care about any effect at least that large. Then calculate the posterior probability that the true effect is larger than that threshold: if that probability is very small, you can safely say you’ve ruled out meaningful effect sizes. (Or, alternatively, calculate the probability that the real effect is smaller than the threshold, and see whether you can confidently say the effect is meaningfully large. The second sketch after this list shows both.)
  3. Bayesian p-values and u-values are good for assessing model fit and prior fit. Extreme p-values (far from 0.5) indicate that your priors and model fit the data badly; if all your p-values are close to 0.5, don’t worry too much about your priors. (Make sure to calculate the p-values for several statistics of your data. Skew and kurtosis are the two classics, but I also suggest something like the ELPPD, which takes holistic model fit into account. See the last sketch after this list.)
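For item 1, here is a minimal sketch of the calculation, assuming you have posterior draws of the effect and a standard error for your estimate (both are made-up placeholders here):

```python
import numpy as np

# Hypothetical inputs: posterior draws of the true effect size, and the
# sampling variance (squared standard error) of your estimate.
effect_draws = np.random.default_rng(0).normal(0.10, 0.04, size=4000)
estimate_variance = 0.05 ** 2

# SNR = E[effect^2] / Var(estimate)
snr = np.mean(effect_draws ** 2) / estimate_variance

# Share of the variance in your estimates explained by random chance:
# an SNR of 3:1 gives 1 / (1 + 3) = 25%.
noise_share = 1.0 / (1.0 + snr)
print(f"SNR = {snr:.2f}, noise share = {noise_share:.1%}")
```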
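For item 2, the same posterior draws give the tail probabilities directly; the threshold of 0.05 below is an arbitrary example of a “meaningful” effect size:

```python
import numpy as np

# Hypothetical posterior draws of the effect and a minimum effect you care about.
effect_draws = np.random.default_rng(1).normal(0.02, 0.03, size=4000)
meaningful = 0.05

# Probability that the effect is at least as large as the meaningful threshold.
p_large = np.mean(np.abs(effect_draws) > meaningful)
# Probability that the effect is smaller than the threshold.
p_small = 1.0 - p_large

print(f"P(|effect| > {meaningful}) = {p_large:.3f}")
print(f"P(|effect| < {meaningful}) = {p_small:.3f}")
# If p_large is tiny, you've effectively ruled out meaningful effects;
# if p_small is tiny, you can be confident the effect is meaningfully large.
```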
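For item 3, a Bayesian (posterior predictive) p-value compares a test statistic of the observed data against the same statistic computed on replicated datasets drawn from the fitted model. The data and replicates below are simulated placeholders; in practice the replicates come from your model’s posterior predictive distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical observed data and posterior predictive replicates
# (each row of y_rep is one dataset simulated from the fitted model).
y_obs = rng.normal(0.0, 1.0, size=200)
y_rep = rng.normal(0.0, 1.0, size=(4000, 200))

def bayesian_p_value(statistic):
    """P(T(y_rep) >= T(y_obs)); values far from 0.5 flag a poor fit."""
    t_obs = statistic(y_obs)
    t_rep = np.array([statistic(y) for y in y_rep])
    return np.mean(t_rep >= t_obs)

print("skewness p-value:", bayesian_p_value(stats.skew))
print("kurtosis p-value:", bayesian_p_value(stats.kurtosis))
```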