Strategies for speeding up power analyses

The question you’re asking is well-posed: you can do the calculation and it will give you a meaningful answer. However, if you’re doing A/B testing on a website or something similar and want to figure out how big a sample you’ll need, I would actually suggest taking advantage of stopping-rule independence. Bayesian statistics don’t require that the sample size be fixed ahead of time, so you can simply collect data until your estimates reach the precision you need. If you collect 1000 samples and find that’s not enough for your purposes (e.g. the precision doesn’t let you rule out some meaningfully large effect sizes), you can keep collecting data until you have enough information to make a decision.
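
To make that concrete, here’s a minimal sketch of what such a stopping rule could look like for a single conversion rate, assuming a conjugate Beta-Binomial model. The batch size, the 0.02 precision target, and the `simulate_visitor` helper are all illustrative choices, not part of any standard recipe:

```python
# A minimal sketch of a Bayesian stopping rule for a conversion rate,
# assuming a Beta(1, 1) prior and binary (convert / no-convert) outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_rate = 0.08                # unknown in practice; used only to simulate data

def simulate_visitor():
    """Hypothetical stand-in for observing one visitor's outcome."""
    return rng.random() < true_rate

successes, trials = 0, 0
alpha, beta = 1.0, 1.0          # Beta(1, 1) prior

while True:
    # Collect another batch of data.
    for _ in range(100):
        successes += simulate_visitor()
        trials += 1
    # Posterior under the conjugate Beta-Binomial model.
    post = stats.beta(alpha + successes, beta + trials - successes)
    lo, hi = post.interval(0.95)
    # Stop once the 95% credible interval is narrow enough to act on.
    if hi - lo < 0.02:
        break

print(f"stopped after {trials} trials; 95% interval ({lo:.3f}, {hi:.3f})")
```

The key point is that the loop condition is a property of the posterior (its precision), not a preregistered sample size, and nothing about the inference breaks when you peek after every batch.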

This might be a bit counterintuitive if you’re used to frequentist statistics, so here’s a good way to think about stopping-rule independence. What’s the probability of getting 10 successes in 60 trials if you weren’t planning to keep going? Now, what’s the probability of getting 10 successes in 60 trials if you were planning to keep going? The two are identical: as long as your plans don’t affect the likelihood function (e.g. you’re not deliberately taking measurements only when a lot of people are visiting your website), they don’t affect the probability.
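
You can check this numerically. The sketch below, again just an illustration, compares the likelihood of the 10-successes-in-60-trials data under a fixed-n design (binomial) and a stop-at-the-10th-success design (negative binomial). The two differ only by a constant factor in p, so any posterior built from them is identical once normalized:

```python
# A small check of stopping-rule independence for the 10-in-60 example:
# the fixed-n and stop-at-10th-success designs give likelihoods that
# differ only by a constant, so they yield the same posterior.
import numpy as np
from scipy import stats

p = np.linspace(0.01, 0.99, 99)

# Likelihood under "run exactly 60 trials": Binomial(60, p) at 10 successes.
fixed_n = stats.binom.pmf(10, 60, p)

# Likelihood under "keep going until the 10th success", which here lands
# on trial 60: scipy's nbinom counts the 50 failures before success 10.
until_10 = stats.nbinom.pmf(50, 10, p)

# Both are c * p**10 * (1 - p)**50 for different constants c, so the
# ratio is flat in p and the normalized posteriors coincide.
ratio = fixed_n / until_10
assert np.allclose(ratio, ratio[0])
print(f"constant likelihood ratio: {ratio[0]:.4f}")
```

The constant factor gets absorbed when the posterior is normalized, which is exactly why the stopping rule drops out of a Bayesian analysis.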