CausalPy - Evaluating uncertainty for Interrupted time series

Hello, PyMC community!

First of all, thanks a lot for CausalPy and for including so many designs in it. I’ve got a couple of questions about the Interrupted Time Series design and the intervals it provides, and would appreciate your help understanding a few points.

1. Please suggest theory on why we can sum up intervals to get cumulative ones
I can see that, to calculate the cumulative impact of the intervention, the package basically sums up the differences between the actuals and the predictions.
The same is done to calculate the cumulative intervals: the upper and lower bounds of each prediction are summed up (or, potentially, the posteriors behind them).
While I understand that this seems like the obvious way to do it and is often done in industry (Causal Impact does it the same way, afaik), can you please point me to some theory on why we actually can sum up intervals? Is this possible only in a Bayesian setting, or in a frequentist setting as well?
Are there other situations where adding intervals (or posteriors) comes up?

2. Why do Confidence Intervals grow with time?
On the Interrupted time series example page
there’s a graph showing that the cumulative intervals grow with time. Can you please help me understand why this happens?
It seems that the model is a regular linear regression, that predictions are made for each day separately, and that they probably all have the same confidence intervals. In other words, having looked at the code, the model doesn’t seem to know that it’s dealing with time. Why, then, does the interval grow with time?

3. Confidence Intervals vs Prediction Intervals
In frequentist contexts there are two different intervals: confidence intervals (which tell us where our betas should be) and prediction intervals (which tell us where we expect new observations to fall) (more here).
Prediction Intervals are by definition always wider.

Can you please say whether such concepts exist in the Bayesian framework, and which one is closer to what is being used in Interrupted Time Series?

Very much appreciate your help!

Hi @aabugaev, thanks for the interest in CausalPy. I’ll have a stab at answering your questions. I’ll just drop the image from the page you linked to make this easier.

1 - Cumulative causal impact

There’s a slight misunderstanding in what’s happening here. We aren’t in fact summing intervals to get cumulative ones.

If we were in point-estimate land then you’d have just one predicted time series. To calculate the causal impact you simply subtract the predicted time series from the observed time series. Then calculating the cumulative causal impact would simply be a matter of calculating the cumulative sum of these differences.

The only thing we are doing differently here is to apply that exact algorithm to each MCMC sample. So now, rather than having 1 predicted time series, you might have 1000 for example.

This is done with this line here:

self.post_impact_cumulative is an xarray object and you can read the cumsum API here. So we are effectively just doing the same cumulative sum operation on a single time series that you’d do in point-estimate land, but we’re just doing that for all the MCMC samples.
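
To make that concrete, here’s a minimal sketch of the idea in plain xarray. The array shapes, the values, and the dimension name obs_ind below are assumptions for illustration, not CausalPy’s actual internals:

```python
# A minimal sketch (not CausalPy's actual code): cumulative causal impact
# computed independently for every MCMC draw, using xarray's cumsum.
import numpy as np
import xarray as xr

n_chains, n_draws, n_time = 4, 250, 30

# Hypothetical data: one observed outcome series, and one counterfactual
# prediction per (chain, draw, time) triple.
observed = xr.DataArray(np.random.normal(10, 1, size=n_time), dims=["obs_ind"])
predicted = xr.DataArray(
    np.random.normal(9, 1, size=(n_chains, n_draws, n_time)),
    dims=["chain", "draw", "obs_ind"],
)

# Pointwise causal impact for every draw...
impact = observed - predicted
# ...and its cumulative sum along the time dimension, again per draw.
impact_cumulative = impact.cumsum(dim="obs_ind")

# The chain and draw dimensions are retained, so we still have the full
# set of MCMC samples for the cumulative impact at every time point.
print(impact_cumulative.dims)
```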

2 - Why do Confidence [credible] Intervals grow with time?

Under the Bayesian approach we have credible intervals, which are different from confidence intervals, but I’ll leave that for your third question.

You are right to say that this is currently just linear regression so there is no explicit time series modelling going on here. So what you’d expect to see is no increasing uncertainty in the actual post-intervention estimate and causal impacts (top and middle plots), and that’s exactly what we see.

There is a plan to add actual time series modelling here, in which case you would expect to see increasing uncertainty into the future.

Anyway, the point is that the increase we’re seeing is in the interval around the cumulative causal impact. When you remember that what’s going on is simply a cumulative sum done independently for each MCMC sample, then it becomes pretty intuitive: the samples’ running totals drift apart as time goes on, so the interval between them widens. But let me know if that is enough to make the penny drop or not.
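
Here’s a quick toy simulation (my own, purely illustrative, not CausalPy code) of that effect: the per-time-step interval stays flat while the interval on the cumulative sum widens.

```python
# Toy demonstration: flat per-step intervals, widening cumulative intervals.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_time = 2000, 50

# Pretend each row is one MCMC sample of the pointwise causal impact.
# Every time step has the same marginal distribution (mean 0.5, sd 1).
impact_samples = rng.normal(0.5, 1.0, size=(n_samples, n_time))

# Per-time-step 94% interval across samples: roughly constant width.
per_step = np.percentile(impact_samples, [3, 97], axis=0)

# Cumulative impact per sample, then the interval across samples:
# the running totals drift apart, so the interval widens with time.
cumulative = impact_samples.cumsum(axis=1)
cum_interval = np.percentile(cumulative, [3, 97], axis=0)

print("per-step width at t=1 vs t=50:",
      per_step[1, 0] - per_step[0, 0], per_step[1, -1] - per_step[0, -1])
print("cumulative width at t=1 vs t=50:",
      cum_interval[1, 0] - cum_interval[0, 0],
      cum_interval[1, -1] - cum_interval[0, -1])
```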

3. Intervals

Time is a bit short on my side, so I can’t give a whole primer here. Rather than confidence intervals, Bayesians use credible intervals. And yes, there is also the Bayesian posterior predictive distribution. In short, this doesn’t just represent the expected value, but also takes the observation noise / likelihood distribution into account and can be thought of as a prediction of what you are likely to see next.

I realise I didn’t fully answer this question. At the moment we just calculate the posterior predictive distributions and make all the MCMC samples available to the user, so we’re letting the user draw their own conclusions from the posterior distribution etc.
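
For what it’s worth, here is a minimal PyMC sketch (illustrative only, not CausalPy’s model) of the two kinds of interval. The model, the variable names mu and y_hat, and the data are all made up for the example:

```python
# Credible interval on the expected value (mu) vs the wider posterior
# predictive interval (y_hat), which also includes observation noise.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=x.size)

with pm.Model():
    intercept = pm.Normal("intercept", 0, 5)
    slope = pm.Normal("slope", 0, 5)
    sigma = pm.HalfNormal("sigma", 1)
    mu = pm.Deterministic("mu", intercept + slope * x)
    pm.Normal("y_hat", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, random_seed=1)
    idata.extend(pm.sample_posterior_predictive(idata, random_seed=1))

# Interval on mu: uncertainty about the regression line itself.
mu_hdi = az.hdi(idata.posterior["mu"])
# Interval on y_hat draws: adds observation noise, so it is wider.
y_hat_hdi = az.hdi(idata.posterior_predictive["y_hat"])
```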

Hey, @drbenvincent!
Thanks a lot for your response, appreciate it!
This sheds a lot of light on how the approach works, but I’d like to follow up with one question, if you don’t mind.

Regarding the cumulative intervals growing with time:
Would it be correct to say that, if the posterior predictive distributions were (theoretically) absolutely identical, the cumulative interval would not widen with time?

PS. Just after reading your answer I came across this article of yours.
I would recommend it to anyone coming to this thread.

Also, @drbenvincent, you mentioned that you are planning to add uncertainty related to time series. Is that something you’re planning to add to the LinearRegression model, or to some model that takes autocorrelation into account?

I’ve not done any simulations to confirm, but that sounds about right.

It would be a new time series model, like an AR or BST model for example. Just waiting to get the time to do it :slight_smile:

Hey, @drbenvincent!

Thanks again for your previous responses, they’re very helpful.

I have one more question, and I thought it would be better to continue in this thread than to start a new one.

I can see that CausalPy uses mu to calculate post_impact and post_impact_cumulative.

post_impact and post_impact_cumulative are also used to calculate the credible intervals in the CausalPy examples (so the uncertainty seems to be based on mu as well).

  1. Can you please help me understand whether there’s any theoretical background to using mu rather than y_hat here?
  2. Does the difference between using mu and y_hat for credible intervals have the same interpretation as confidence intervals (mu) vs prediction intervals (y_hat) in the frequentist context?

Appreciated!

Hi @aabugaev. Sorry for the delayed reply - I have a young child so life is more hectic than it was!

So I think you’ve actually found a bug. I’ve created an issue here: Ensure causal impact is calculated from posterior predicted observations, not expected values · Issue #294 · pymc-labs/CausalPy · GitHub, which I’ll get to.
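
In case it helps anyone reading along, the distinction the issue is about can be sketched roughly like this. The function, variable names, and InferenceData layout below are assumptions for illustration, not the actual fix:

```python
# Rough sketch of the distinction (not the actual fix): impact computed from
# the expected value `mu` versus from posterior predictive observations
# `y_hat`, which also carry observation noise and so give wider intervals.
import xarray as xr

def impact_from(group: xr.Dataset, var_name: str, observed: xr.DataArray) -> xr.DataArray:
    """Observed outcome minus counterfactual prediction, per MCMC draw."""
    return observed - group[var_name]

# Hypothetical usage, assuming an InferenceData `idata` with a "mu" variable
# in the posterior group and a "y_hat" variable in posterior_predictive:
# impact_mu  = impact_from(idata.posterior, "mu", observed)                # narrower
# impact_obs = impact_from(idata.posterior_predictive, "y_hat", observed)  # wider
```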

Hey @drbenvincent! No worries!
Thanks a lot for the feedback and your previous answers. I’m excited to see where CausalPy goes!
