Is it possible to simultaneously calibrate multiple StateSpace models?

I have a statespace model I’d like to simultaneously fit to multiple cohorts (some parameters are shared across cohorts, and others are cohort-specific). I was hoping I could do something like…

mod_0 = SSModel(cohort=0)
mod_1 = SSModel(cohort=1)
mod_2 = SSModel(cohort=2)

pymc_mod = pm.Model()
with pymc_mod:
    mod_0.build_statespace_graph(data=data_0)
    mod_1.build_statespace_graph(data=data_1)
    mod_2.build_statespace_graph(data=data_2)

…and that it would work like a model with multiple likelihoods (e.g. like this).

Unfortunately, it looks like the data is not passed as “observed”: build_statespace_graph registers a named variable called data, and a model can only have one variable with a given name.

Is there an easy way around this?

I know that I could in principle write one giant statespace model that keeps track of the cohorts simultaneously, but I was really hoping to avoid that.

There’s an open issue here to add a model name prefix to models to enable this. It was previously blocked, but it no longer is; we should prioritize getting it in.

In the meantime, if you need this right now, I can think up a hack and post something. The offending function is here; an easy local hack would be to clone pymc-extras, pip install it as editable, and add a data_name argument to build_statespace_graph that forwards to register_data_with_pymc.

If you check the PR I linked there might be other places this pops up.

I’ve updated the PR following changes that were made in how the statespace model stores its variables. The new PR is ready for merging (I think) and can be found here: Named StateSpaceModel v2 – rebase and updates after #607 by opherdonchin · Pull Request #654 · pymc-devs/pymc-extras · GitHub


Very nice to see the progress on this! Would it be possible to get a rough ETA? Would you say it’s a matter of days, weeks, or months? I’m trying to decide whether I should come up with my own hacky interim solution.

It’s actively being worked on, but this is a FOSS project run by volunteers, so calibrate expectations accordingly.