# High dimensional coupled ODEs

Hi everyone!

I’m a computational modeler but relatively new to Bayesian methods and PyMC3.
I was hoping to pick your brain on a problem I’m working on:
(I’m new to the forum so I’m sorry if a similar question was asked before, I haven’t been able to find it yet.)

I have a set of around twelve coupled first-order ODEs. They depend on one another, as well as on a set of exogenous variables that I model as constants.
They are mostly of the form dY(t)/dt = par_1 * Y(t) + par_2 * X(t) + par_3 * Z(t) + par_4 * exo_var + …
The ODEs are therefore linear, but they may become nonlinear if I allow interaction terms during model selection at a later point. The total number of uncertain parameters in the model (par_i) goes up to about 70.
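For concreteness, here is a toy three-variable version of that form, integrated with SciPy (all coefficient values are made up; this is just the structure, not my actual model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-variable analogue of the system: dy_i/dt = sum_j A[i, j] * y_j + b[i] * exo
# A holds the par_* coupling coefficients, b the exogenous-variable weights.
A = np.array([[-0.5, 0.2, 0.0],
              [0.1, -0.3, 0.1],
              [0.0, 0.4, -0.6]])
b = np.array([0.1, 0.0, 0.2])
exo = 1.0  # exogenous variable, modeled as a constant

def rhs(t, y):
    # Linear right-hand side; interaction terms like y[0] * y[1] would make it nonlinear
    return A @ y + b * exo

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 50))
print(sol.y.shape)  # (3, 50): three state variables at 50 time points
```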

I have some data for most of the variables (approx. 500 individuals, with several data points over time) to calibrate these equations, but with lots and lots of missing data. And for 1-2 variables, I have no data at all. Luckily, as I understand it, PyMC3 automatically performs Bayesian imputation, so I can simply assign a prior distribution to the variables I have no data for and leave them completely unobserved.
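(For other readers: the imputation mechanism is triggered by passing a NumPy masked array as `observed`. A minimal sketch of building the mask; the PyMC3 call is shown only as a comment, since the model variables here are hypothetical.)

```python
import numpy as np

# Observations with gaps: np.nan marks missing entries.
raw = np.array([1.2, np.nan, 0.7, np.nan, 2.1])

# A masked array tells PyMC3 which entries to treat as unknowns.
y_obs = np.ma.masked_invalid(raw)
print(y_obs.mask)  # [False  True False  True False]

# Inside a model, passing the masked array as `observed` makes PyMC3
# create latent variables for the masked entries and sample them
# (i.e., automatic imputation), e.g.:
#   with pm.Model():
#       y = pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)
```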

Now I wish to estimate the posterior distribution and run simulations with this model that make reasonably accurate predictions. First of all, I’m wondering: is this feasible at all? Or do you believe I need to make additional assumptions to reduce the parameter space?

Second, I’m aware that there’s an API for ODEs, but as I understand it, it’s quite slow for high-dimensional ODEs. How would you suggest I implement this model so that it runs efficiently with the NUTS sampler?

Finally, on top of all this, as I mentioned, I also have uncertainties regarding the form of the model (e.g., should I add interaction terms, such as par * Y(t) * X(t)?). What kind of model selection procedure would you recommend using with or within PyMC3? I have looked into Bayesian model averaging, but that seems to require estimating posterior distributions for every model, which is infeasible since the number of possible model specifications is very large as well.

I would really appreciate any thoughts on this (ambitious) project!


Nobody here who can help me? Any suggestions perhaps for other forums to post this question?

I’ve had success with a similar problem (a nonlinear ODE that is a function of on the order of 10 parameters) by using surrogate modeling techniques (Multidimensional gaussian process - #52 by NateAM). I build a response surface capturing some output as a function of my incoming parameters. You could do dimensionality reduction on your outputs using PCA and on your inputs using CCA to make the surrogate model more tractable. I use the Surrogate Modeling Toolbox (SMT: Surrogate Modeling Toolbox — SMT 0.8.0 documentation) as a black-box likelihood function. I like to model the log-likelihood directly (multivariate input, scalar output), but you may have different needs.
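The bare mechanics of that response-surface idea fit in a few lines of plain NumPy: fit a Gaussian process to (parameter vector, log-likelihood) pairs and then query it cheaply. SMT wraps this up properly; this sketch uses a made-up 2-parameter toy "log-likelihood" just to show the shape of the approach:

```python
import numpy as np

def rbf(Xa, Xb, length=1.0):
    """Squared-exponential kernel between two sets of parameter vectors."""
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(40, 2))    # sampled parameter vectors
y_train = -np.sum(X_train**2, axis=1)         # toy log-likelihood surface

# GP regression: precompute weights (small jitter for numerical stability)
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def surrogate_loglike(X_new):
    """Cheap surrogate for the expensive log-likelihood."""
    return rbf(X_new, X_train) @ alpha

print(surrogate_loglike(np.array([[0.0, 0.0]])))  # should be near the true value 0.0
```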

Hi NateAM, thanks for your response! This seems like a useful possibility. Are you saying that attempting to fit a system of ODEs with, say, 100 parameters (excluding missing data) is not doable in PyMC3?
Any idea whether it could work in PyStan with CVODES, for instance?

Hi @yunus,

No, I think it’s doable, but you might have to play with it to get things working right. A good first step would be a sensitivity study on your model to determine which parameters actually matter and which ones you can drop. You could do this using SALib, with your log-likelihood as the sensitivity function. I’m not an expert on this by any means, but following their example I found success on some problems. In my case I was able to drop the number of Sobol samples from 1000 to 10. You will hopefully find that the log-likelihood isn’t really sensitive to some of the parameters, so you can drop them.
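SALib automates this, but the underlying variance-based idea is compact: a parameter matters if conditioning on it explains a large share of the output variance (the first-order Sobol index). A crude binning estimator in plain NumPy, with a toy function standing in for the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
X = rng.uniform(0, 1, size=(n, 3))   # three uncertain parameters

# Toy "log-likelihood": strong in par 0, weak in par 1, absent in par 2
Y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2] + rng.normal(0, 0.1, n)

def first_order_index(x, y, bins=20):
    """Crude estimate of Var(E[Y | X_i]) / Var(Y) by binning X_i."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"par_{i}: S1 ~ {first_order_index(X[:, i], Y):.2f}")
```

Parameters whose index is near zero are candidates for fixing at a nominal value; SALib's Saltelli sampling and Sobol analysis do this far more carefully, including higher-order interactions.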

Dropping parameters that don’t matter would also be useful when building a surrogate model (response surface). You might be able to further reduce the dimensionality of the problem using PCA or CCA, as I mentioned above. You would then have re-parameterized the problem into a space where the parameters should be fairly independent and of (hopefully) lower dimensionality. From the model form you showed above, I would be pretty worried about whether the parameters can be uniquely identified, especially with the additive exogenous-variable terms, but that will depend on your data.
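A sketch of that PCA step with plain NumPy SVD (a toy correlated parameter sample stands in for real posterior draws; the dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "parameter samples": 5 nominal parameters, but only 2 independent
# directions of variation (the other 3 are linear combinations of those 2).
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
samples = latent @ mixing + 0.01 * rng.normal(size=(500, 5))

# PCA via SVD of the centered samples
centered = samples - samples.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print(np.round(explained, 3))  # first two components carry almost all variance

# Re-parameterize: work in the 2 leading components instead of 5 raw parameters
reduced = centered @ Vt[:2].T  # shape (500, 2)
```

If the leading components explain nearly all the variance, sampling in the reduced space both shrinks the dimensionality and decorrelates the parameters, which tends to help the sampler.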

Model selection as implemented in PyMC3 will require you to develop traces for each candidate model (to my understanding). In principle you should be able to look at the outputs and, as the subject-matter expert, make educated guesses about where the model could be tweaked. I don’t think there is an automatic way to do this.

I can’t say if PyStan would be better able to handle this use case. You can always try it and see what happens. It might be that you will have to brute force it using Metropolis-Hastings if you can’t get NUTS or the Hamiltonian sampler to work. That will be a lot slower but it might also get you off the ground faster.

Thanks for the great input @NateAM. I’m very sorry for my late response. I will definitely try the sensitivity-analysis option; I already have some experience with Sobol’s method and SALib.

Unfortunately I haven’t gotten much further yet with either PyStan or PyMC3. I’ve been having compatibility issues with both. Currently I’m trying to get the sunode package to work, as I prefer the intuitive interface of PyMC3 :).

Thanks again!

Hi yunus, maybe give the approach I outlined here a try: Theano Op using JAX for lightning-fast ODE inference

It doesn’t work with NUTS, for some unknown reason, but it does work with ADVI, and it’s much faster. It’s also convenient in that you can write your ODE in a familiar way. You could do FullRankADVI to capture the (inevitable) correlations between ODE parameters.

@BioGoertz , that’s wonderful! I’m definitely going to give it a shot!
