# EulerMaruyama in v5 with multivariate timeseries

Hi there,

I have two time series $x$ and $y$ generated by an ODE: $\dot x = f_x(x, y, a)$ and $\dot y = f_y(x, y, a)$, where $a$ is some parameter.
I have access to noisy observations of $x$: $z \sim N(x, \sigma)$. Let's suppose for simplicity that I know $\sigma$.
To infer a from z I thought I could use a simple hierarchical model:
```
a   ~ SomePrior()
x_0 ~ Normal(0, 1)
y_0 ~ Normal(0, 1)

x_i ~ Deterministic(EulerStep(f_x, x_{i-1}, y_{i-1}, a, dt))
y_i ~ Deterministic(EulerStep(f_y, x_{i-1}, y_{i-1}, a, dt))

z_i ~ Normal(x_i, sigma, observed=observations of z)
```
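For concreteness, `EulerStep` above is just shorthand for a forward-Euler update (the name `euler_step` below is mine, not a PyMC API):

```python
def euler_step(f, x, y, a, dt):
    """One forward-Euler update: x_i = x_{i-1} + dt * f(x_{i-1}, y_{i-1}, a)."""
    return x + dt * f(x, y, a)

# example: pure decay dx/dt = -x, one step from x = 1.0 with dt = 0.1
x_next = euler_step(lambda x, y, a: -x, 1.0, 0.0, 0.0, 0.1)
print(x_next)  # 0.9
```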

Implementing it with PyMC is not difficult, but with 30 timesteps it is extremely slow to initialize the sampling, and with 100 timesteps it raises a "maximum recursion depth exceeded" error. However, the model does a decent job of recovering x and y (the inference of a is much worse, but I guess that is related to the small number of samples I can use without breaking the `sample` function).

Is there a way to make the model run in a reasonable amount of time? My ultimate goal is to use more data, more equations, and more parameters, but that doesn't seem feasible this way.

Secondly, the model sometimes throws a few divergences (fewer than 100); might that be related to a suboptimal prior for the parameters? I guess the problem cannot be the model design, because it is exactly the same model that generated the data; I'm only trying to fit its parameters.
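For reference, a quick prior check with plain numpy (the true value used to generate the data below is a = 0.08) shows that Gamma(1, 1) concentrates its mass around 1, well away from the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
prior_draws = rng.gamma(shape=1.0, scale=1.0, size=10_000)  # Gamma(1, 1) prior draws

print(prior_draws.mean())            # close to 1.0
print((prior_draws < 0.08).mean())   # small fraction of prior mass below the true a
```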

Anyway, digging into the time-series documentation on the website, I found the EulerMaruyama class which, if I understand correctly, would let me do for an SDE the same thing I'm trying to do for an ODE. Naively, I could think of my ODE as an SDE with extremely low noise; if the time series is not chaotic, that is a reasonable assumption. So, inspired by this notebook, which was written for v3, I tried the following code:

```python
from scipy.integrate import solve_ivp
import numpy as np
import matplotlib.pyplot as plt
import pytensor.tensor as at
import pymc as pm

A = 0.08          # true value of the parameter a
B = 0.6           # known parameter b
NL = 0.1          # observation noise level sigma
TIMESTEPS = 200

def f_x(x, y, a):
    return -x + a * y + x ** 2 * y

def f_y(x, y, a, b):
    return b - a * y - x ** 2 * y

def f_sde(x, a, b):
    # x holds the full 2D state; unpack it so f_y gets y, not x twice
    return at.stack([f_x(x[0], x[1], a), f_y(x[0], x[1], a, b)], axis=0), 1e-10

def f(t, x):
    x, y = x
    return [f_x(x, y, A), f_y(x, y, A, B)]

# generate the true time series
time = np.linspace(90, 100, TIMESTEPS)
dt = time[1] - time[0]  # uniform grid, so a scalar step size
traj = solve_ivp(f, (0, 100), y0=[1, 1], t_eval=time)

# generate noisy observations
obs = traj.y.T + np.random.normal(loc=0, scale=NL, size=traj.y.T.shape)

with pm.Model() as model2:
    a = pm.Gamma("a", 1, 1)
    xy = pm.EulerMaruyama(
        "xy", dt, f_sde, (a, B),
        init_dist=pm.Normal.dist(mu=[obs[0, 0], 1], sigma=2),
        shape=(2, TIMESTEPS),
    )
    z = pm.Normal("z", mu=xy[0, :], sigma=NL, observed=obs[:, 0])
```
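As a sanity check on the low-noise assumption: a single Euler–Maruyama step is x + f(x)·dt + g·sqrt(dt)·ε, so with the diffusion g set to 1e-10 (as in `f_sde` above) it should be numerically indistinguishable from a plain Euler step. A quick numpy sketch of my own, independent of PyMC:

```python
import numpy as np

rng = np.random.default_rng(42)

def em_step(x, drift, g, dt, rng):
    # Euler-Maruyama update: deterministic drift plus sqrt(dt)-scaled noise
    return x + drift(x) * dt + g * np.sqrt(dt) * rng.normal()

drift = lambda x: -x                          # simple decay: dx = -x dt + g dW
x0, dt = 1.0, 0.05

x_euler = x0 + drift(x0) * dt                 # plain Euler step: 0.95
x_em = em_step(x0, drift, 1e-10, dt, rng)     # EM step with tiny noise

print(abs(x_em - x_euler))  # on the order of 1e-11
```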



Sampling from this model to generate a trace gives tons of divergences and countless warnings. Some examples:

```
UserWarning: Moment not defined for variable xy of type EulerMaruyamaRV, defaulting to a draw from the prior. This can lead to difficulties during tuning. You can manually define an initval or implement a moment dispatched function for this distribution.
The rhat statistic is larger than 1.01 for some parameters. This indicates problems during sampling. See https://arxiv.org/abs/1903.08008 for details
The effective sample size per chain is smaller than 100 for some parameters. A higher number is needed for reliable rhat and ess computation. See https://arxiv.org/abs/1903.08008 for details
There were 2677 divergences after tuning. Increase target_accept or reparameterize.
```

Moreover, the inference is completely off.
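Before posting, I did verify with plain numpy (same `f_x`/`f_y` as above, outside of PyMC) that stacking the two drifts over a (2, N) state keeps the expected shape; a minimal sketch of that check:

```python
import numpy as np

def f_x(x, y, a):
    return -x + a * y + x ** 2 * y

def f_y(x, y, a, b):
    return b - a * y - x ** 2 * y

state = np.ones((2, 5))  # stand-in for the (2, TIMESTEPS) latent state
drift = np.stack(
    [f_x(state[0], state[1], 0.08),       # dx/dt, from the x and y components
     f_y(state[0], state[1], 0.08, 0.6)], # dy/dt
    axis=0,
)
print(drift.shape)  # (2, 5)
```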

So I guess I'm doing something wrong, but I have no clue what. I tried to look at the docs of the EulerMaruyama class, but they are quite succinct. I looked on the forum as well, but most posts refer to v3, and the class seems to have changed a bit since that version; at least, the notebook I mentioned doesn't run fine on my laptop with v5.

Thank you very much for the help