# Calibrate model of time series, likelihood, iid hypothesis

Hello
I’m trying to calibrate a time series model.
I’m using a normal likelihood, as in the following simple illustration:

```python
import pymc3 as pm
import numpy as np

x = np.array([0, 5, 10])
data = x * 10 + 2

with pm.Model() as model1:
    var = pm.Normal('var', mu=6, sd=3)  # prior on the intercept
    mu = pm.Deterministic('mu', 10 * x + var)
    y_obs = pm.Normal('y_obs', mu=mu, sd=1, observed=data)
    trace = pm.sample(5000, tune=1000, chains=2, cores=2)

pm.traceplot(trace)
resu = pm.stats.summary(trace)
```

However, I think the hypothesis on the likelihood is not very good. Here I assume the data are iid (independent and identically distributed), but this doesn’t account for the fact that a new observation very close in time to the existing data, such as [x=11, y=11*10+2], will be more expected, and therefore less informative, than a data point further away, such as [x=15, y=15*10+2].
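To make the point concrete, here is a small sketch (using `scipy.stats.norm`, not part of my model above) showing that under the iid normal likelihood each observation contributes its own independent term, so a point close in time to the data is weighted exactly like a point far away:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical new observation times: one close to the existing data
# (x=11) and one further away (x=15).
x_new = np.array([11.0, 15.0])
y_new = 10 * x_new + 2          # values generated by the same line
mu = 10 * x_new + 2             # model mean, taking var = 2

# Each point's log-likelihood contribution is computed independently:
# the iid likelihood has no notion of temporal proximity.
loglik = norm.logpdf(y_new, loc=mu, scale=1)
print(loglik)  # the two contributions are identical
```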

I don’t really know how I could take that into account. Do you have any suggestions?

Thank you
Victor

Hi Victor,
I’m not sure I understand your question very well (I’m not an expert on time series), but here you’ll find PyMC’s time series distributions. Maybe a Gaussian random walk would be useful here?
Hope this helps & PyMCheers 