I am trying to infer some physical properties using experimental data.
Say, inverse inference of three parameters from experimental data measured at equal time intervals.
So the measured data, a row or column vector, is essentially a time series.
In this case, if I don't account for the autocorrelation among data points, the posteriors depend strongly on the time interval of the data I use for the likelihood evaluation.
Actually, the posteriors were quite odd when I used a very fine time interval with a Normal likelihood.
In this case, what is the best way to address this issue?
My intuition is to use a multivariate Normal distribution (MvNormal) for the likelihood and specify a covariance matrix that describes the correlation as a function of time lag, but I am not sure whether this is the best option.
I assumed an exponential decrease of the autocorrelation.
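Concretely, that exponential-decay assumption corresponds to a correlation of the form exp(-|t_i - t_j| / ℓ) for some correlation length ℓ. A minimal NumPy sketch (the length-scale `ell` and noise scale `sigma` here are illustrative placeholders, not values from my actual model):

```python
import numpy as np

def exp_decay_cov(times, ell=1.0, sigma=1.0):
    """Covariance with exponentially decaying autocorrelation:
    cov[i, j] = sigma**2 * exp(-|t_i - t_j| / ell)."""
    lags = np.abs(times[:, None] - times[None, :])  # pairwise |t_i - t_j|
    return sigma**2 * np.exp(-lags / ell)

times = np.linspace(0.0, 5.0, 6)  # equally spaced measurement times
cov = exp_decay_cov(times, ell=2.0, sigma=0.3)
```

On an equally spaced grid this matrix is Toeplitz, which is why the `scipy.linalg.toeplitz` construction below works.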
```python
import numpy as np
import pymc3 as pm
from scipy.linalg import toeplitz

with pm.Model() as model:
    # Priors for the three parameters to infer
    param_A = pm.Normal('A', mu=5, sd=0.5)
    param_B = pm.Normal('B', mu=0.5, sd=0.05)
    param_C = pm.Uniform('C', lower=2.0, upper=4.0)

    # Expected value of the outcome from the prediction model
    # (the priors must feed into the forward model for inference to work)
    prediction = forward_model(param_A, param_B, param_C, input_D, measured_time)

    # Covariance matrix describing the assumed exponential autocorrelation
    exponents = np.linspace(0, 30, len(measured_time))
    elements = np.exp(-exponents)
    covar = toeplitz(elements, elements)

    # Likelihood that accounts for autocorrelation
    Y_obs = pm.MvNormal('Y_obs', mu=prediction, cov=covar, observed=measured_data)
```
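One thing I noticed while testing: because `np.linspace(0, 30, len(measured_time))` spreads the decay exponents over the *index* positions, the implied correlation at a fixed physical lag changes whenever the sampling interval changes. Building the covariance from the actual time lags avoids that; a hedged sketch, where `ell` is a placeholder length-scale I would still have to choose or infer:

```python
import numpy as np

def cov_from_times(times, ell=1.0, sigma=1.0):
    # Correlation depends on the physical lag |t_i - t_j|, not on index
    # distance, so refining the grid leaves correlations at fixed lags intact.
    lags = np.abs(np.subtract.outer(times, times))
    return sigma**2 * np.exp(-lags / ell)

coarse = np.arange(0.0, 4.0, 1.0)  # dt = 1.0
fine = np.arange(0.0, 4.0, 0.5)    # dt = 0.5, same time span
cov_c = cov_from_times(coarse, ell=2.0)
cov_f = cov_from_times(fine, ell=2.0)
# correlation between t = 0 and t = 1 is identical on both grids
assert np.isclose(cov_c[0, 1], cov_f[0, 2])
```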
Any suggestions for solving this problem elegantly?
Thank you in advance!