Hi There,

right now I am trying to get a Gaussian Process for interpolation and forecasting to work. What I am struggling with (maybe a knot in my head) is how to model the following problem. I hope someone can help unwind the knot.

I have a set of measurements over time for multiple devices under test (DUTs): a set of J DUTs was measured at the same condition S_i and the output y_ij was tracked over time. The time period T depends on the condition S_i, so t ∈ T_i, and the time delta between two measurement times is not constant. The mathematical formulation of the problem should be something like y_ij(t ∈ T_i, S_i) ~ GP(m(t_i, S_i), k(t_i, S_i)) with m(t_i, S_i) = 0 and k(t_i, S_i) = k1(t_i, t_i*) + k2(S_i, S_i*). What I want to accomplish is to predict an outcome y_new for a time t_new and condition S_new.

Is this the right approach? If so, I am not sure how to set up the correct model for this problem. If someone could give me a hint I would be very happy.

Best, riv

It might help to “stack” your problem into one big tall `y` vector, instead of casting it as `y_ij`.

I think then your data `y` would have a shape of (n x 1) and look like

```
y = (
(dut_1, T_1),
(dut_1, T_2),
...
(dut_1, T_n),
(dut_2, T_1),
(dut_2, T_2),
...
(dut_2, T_n),
...
)
```
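As a concrete sketch of the stacking in NumPy (DUT names, time grids, and readings below are made-up placeholders), with uneven per-DUT time grids:

```python
import numpy as np

# Hypothetical example: two DUTs, each measured on its own (uneven) time grid.
times = {
    "dut_1": np.array([0.0, 0.5, 1.5]),
    "dut_2": np.array([0.0, 1.0, 2.0, 4.0]),
}
conditions = {"dut_1": 10.0, "dut_2": 20.0}  # stress condition S_i per DUT
observations = {
    "dut_1": np.array([1.00, 0.90, 0.75]),
    "dut_2": np.array([1.00, 0.70, 0.50, 0.25]),
}

# Stack into one tall y (n x 1) and a matching X (n x 2),
# where the X columns are (time, condition).
X = np.vstack([
    np.column_stack([t, np.full_like(t, conditions[d])])
    for d, t in times.items()
])
y = np.concatenate([observations[d] for d in times])[:, None]

print(X.shape, y.shape)  # (7, 2) (7, 1)
```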

Your `X` variable would be set up similarly, (n x 2). Then you have a 2D GP with inputs time and condition. Depending on how many data points you have, this could be slow. If it’s possible for you to use a covariance `k(t_i, S_i) = k1(t_i, t_i*) * k2(S_i, S_i*)` (multiply instead of add), then it’s separable and you can use the Kronecker implementation, which should greatly speed things up for you. Please let me know if I didn’t quite understand your question!
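To see why the product kernel is separable, here is a small NumPy check (toy grids and lengthscales are made up): the covariance of a product kernel evaluated on the full (time × condition) grid equals the Kronecker product of the two one-dimensional covariance matrices:

```python
import numpy as np

def rbf(a, b, ls):
    # Squared-exponential kernel between 1-D input vectors a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

t = np.linspace(0.0, 1.0, 4)   # time grid
s = np.array([1.0, 2.0, 3.0])  # condition grid

K_t = rbf(t, t, ls=0.5)
K_s = rbf(s, s, ls=1.0)

# Full covariance on the grid with the product kernel k1(t,t')*k2(s,s')...
grid = np.array([(ti, si) for si in s for ti in t])
K_full = rbf(grid[:, 0], grid[:, 0], 0.5) * rbf(grid[:, 1], grid[:, 1], 1.0)

# ...equals the Kronecker product of the two small matrices.
print(np.allclose(K_full, np.kron(K_s, K_t)))  # True
```

This is what lets Kronecker-structured implementations work with the two small (4 x 4) and (3 x 3) matrices instead of the full (12 x 12) one.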

Thanks for the reply! First, I guess multiplying the covariance functions might also be possible. The Kronecker-structured covariance matrix was probably the hint I needed, so in my case `gp.MarginalKron` should hopefully work. Thank you very much!

Regarding the stacking: do I understand correctly that you mean e.g. y[0] = ( [y_dut1(t_0), …, y_dut1(t_n)], t ∈ T_1)? Then my question would be why the time steps are part of the observations.

Maybe to clarify the aim here: I am trying to model a degradation process under accelerated lifetime testing. At each stress condition S_i, a set of J slightly different time series was recorded; let’s call them y_j = f(t, S_i) = exp(alpha_i * t), where alpha_i is the decay rate. For each S_i, alpha can be assumed to be N(µ_i, sigma_i) (inter-sample variation), and with increased stress, S_{i-1} < S_i also implies µ_{i-1}, sigma_{i-1} < µ_i, sigma_i (higher stress means faster decay).
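A quick NumPy simulation of this setup (the stress levels, µ_i, sigma_i, and replicate count are invented for illustration; the exponent is written with a minus sign so that a positive alpha means decay):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-condition decay-rate distributions: higher stress S_i
# gives a larger mean decay rate mu_i (faster degradation).
mu = {"S1": 0.1, "S2": 0.3, "S3": 0.8}
sigma = {"S1": 0.01, "S2": 0.03, "S3": 0.08}

t = np.linspace(0.0, 5.0, 50)
J = 4  # replicate DUTs per condition

curves = {}
for S in mu:
    # alpha_j ~ N(mu_i, sigma_i) captures the inter-sample variation.
    alpha = rng.normal(mu[S], sigma[S], size=J)
    # One row per DUT: y_j(t) = exp(-alpha_j * t).
    curves[S] = np.exp(-alpha[:, None] * t[None, :])
```

Each `curves[S]` is a (J x len(t)) block of replicate decay curves; stacked together these form the tall `y` from the earlier reply.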

In this regard I have two follow-up questions. In the final application there will be about 5000 observations with a test structure of 4 different condition types (t, S1, S2, S3), yielding a y of shape (5000 x 1) and an X of shape (5000 x 4). This would result in a covariance matrix with a dimensionality somewhere around (1e15 x 1e15). Would it be a suitable approach (however that may work) to first model an individual GP for each f_j(t) at a given set of (S1, S2, S3), and then model a GP over those individual GPs? This would drastically reduce the dimensionality of the covariance matrix and thus speed up the process. I hope you get my idea.

Thanks already for the input!

Re: your stacking question, what I meant was the definition of the vec operator at the top of this page, where it says vec(A): Vectorization (mathematics) - Wikipedia
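For reference, the vec operator stacks the columns of a matrix into one tall vector, and it interacts with the Kronecker product via the identity (A ⊗ B) vec(X) = vec(B X Aᵀ). A quick NumPy check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 3))

# vec stacks the columns of X into one tall vector (column-major order).
vecX = X.reshape(-1, order="F")

# Kronecker identity: (A kron B) vec(X) = vec(B X A^T).
lhs = np.kron(A, B) @ vecX
rhs = (B @ X @ A.T).reshape(-1, order="F")
print(np.allclose(lhs, rhs))  # True
```

This identity is what lets Kronecker implementations avoid ever forming the big matrix explicitly.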

> In this regard I have two follow-up questions. In the final application there will be about 5000 observations with a test structure of 4 different condition types (t, S1, S2, S3), yielding a y of shape (5000 x 1) and an X of shape (5000 x 4). This would result in a covariance matrix with a dimensionality somewhere around (1e15 x 1e15). Would it be a suitable approach (however that may work) to first model an individual GP for each f_j(t) at a given set of (S1, S2, S3), and then model a GP over those individual GPs? This would drastically reduce the dimensionality of the covariance matrix and thus speed up the process.

As annoying as it is to answer a question with a question, I think it depends on whether there are any relationships between the 4 condition types. Is there covariance between the S_i, or can you treat each time series as an independent draw from the same GP, like S_i ~ GP(0, K(t, t'))?

It depends on your computer, but with that dataset size you may want to look into approximations too, or other time series models. MAP estimation may work fine, but NUTS will start to be a bit slow…