Hi all,

I am trying to use a GP + softmax as a transition model for hidden states:

```python
with pm.Model() as model:
    ### Prior over the initial state
    p_init_states = pm.Dirichlet("p_init_s", a=[3, 2, 1])

    # GP for state 1:
    l1 = pm.Gamma("l1", alpha=2, beta=1, shape=n_input_s)
    eta1 = pm.HalfCauchy("eta1", beta=5)
    cov1 = eta1**2 * pm.gp.cov.ExpQuad(n_input_s, l1)
    gp1 = pm.gp.Latent(cov_func=cov1)
    # GP for state 2:
    l2 = pm.Gamma("l2", alpha=2, beta=1, shape=n_input_s)
    eta2 = pm.HalfCauchy("eta2", beta=5)
    cov2 = eta2**2 * pm.gp.cov.ExpQuad(n_input_s, l2)
    gp2 = pm.gp.Latent(cov_func=cov2)
    # GP for state 3:
    l3 = pm.Gamma("l3", alpha=2, beta=1, shape=n_input_s)
    eta3 = pm.HalfCauchy("eta3", beta=5)
    cov3 = eta3**2 * pm.gp.cov.ExpQuad(n_input_s, l3)
    gp3 = pm.gp.Latent(cov_func=cov3)

    ### Initial states
    init_states = pm.Categorical("s_0", p=p_init_states, shape=len(train_ids))
    all_states = pt.set_subtensor(all_states[:, 0], init_states)
    all_states_variables_list.append(init_states)

    for t in range(1, n_timesteps):
        # State transition: one GP draw per candidate next state,
        # conditioned on (series id, previous state, previous action)
        input_state_t = pt.stack(
            [
                pt.as_tensor(train_ids.ravel()),
                all_states[:, t - 1],
                pt.as_tensor(action_ts_train.iloc[:, t - 1].tolist()),
            ],
            axis=1,
        )
        f1 = gp1.prior(f"f1_{t}", input_state_t)
        f2 = gp2.prior(f"f2_{t}", input_state_t)
        f3 = gp3.prior(f"f3_{t}", input_state_t)
        sm = pm.math.softmax(pt.stack([f1, f2, f3], axis=1), axis=1)
        s = pm.Categorical(f"s_{t}", p=sm)
        all_states = pt.set_subtensor(all_states[:, t], s)
        all_states_variables_list.append(s)

    ## ... emission process and observations of the true emissions ...
```
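
To make the intended generative process concrete, here is a minimal NumPy sketch of a single transition step: one latent draw per candidate next state from an exponentiated-quadratic GP, passed through a softmax, then a categorical draw. The kernel hyperparameters and the toy inputs here are purely illustrative, not the ones in my model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expquad(X, ls, eta):
    # Exponentiated-quadratic (RBF) covariance with per-dimension lengthscales
    d2 = ((X[:, None, :] - X[None, :, :]) / ls) ** 2
    return eta**2 * np.exp(-0.5 * d2.sum(-1))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy inputs at one timestep: (series id, previous state, previous action),
# one row per series (hypothetical values)
X_t = np.array([[0.0, 1.0, 0.0], [1.0, 2.0, 1.0], [2.0, 0.0, 1.0]])

# One independent latent GP draw per candidate next state
K = expquad(X_t, ls=np.ones(3), eta=1.0) + 1e-6 * np.eye(3)
f = np.stack(
    [rng.multivariate_normal(np.zeros(len(X_t)), K) for _ in range(3)], axis=1
)

p_next = softmax(f, axis=1)  # per-series transition probabilities, rows sum to 1
s_next = np.array([rng.choice(3, p=p) for p in p_next])  # sampled next states
```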

While digging into this topic, I came to understand `pm.gp.Latent` better, and I noticed that each call to `.prior()` overwrites the stored `given_vals`.

Because of this, I am now quite uncertain about the validity of this model, since I cannot condition on the full observed time series.

Did I understand correctly that my current implementation is wrong because it resets the `given_vals` at every timestep? If so, are you aware of a better way to implement this with GPs?

Many thanks for your help!