Yes, I'm happy to share my code. I thought about this for a while; the only reason I hesitated is that posting Stan code in the PyMC3 community might come across as impolite.
Here is my code:
data {
  int<lower=0> N;                    // number of trials
  int<lower=0, upper=1> y[N];        // binary outcome on each trial
}
parameters {
  real k;                            // log-volatility of the v process
  real<lower=0.1, upper=0.9> r[N];   // per-trial reward probability
  real v[N];                         // per-trial (log) volatility
}
model {
  k ~ uniform(-10, 10);
  for (t in 1:N) {
    if (t == 1) {
      v[t] ~ uniform(0, 100);
      r[t] ~ normal(0.5, 0.3);
    } else {
      v[t] ~ normal(v[t-1], exp(k));
      r[t] ~ beta_proportion(r[t-1], exp(v[t]));
    }
    y[t] ~ bernoulli(r[t]);
  }
}
Thank you for the information. I will try it.
Actually, I have tried a bunch of HMM packages, but it is still hard to implement Behrens' model with them.
That's not something to worry about. There is actually a lot of cross-talk between PyMC3 and Stan (mostly us stealing ideas from them).
Only now did I realize what the article's model is about.
It's modelling the change in Bayesian belief from trial to trial, assuming some latent dynamics that are being learned online. It's not a conventional HMM. That's why things move slowly (with some bursts of change) in those plots from the optlearner class.
That's also why k is allowed to vary over trials: it is the posterior belief at each trial that is being modeled, not k itself, which would otherwise be a fixed parameter over the course of the experiment.
To obtain equivalent results one would need to plot the posterior for a model fit with only the first trial, then the first two trials, the first three, and so on up to the last fit, which includes all the trials. Or marginalize over these updates, as is done in the paper and the optlearner class.
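A minimal sketch of that growing-prefix refitting idea, using a conjugate Beta-Bernoulli model as a stand-in (the real Behrens model has latent volatility dynamics, so this only illustrates how the posterior at trial t conditions on trials 1..t; the function name and priors here are my own choices, not from the paper):

```python
import numpy as np

def prefix_posteriors(y, a0=1.0, b0=1.0):
    """For each trial t, return the Beta posterior over a fixed Bernoulli
    rate after observing trials 1..t (conjugate update on each prefix)."""
    y = np.asarray(y)
    successes = np.cumsum(y)                 # successes up to each trial
    trials = np.arange(1, len(y) + 1)
    a = a0 + successes                       # Beta shape a after each prefix
    b = b0 + (trials - successes)            # Beta shape b after each prefix
    return a, b                              # posterior mean is a / (a + b)

y = [1, 1, 0, 1, 1, 1, 0, 1]
a, b = prefix_posteriors(y)
means = a / (a + b)                          # one posterior summary per prefix
```

Plotting one posterior per prefix (e.g. the Beta densities for each `(a[t], b[t])`) gives the trial-by-trial belief trajectory the post above describes.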
I think you are right.
And I understand why PyMC3 (or Stan) cannot implement this model easily. I originally expected those programs to carry the whole distribution forward between trials, but in fact they only draw samples from the posterior, and PyMC3 cannot update the distribution dynamically from trial to trial.
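For what it's worth, carrying "the whole distribution between trials" can be sketched as a grid filter. This is my own illustrative stand-in, not PyMC3 or Stan machinery and not the optlearner implementation: keep a discretized posterior over the rate, diffuse it slightly each trial to allow drift (the drift scale here is an arbitrary assumption), then multiply in the Bernoulli likelihood:

```python
import numpy as np

def grid_filter(y, n_grid=101, drift_sd=0.05):
    """Track a full discretized posterior over a drifting Bernoulli rate.
    Each trial: predict (diffuse the grid), then update (Bayes rule)."""
    r = np.linspace(0.001, 0.999, n_grid)    # grid over the rate
    post = np.ones(n_grid) / n_grid          # flat prior over the grid
    # Gaussian transition kernel implementing the drift step
    K = np.exp(-0.5 * ((r[:, None] - r[None, :]) / drift_sd) ** 2)
    K /= K.sum(axis=1, keepdims=True)        # rows: P(next rate | current rate)
    history = []
    for obs in y:
        post = K.T @ post                    # predict: the rate may have drifted
        post /= post.sum()
        lik = r if obs else (1.0 - r)        # Bernoulli likelihood on the grid
        post = post * lik
        post /= post.sum()                   # update: renormalize
        history.append(post.copy())
    return r, np.array(history)

r, hist = grid_filter([1, 1, 1, 0, 1, 1, 1, 1])
```

Each row of `hist` is a full posterior after that trial, which is exactly the per-trial object an MCMC sampler does not hand you directly.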