Unfortunately, it seems this question went unanswered. Also unfortunately, there are a couple of things missing that make the code unrunnable, so I had to fill them in with my best guess. After adding
```python
R_idx = tasks_syn_idx
```
(or maybe `R_arr_syn` is a better choice, looking at the data-generating code) and replacing `R_codes.size` with 1000, the code runs without complaints. Since a `pm.sample` call is not included in the posted code, I also don't know what settings the OP has tried, but when I do
```python
pm.sample(1000, tune=1000, chains=6, cores=6)
```
it starts out slow (initial estimate around 30 minutes) but then picks up pace and looks set to finish in around 10-15 minutes. It does produce divergences, however, so perhaps I am not using what the OP intended for `R_idx`. If `R_idx` was instead meant to be a set of discrete parameters, that could be one source of the slowdown (mixing discrete and continuous parameters). Also, about 3-4 months ago a fix was implemented for an issue that sometimes caused slowdowns in `MvNormal`, so that could be another reason. If you have fully working code, perhaps you can open another topic and ask again.
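For reference, this is roughly how I counted the divergences (a minimal sketch; `model` here stands in for the OP's model, which wasn't named in the post):

```python
import pymc as pm

with model:  # assumed name for the OP's model context
    idata = pm.sample(1000, tune=1000, chains=6, cores=6)

# divergent transitions are recorded per draw in sample_stats
n_div = int(idata.sample_stats["diverging"].sum())
print(f"{n_div} divergent transitions")
```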
PS: Looking at the data-generating code more carefully, perhaps it was not correct to set `R_codes.size` to 1000, since that determines the number of rows of `beta`, and the equivalent object in the data generation is seemingly `beta_syn`, which has shape 2x3. With this change the code finishes in 20 seconds; however, there are still a lot of divergences, even with a high `target_accept`. The posteriors for `beta` do seem to recover the true parameters, but the posterior predictive plot (and the divergences) suggest something is wrong. So, apart from some remodelling/checking of the model, the code above now runs quite fast.
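In case it helps, this is roughly how I raised `target_accept` and produced the posterior predictive plot (again a sketch, assuming the OP's model context is named `model`):

```python
import arviz as az
import pymc as pm

with model:  # assumed name for the OP's model context
    # a higher target_accept forces smaller step sizes, which sometimes removes divergences
    idata = pm.sample(1000, tune=1000, chains=6, cores=6, target_accept=0.95)
    # draw posterior predictive samples and attach them to the same InferenceData
    idata.extend(pm.sample_posterior_predictive(idata))

# compare the observed data against the posterior predictive distribution
az.plot_ppc(idata)
```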