Looping through random variables

Thanks. Yes, that’s what I ended up doing. Instead of looping, I used matrix multiplication, which has the same effect.

My updated model looks like this:

skills = pm.Bernoulli('skills', p=0.5*np.ones((22, 7)), shape=(22,7))

hasAllSkillsForQuestionP = tt.dot(skills, df_skills_per_q_t.T.values)    

hasAllSkillsForQuestion = pm.Bernoulli('hasAllSkillsForQuestion', p=hasAllSkillsForQuestionP, shape=(22,len(df_correct.columns)))

isCorrectQuestion = pm.Bernoulli('isCorrectQuestion', p=pm.math.switch(hasAllSkillsForQuestion, 0.9, 0.2), observed=df_correct, shape=(22, 48))
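To make the shapes concrete, here is a small numpy sketch of the same matrix multiplication with dummy stand-ins for my data (22 students, 7 skills, 48 questions; the real values come from df_skills_per_q_t):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy 0/1 stand-ins for the real data
skills = rng.integers(0, 2, size=(22, 7))        # student x skill indicators
skills_per_q = rng.integers(0, 2, size=(48, 7))  # question x required-skill indicators

# Same product as tt.dot(skills, df_skills_per_q_t.T.values):
# entry (i, j) counts how many of question j's listed skills student i has
has_skills = skills @ skills_per_q.T

print(has_skills.shape)  # (22, 48)
```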

But now the problem has shifted to inference. When I run inference for 1000 steps, PyMC3 selects BinaryGibbsMetropolis: [skills, hasAllSkillsForQuestion], and when I inspect the resulting trace it looks really bad: most posteriors have NaN for n_eff (which I understand to be the effective number of samples), and the rest have an n_eff of only 3 or 4.
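By n_eff I mean the effective sample size. Here is a rough numpy sketch of the quantity as I understand it (a crude autocorrelation-based estimate of my own, not PyMC3’s exact implementation); note that a completely stuck chain has zero variance, which would explain a NaN:

```python
import numpy as np

def ess(chain):
    """Crude effective sample size: N / (1 + 2 * sum of positive autocorrelations)."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    centered = chain - chain.mean()
    var = centered.var()
    if var == 0:
        return np.nan  # a stuck chain yields no usable information
    # Normalized autocorrelation at lags 0..n-1
    acf = np.correlate(centered, centered, mode='full')[n - 1:] / (var * n)
    # Sum autocorrelations until they first go non-positive
    tau = 1.0
    for rho in acf[1:]:
        if rho <= 0:
            break
        tau += 2 * rho
    return n / tau

# A well-mixing chain vs. a slowly flipping binary chain
rng = np.random.default_rng(1)
good = rng.normal(size=1000)
stuck = np.repeat([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], 100)  # long runs of the same value
print(ess(good), ess(stuck))
```

The slowly flipping chain gets a far smaller effective sample size than the well-mixing one, which matches the n_eff of 3 or 4 I see for the skills variables.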

Do you know what might be happening, and how I can do better inference?

In the book they use message-passing algorithms for inference, but I understand that PyMC3 doesn’t implement them.