Is this an appropriate way to model IRT in PyMC3?

I treat the responses to each item as a separate observed variable. Is this a proper way to solve the problem from the thread "IRT w/ PyMC3, unusual number of parameters modeled?"
I changed part of the code as follows:

import numpy as np
import pymc3 as pm

# observed responses: 5 students x 6 questions (1 = correct)
k = np.array([[1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 0, 1],
              [1, 1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 0, 1]])

students = 5
questions = 6

with pm.Model() as model:
    # student ability: multilevel (hierarchical) prior
    sigma_student = pm.HalfNormal('sigma_student', sigma=1, shape=1)
    mu_student = pm.Normal('mu_student', mu=0, sigma=1, shape=1)
    z_student = pm.Normal('z_student', mu=mu_student, sigma=sigma_student, shape=students)

    # question difficulty: single-level prior
    sigma_question = pm.HalfNormal('sigma_question', sigma=1, shape=1)
    z_question = pm.Normal('z_question', mu=0, sigma=sigma_question, shape=questions)

    # likelihood: one Bernoulli observation variable per question
    for question in range(questions):
        p = pm.Deterministic('p{}'.format(question),
                             pm.math.sigmoid(z_student - z_question[question]))
        kij = pm.Bernoulli('kij{}'.format(question), p=p, observed=k[:, question])

    trace = pm.sample(chains=4)
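As a side note on the loop above: the per-question likelihood can equivalently be written with array broadcasting. A minimal NumPy sketch of the shapes involved (the values below are made-up placeholders, not posterior draws):

```python
import numpy as np

# Placeholder values standing in for the model's z_student (shape (5,))
# and z_question (shape (6,)) variables.
z_student = np.array([2.0, 1.0, 0.0, -1.0, -2.0])
z_question = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, -2.0])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Broadcasting a (5, 1) column against a (6,) row yields the full
# (5, 6) matrix of success probabilities in one expression, no loop.
p = sigmoid(z_student[:, None] - z_question[None, :])
print(p.shape)  # (5, 6)
```

Since `pm.math.sigmoid` and `pm.Bernoulli` also accept tensor arguments, the same `(5, 6)` expression should work directly inside the model, replacing the six named variables with a single observed `pm.Bernoulli('kij', p=p, observed=k)`; this sketch only demonstrates the broadcasting, not the full sampler run.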

By the way, I would like to ask: is there any container in PyMC3 that can be sliced?

This model looks really good! For containers, just use Pandas DataFrames, NumPy arrays, or anything else you would normally use. My only suggestion is to avoid half-normal priors. Gamma, log-normal, and similarly shaped priors are more realistic because they let you rule out extremely small standard deviations: I can confidently say that the probability that all human beings are equally good at this test is 0%, so your prior should rule that out. The half-normal also has extremely thin tails. Setting more informative priors could help you get better results here.
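To illustrate the point about half-normal versus gamma priors, here is a small NumPy sketch of the two densities near zero (the Gamma(2, 1) below is just an illustrative choice, not a recommendation for this model):

```python
import numpy as np

sigma = np.array([1e-3, 0.1, 1.0])

# Density of HalfNormal(1) at sigma: sqrt(2/pi) * exp(-sigma^2 / 2).
# Its mode is at sigma = 0, so tiny standard deviations get the
# *highest* prior density.
halfnormal_pdf = np.sqrt(2 / np.pi) * np.exp(-sigma**2 / 2)

# Density of Gamma(alpha=2, beta=1) at sigma: sigma * exp(-sigma).
# It vanishes as sigma -> 0, ruling out "everyone is identical".
gamma_pdf = sigma * np.exp(-sigma)

print(halfnormal_pdf)  # largest at the smallest sigma
print(gamma_pdf)       # near zero at the smallest sigma
```

The contrast is the whole argument: the half-normal concentrates mass at implausibly small sigma, while the gamma pushes it away from zero.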

All of the above assumes the real sample size you plan to use is about as large as the one in this example. If you plan to use a larger sample, say more than 100 students, the priors are going to matter a lot less than the very thin tails on your normal likelihood, which you should probably check as well.

Thanks for your advice, it’s very helpful.
About the container: I want to manipulate it like this (as in Stan or JAGS):

x = [1, 2, 3]  # a list used as a container
x[0] = 10      # change an element of the list

I can’t do this in PyMC3. Is there another way to achieve it?
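A minimal sketch of the distinction: plain NumPy arrays used as data containers support exactly the Stan/JAGS-style assignment shown above, while the random variables inside a PyMC3 model are symbolic Theano tensors, which cannot be assigned to in place; there, Theano's `set_subtensor` returns a new tensor with the element replaced instead of modifying the original.

```python
import numpy as np

# A plain NumPy array works as a sliceable, assignable data container
# (e.g. for observed data passed into a PyMC3 model).
x = np.array([1, 2, 3])
x[0] = 10  # element assignment works, just as in Stan or JAGS
print(x)   # [10  2  3]
```

For symbolic quantities inside the model, the equivalent operation would be something like `theano.tensor.set_subtensor(x[0], 10)`, which yields a new tensor rather than mutating `x`.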