Help translating from JAGS to PyMC3

I’m new to PyMC3, having previously used JAGS. To get started with PyMC3 I wanted to convert one of my existing JAGS scripts, but I’ve run into several difficulties. Here is a simplified diagram of the hierarchical model I want.

The 50 refers to 50 people, and the 14,000 refers to the total number of experimental trials (summing across people).

Here is my working JAGS model:

  for (i in 1:ntrials) {  # for every trial...
      # x is the observed data, modeled with a von Mises distribution
      x[i] ~ dvonmises(mn[i], 60)
      mn[i] <- ifelse(z[i] == 1, m[i,1] + mushift[subj[i]], m[i,2])
      z[i] ~ dbern(pt[subj[i]])
  }

  for (j in 1:nsubj) {  # for every subject...
      pt[j] ~ dbeta(CmA, CmB)
      mushift[j] ~ dnorm(Cshift, Ctau)
  }

  # Hyper-priors
  CmA ~ dexp(.1)
  CmB ~ dexp(.1)

  Cshift ~ dnorm(0, .5)
  Ctau ~ dgamma(2, .01)

The subj variable is a vector of length 14,000 that records which of the 50 people each trial came from (e.g., subj[1000] = 42 means the 1000th trial came from person #42).
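As a toy illustration of how this kind of lookup vector maps subject-level parameters onto trials (hypothetical small sizes, not the real 14,000 × 50 data; note Python uses 0-based indices where JAGS uses 1-based):

```python
import numpy as np

# Hypothetical toy sizes: 3 subjects, 6 trials
nsubj, ntrials = 3, 6
subj = np.array([0, 0, 1, 2, 2, 1])   # which subject each trial belongs to

mushift = np.array([0.1, -0.2, 0.3])  # one value per subject

# Fancy indexing expands the subject-level values to trial length
per_trial_shift = mushift[subj]
print(per_trial_shift)  # [ 0.1  0.1 -0.2  0.3  0.3 -0.2]
```

This is the same indexing trick the answer below relies on.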

The difficulty I have is with the trial-based for loop at the top. I want the mu parameter of the VonMises distribution to change depending on whether z is 0 or 1. Here is my attempt so far to code it using PyMC3:

with pm.Model() as hierarchical_model:
    # Hyper-priors (note: JAGS dnorm takes a precision, so use tau=, not sd=)
    Cshift = pm.Normal('Cshift', mu=0, tau=.5)
    Ctau = pm.Gamma('Ctau', 2, .01)

    CmA = pm.Exponential('CmA', .1)
    CmB = pm.Exponential('CmB', .1)

    # Subject-level
    pt = pm.Beta('pt', alpha=CmA, beta=CmB, shape=nsubj)
    mushift = pm.Normal('mushift', mu=Cshift, tau=Ctau, shape=nsubj)

    # Trial-level
    # z = pm.Bernoulli('z', pt, shape=ntrials)
    # mn = pm.math.switch(z, m[i, 0] + mushift[subj[i]], m[i, 1])
    y = pm.VonMises('y', mu=mn, kappa=60, observed=x)

Lines are commented out where I’ve been facing problems. Can PyMC3 support this sort of hierarchical model? Does anyone have any advice or resources to point me towards? Thanks very much for any help.

You can get around the for loop by just using subj as an index:

z = pm.Bernoulli('z', pt[subj], shape=ntrials)
mn = pm.math.switch(z, m[:, 0] + mushift[subj], m[:, 1])

The unobserved Bernoulli is not such a great idea, however, because you can’t use NUTS alone when the model contains unobserved discrete parameters. The samplers will still do something, but in many cases they won’t give you good posterior samples. You can avoid that by marginalizing out z:
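To see what marginalizing means here: summing the likelihood over z ∈ {0, 1} turns the latent-indicator model into a two-component mixture density, p(x) = pt·VonMises(x; m₁+mushift, κ) + (1−pt)·VonMises(x; m₂, κ). A quick numeric sketch for a single trial with scipy (all parameter values made up):

```python
import numpy as np
from scipy.stats import vonmises

# Made-up values standing in for one trial's parameters
p = 0.3              # pt[subj[i]], the probability that z == 1
mu1, mu2 = 0.5, 1.2  # component means (with and without mushift)
kappa = 2.0
x = 0.8              # observed angle in radians

# Marginal density: sum over the two values of z
marginal = (p * vonmises.pdf(x, kappa, loc=mu1)
            + (1 - p) * vonmises.pdf(x, kappa, loc=mu2))
```

This weighted sum is exactly what pm.Mixture computes below, which is why NUTS can then sample the remaining continuous parameters.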

import numpy as np
import pymc3 as pm
import theano.tensor as tt

subj = [0, 0, 1, 1]
nsubj = 2
m = np.array([[1., 2]] * 4)

with pm.Model():
    Cshift = pm.Normal('Cshift', mu=0, tau=.5)
    Ctau = pm.Gamma('Ctau', 2, .01)

    CmA = pm.Exponential('CmA', .1)
    CmB = pm.Exponential('CmB', .1)

    # Subject-level
    pt = pm.Beta('pt', alpha=CmA, beta=CmB, shape=nsubj)
    mushift = pm.Normal('mushift', mu=Cshift, tau=Ctau, shape=nsubj)

    # Two mixture components: z == 1 (with mushift) and z == 0
    dists = [
        pm.VonMises.dist(mu=m[:, 0] + mushift[subj], kappa=np.deg2rad(60)),
        pm.VonMises.dist(mu=m[:, 1], kappa=np.deg2rad(60)),
    ]
    # Per-trial mixture weights: P(z == 1) and P(z == 0)
    w = tt.stack([pt[subj], 1 - pt[subj]])
    pm.Mixture('y', w.T, dists, observed=np.deg2rad([60, 70, 80, 90]))
    trace = pm.sample()

Thanks so much for the help. The marginalization of z so that NUTS can be used is super helpful!

Unfortunately, regarding using subj as an index to get around the for loop: my variable m is a 14,000 × 2 matrix, not 50 × 2. In other words, each trial (out of 14,000 trials) has a different value for m[trial,1]; it’s not constant across all trials for each subject (although m[:,0] is a constant value, if that matters). So if I use the code as you adjusted it, I get a shape problem: “Input dimension mis-match. (input[0].shape[0] = 14000, input[1].shape[0] = 50)”. Do you know if there is a workaround for this?

Can you post example data for the variables? I don’t get it yet. 🙂


I think I realized on my own what the problem was. I just needed to do some data transformations before inputting them to the model. I think I can make things work now with some extra time. Thanks so much for the guidance!
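For anyone who hits the same “Input dimension mis-match” error: the usual fix is to make sure every term in the trial-level expression has length ntrials, indexing the subject-level arrays with subj before combining them with trial-level arrays. A minimal shape check with toy sizes (hypothetical data, not the original):

```python
import numpy as np

ntrials, nsubj = 6, 3
subj = np.array([0, 0, 1, 2, 2, 1])  # trial -> subject lookup
m = np.random.randn(ntrials, 2)      # per-trial means, shape (6, 2)
mushift = np.random.randn(nsubj)     # per-subject shifts, shape (3,)

# Both operands now have length ntrials, so they combine cleanly
mu_component1 = m[:, 0] + mushift[subj]  # shape (6,)
mu_component2 = m[:, 1]                  # shape (6,)
```

The same shapes carry over to the theano expressions inside the model.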
