Fatal Python error: Segmentation fault

Hello everyone,
I am struggling with a segmentation fault while using PyMC 4. Any help will be very much appreciated.
I have defined a custom likelihood with pm.Potential, and I am trying to determine the posterior probability of my parameters, given their priors. I am using variational inference.
The problem is that the model runs fine for a small dataset (up to 10 trials), but once I increase the number of trials I get the error shown in the image below.
[screenshot of the error: "Fatal Python error: Segmentation fault"]

Here is a part of my code:

```python
import pymc as pm

with pm.Model() as surprise_model:
    # model priors (truncated normals via pm.Bound)
    lamb = pm.Bound("lamb", pm.Normal.dist(0.1, 0.2), lower=0.0001, upper=0.5)
    gamma = pm.Bound("gamma", pm.Normal.dist(2, 2), lower=1, upper=7)  # could also be uniform
    tau = pm.Bound("tau", pm.Normal.dist(2, 1), lower=1, upper=4)
    epsilon = pm.Bound("epsilon", pm.Normal.dist(0.1, 0.2), lower=0.1, upper=1)

    # custom log-likelihood added as a potential term
    pm.Potential("likelihood", logp(lamb, gamma, tau, epsilon))

    # variational Bayes inference
    advi = pm.ADVI()
    approx = advi.fit(20000)
```

Just a quick explanation of my logp function: it takes the model parameters (the priors defined above) as inputs and returns the log-likelihood summed over trials.
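
To illustrate the pattern I am using (with a made-up Gaussian example, not my actual model): pm.Potential simply adds the given expression to the model's joint log-density, so the output of my logp plays the role of the log-likelihood term.

```python
import numpy as np
import pymc as pm

fake_data = np.array([0.1, 0.5, 0.3])  # made-up observations

with pm.Model() as demo:
    mu = pm.Normal("mu", 0, 1)
    # hand-written Gaussian log-likelihood; pm.Potential adds this scalar
    # expression to the model's joint log-density
    loglike = pm.math.sum(-0.5 * (fake_data - mu) ** 2)
    pm.Potential("likelihood", loglike)
```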

Can you use triple backticks (```) to format your code?

Hi,
Here it is.

```python
with pm.Model() as surprise_model:
    # model priors
    lamb = pm.Bound("lamb", pm.Normal.dist(0.1, 0.2), lower=0.0001, upper=0.5)
    gamma = pm.Bound("gamma", pm.Normal.dist(2, 2), lower=1, upper=7)  # could also be uniform
    tau = pm.Bound("tau", pm.Normal.dist(2, 1), lower=1, upper=4)
    epsilon = pm.Bound("epsilon", pm.Normal.dist(0.1, 0.2), lower=0.1, upper=1)

    # custom log-likelihood
    pm.Potential("likelihood", logp(lamb, gamma, tau, epsilon))

    # variational Bayes inference
    advi = pm.ADVI()
    approx = advi.fit(20000)
```
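
As a side note on how I inspect the result: once the fit completes, I draw samples from the fitted approximation (assuming approx.sample returns an InferenceData object, as in PyMC 4):

```python
# draw posterior samples from the fitted mean-field approximation
idata = approx.sample(1000)
print(idata.posterior["tau"].mean())
```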

We'll need to see that logp function, ideally with everything needed to reproduce the problem.

Also, how did you install PyMC, and which version are you using?

Thank you very much for your reply.
I simplified my logp function to use only one parameter, tau. My behavioural data are the sequences of choices participants made. In every trial, I calculate the action probabilities at the correct choices (using tau, which needs to be estimated), average them, and take the log. The output of logp is the sum of these trial log-probabilities across 10 trials.
calculate_action_probabilities is the helper function called inside logp.

These are the commands I used to install PyMC 4 on Linux:

```
conda create -c conda-forge -n pymc_env "pymc>=4"
conda activate pymc_env
```
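
To check which version that actually installed, I can run this from the activated environment:

```
python -c "import pymc; print(pymc.__version__)"
```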

```python
import numpy as np

def calculate_action_probabilities(path_surprise, tau):
    # softmax over the tau-scaled surprise values
    base = [np.exp(s * tau) for s in path_surprise]
    action_probabilities = [np.exp(tau * s) / sum(base) for s in path_surprise]
    return action_probabilities
```
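
A side note: the same softmax can be written in a numerically safer way by subtracting the maximum before exponentiating (a standard trick; the result is identical up to floating-point error). This version is just a sketch, not the code I am running:

```python
import numpy as np

def calculate_action_probabilities_stable(path_surprise, tau):
    # subtracting the max keeps np.exp from overflowing for large tau * surprise
    z = tau * np.asarray(path_surprise, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return (e / e.sum()).tolist()
```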


```python
def logp(tau):
    # `data`, `index`, `states`, `grid`, and `state_priors` are defined
    # elsewhere in the script
    LL = 0  # accumulated log-likelihood across trials

    for trial in range(10):
        # 2) from the behavioural data, get the trial information, such as
        # the sender path, goal configuration, and starting location
        path_beh = data[index]['m'].squeeze()[trial]  # path selected by the sender
        sender_path = [states[v - 1] for v in path_beh.squeeze()]
        goal_conf = data[index]['GC'].squeeze()[trial]
        goal_r = states[goal_conf.squeeze()[1] - 1]
        ind_goal_r = sender_path.index(goal_r)

        planned_path_phase2 = sender_path[ind_goal_r + 1:]
        states_for_the_phase2 = sender_path[ind_goal_r + 1:-1]

        path_ap = []  # action probability at the sender's choice, per step
        for ind, current_state in enumerate(states_for_the_phase2):
            transition_states = list(grid.neighbors(current_state))
            action_probabilities = calculate_action_probabilities(state_priors, tau)
            sender_choice = planned_path_phase2[ind + 1]
            aprob_at_sender_choice = action_probabilities[transition_states.index(sender_choice)]
            path_ap.append(aprob_at_sender_choice)

        # average the step probabilities, then accumulate the trial log-probability
        trial_LL = np.sum(path_ap) / len(path_ap)
        LL = LL + np.log(trial_LL)

    return LL


with pm.Model() as surprise_model:
    # model prior
    tau = pm.Bound("tau", pm.Normal.dist(2, 1), lower=1, upper=4)

    # custom log-likelihood
    pm.Potential("likelihood", logp(tau))

    # variational Bayes inference
    advi = pm.ADVI()
    approx = advi.fit(20000)
```
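
One thing I check after fitting (a sketch, assuming the `approx.hist` attribute that PyMC's approximations expose): the negative ELBO history should flatten out if ADVI converged within the 20000 iterations.

```python
import matplotlib.pyplot as plt

# negative ELBO per iteration; a flat tail suggests convergence
plt.plot(approx.hist)
plt.xlabel("iteration")
plt.ylabel("negative ELBO")
plt.show()
```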