I have a hierarchical logistic regression with some missing data. Currently, I’m just calling `pm.sample` and letting it assign step methods automatically:

```python
with maskingModel:
    trace = pm.sample(60000, chains=6, cores=6, tune=30000,
                      nuts={'target_accept': .99},
                      init="jitter+advi+adapt_diag")
```

This returns the following:

```
NUTS: [Local Bias in Estimates, Campaign effects, Time effects, Constants, Local Bias Caused by Campaign, Var(Bias from Campaign), Bias from Campaign, Var(Est), Bias of Est, Effect Variance, Average Campaign Effect, Var(Time Effect), Average Time Effect, Var, Mean]
Metropolis: [Counts_missing]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 14 seconds.
There were 50 divergences after tuning. Increase `target_accept` or reparameterize.
There were 65 divergences after tuning. Increase `target_accept` or reparameterize.
There were 24 divergences after tuning. Increase `target_accept` or reparameterize.
There were 96 divergences after tuning. Increase `target_accept` or reparameterize.
The rhat statistic is larger than 1.05 for some parameters. This indicates slight problems during sampling.
```

So it looks like the imputation of the missing counts didn’t go well: the discrete `Counts_missing` variable falls back to Metropolis, and I get divergences and high rhat. How can I change the number of tuning and draw iterations that actually get used, and get rid of the divergences? Are there step methods other than Metropolis that can be used for the discrete missing values? I think particle Gibbs should be doable here, but I don’t know whether/how PyMC3 implements it.