https://discourse.pymc.io/t/the-number-of-effective-samples-is-smaller-than-25-for-some-parameters
Directly quoting from the above post: a low number of effective samples is usually an indication of strong autocorrelation in the chain. You can plot the trace autocorrelation, or look at the trace plot directly, to check whether the sampler is getting stuck or simply not exploring enough. Usually this is improved by switching to a more efficient sampler like NUTS, but here you are already using NUTS.

Have you tried raising target_accept slightly above the default of 0.8, say to 0.95, and/or increasing the number of tuning and sampling draws? For example, does pm.sample(5000, tune=15000, target_accept=0.95) still produce fewer than 200 effective samples? If all that fails, could it be that you simply don't have enough observations for this model? You could mock up some fake data so you can run the model with more observations and see whether that explains it. You could also try more informative priors. My initial concern is that your model seems to be running suspiciously fast, finishing in only 27 seconds.
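In case it helps, here is a minimal sketch of those checks using PyMC3 with ArviZ; `model` is a placeholder for whatever pm.Model() you already have defined:

```python
import arviz as az
import pymc3 as pm

with model:  # placeholder for your existing pm.Model()
    # Re-run with more tuning/draws and a higher target_accept.
    trace = pm.sample(5000, tune=15000, target_accept=0.95)
    idata = az.from_pymc3(trace)

# Strong autocorrelation or a sampler that gets stuck should be
# visible in these two plots.
az.plot_autocorr(idata)
az.plot_trace(idata)

# Effective sample sizes per parameter.
print(az.summary(idata)[["ess_bulk", "ess_tail"]])
```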
Also note that PyMC3's effective sample size warning may be overly sensitive. STAN only warns when the n_eff for a parameter is less than 1% of the total number of draws.
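If you want to apply that 1% rule of thumb yourself rather than rely on the warning, a quick sketch (reusing the hypothetical `idata` from the snippet above):

```python
# Flag parameters whose bulk ESS falls below 1% of the total number
# of posterior draws (chains * draws per chain), as in STAN's check.
summary = az.summary(idata)
total_draws = idata.posterior.dims["chain"] * idata.posterior.dims["draw"]
low_ess = summary[summary["ess_bulk"] < 0.01 * total_draws]
print(low_ess)
```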
Sorry I can't offer more specific advice, but this is what I've come across when trying to diagnose this kind of warning before.