You need to be careful with divergences. The first thing to check is whether they are clustering in some region of the parameter space. Using ArviZ you can draw a pair plot that highlights the divergent draws.
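As a minimal sketch (assuming trace is the InferenceData returned by pm.sample(), and that "mu" and "tau" stand in for your own variable names):

import arviz as az

# divergences=True overlays the divergent draws on top of the posterior samples,
# so you can see whether they concentrate in one region (e.g. a funnel or a
# boundary) or are scattered throughout the posterior
az.plot_pair(trace, var_names=["mu", "tau"], divergences=True)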

Here you can see they are mostly spread out rather than clustered, so the next step is to raise target_accept in pm.sample():
trace = pm.sample(target_accept=0.95)
The default value is 0.8; if PyMC gives you the divergence warning, typical values to try are 0.90, 0.95 and 0.99.
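A rough sketch of that loop (assuming model is your existing model context and trace is the InferenceData from a recent PyMC, which stores divergence flags in sample_stats):

import pymc as pm

# count how many transitions diverged in the current trace
n_div = int(trace.sample_stats["diverging"].sum())
print(f"{n_div} divergent transitions")

# if divergences persist, raise target_accept again; the smaller step sizes
# make sampling slower but less prone to diverging
with model:
    trace = pm.sample(target_accept=0.99)

If even target_accept=0.99 still leaves divergences, that usually points to a problem with the model parameterisation itself rather than the sampler settings.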
If you want to get deeper into what divergences are and how they bias your inference, there is a very in-depth case study written by the Stan community (Stan is a similar, C++-based Bayesian probabilistic programming language). It is a resource I find myself coming back to for many different modelling issues:
https://betanalpha.github.io/assets/case_studies/divergences_and_bias.html