To paraphrase comments Aki Vehtari made after his keynote at last year’s PyMCon, the rule of thumb is that divergences are nothing to worry about as long as you have fewer than about one of them. In other words, no number of divergences is safe to ignore.
With a small number of divergences, I typically assume that I can fix things with small modifications to the sampling parameters (e.g., the target acceptance rate), but I always verify this by actually implementing those modifications. With more divergences, you may have a more pathological issue on your hands. PyMC3 and ArviZ provide plenty of tools for investigating divergences so that you can diagnose what is going on.
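As a minimal sketch of both steps, the snippet below uses the classic eight-schools data with a centered hierarchical parameterization, a textbook source of divergences; the model itself is illustrative, not from the original discussion. Raising `target_accept` in `pm.sample` forces smaller step sizes, which often resolves a handful of divergences, and `az.plot_pair` with `divergences=True` shows where in parameter space the divergent transitions cluster.

```python
import arviz as az
import numpy as np
import pymc3 as pm

# Classic eight-schools data: a toy example that reliably produces
# divergences under the centered parameterization.
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 5.0)
    tau = pm.HalfCauchy("tau", 5.0)
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=8)
    pm.Normal("obs", mu=theta, sigma=sigma, observed=y)

    # Raising target_accept (default 0.8) shrinks the step size,
    # which is often enough to eliminate a small number of divergences.
    trace = pm.sample(target_accept=0.95, return_inferencedata=True)

# How many transitions diverged?
print(int(trace.sample_stats.diverging.sum()))

# Overlay divergent transitions on a pair plot to see where in parameter
# space they concentrate (e.g., the "funnel" near tau ~ 0).
az.plot_pair(trace, var_names=["mu", "tau"], divergences=True)
```

If the divergences all pile up in one region of the pair plot, as they do near `tau ~ 0` here, that usually points to a geometry problem that a reparameterization, rather than a larger `target_accept`, will fix.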