Welcome!
So first things first. You should ignore all aspects of this result because of the large number of divergences (visually indicated by the black tick marks on the x-axis; PyMC3 would also have warned about them at the terminal). Once divergences appear, the details of the posterior can't really be interpreted.
Two quick observations about your model. First, your priors over the alpha and beta parameters imply extremely precise/strong beliefs about the value of p_B. Depending on how much data you have available and your specific needs, this may or may not be desirable. Second, uniform priors often cause sampling difficulties because of the hard bounds they create at each end of the interval.
Without knowing exactly what your data looks like, my suggestion would be to weaken the priors and probably re-specify the model so that your parameters characterize the mean and SD of the beta distribution rather than alpha and beta. Because beta distributions have support over [0, 1], an uninformative prior over the mean is somewhat easier to select. For the SD, you can choose a prior with support over the positive reals and build in as much confidence as you wish. Something like a gamma prior is fairly conventional for scale parameters in hierarchical models. Here is a sketch of something that may be useful (beware, extremely untested):
with pm.Model() as comparing_days:
    mu = pm.Uniform('mu', lower=0, upper=1)
    sd = pm.Gamma('sd', alpha=1, beta=10)
    # note: the (mu, sigma) parametrization requires sigma**2 < mu * (1 - mu)
    p_B = pm.Beta('p_B', mu=mu, sigma=sd)
    obs = pm.Binomial('obs', n=df_cr.trials,
                      p=p_B, observed=df_cr.success)
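In case it helps intuition, the mean/SD parametrization is just moment matching: a beta distribution with mean mu and SD sd corresponds to alpha = mu * kappa and beta = (1 - mu) * kappa, where kappa = mu * (1 - mu) / sd**2 - 1 is the total concentration. This only works when sd**2 < mu * (1 - mu), which is why a too-wide SD prior can misbehave. A quick sketch (the helper name is mine, not a PyMC3 function):

```python
def beta_params_from_moments(mu, sd):
    """Recover (alpha, beta) of a beta distribution from its mean and SD.

    Only valid when sd**2 < mu * (1 - mu); otherwise no beta
    distribution has those moments.
    """
    var = sd ** 2
    assert var < mu * (1 - mu), "SD too large for a beta distribution"
    kappa = mu * (1 - mu) / var - 1  # total concentration alpha + beta
    return mu * kappa, (1 - mu) * kappa

# e.g. a mean of 0.3 with SD 0.05 implies a fairly concentrated beta
a, b = beta_params_from_moments(0.3, 0.05)
print(a, b)  # roughly 24.9, 58.1
```

Playing with this for a moment makes it clear how sharply your original alpha/beta choices were pinning down p_B.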