What does zero logp mean?

My model (below) reports a logp of 0 throughout the entire trace. What does that mean? It seems impossible that the observed data could have zero likelihood under every one of the sampled parameter sets.

The model uses a sum of two binomials, similar to the model of @ggluck, and I took @Nadheesh's advice to model the sum itself as a normal distribution:

    with pm.Model() as model:
        winnable_p = pm.Uniform('winnable_p', 0, 1)
        skill = pm.Beta('skill', 2.0, 5.0, shape=sales_professional_count)
        winnable = pm.Binomial('winnable', opportunities, winnable_p, shape=sales_professional_count)
        unwinnable = pm.Deterministic('unwinnable', opportunities - winnable)
        unskilled_perceived_winnability = pm.Uniform('unskilled_perceived_winnability', 0, 1)
        winnable_pursuit_p = pm.Deterministic(
            'winnable_pursuit_p',
            unskilled_perceived_winnability * (1 - skill) + skill)
        unwinnable_pursuit_p = pm.Deterministic(
            'unwinnable_pursuit_p',
            unskilled_perceived_winnability * (1 - skill))
        winnable_pursued = pm.Binomial('winnable_pursued', winnable, winnable_pursuit_p, shape=sales_professional_count)
        unwinnable_pursued = pm.Binomial('unwinnable_pursued', unwinnable, unwinnable_pursuit_p, shape=sales_professional_count)
        pursuits = pm.Normal('pursuits', winnable_pursued + unwinnable_pursued, sd=1, observed=true_pursuits)
        winnable_won_p = pm.Deterministic('winnable_won_p', skill)
        wins = pm.Binomial('wins', winnable_pursued, winnable_won_p, observed=true_wins)
        trace = pm.sample(25000, step=pm.NUTS())
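(For what it's worth, the normal-for-a-sum-of-binomials step seems sane in isolation. Here is a quick stdlib-only sanity check of the moment matching, using made-up counts rather than my actual data:)

```python
import random

random.seed(0)

def binomial(n, p):
    """Draw one Binomial(n, p) sample as a sum of Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

# Illustrative numbers only (not the model's actual data):
n1, p1 = 50, 0.6   # stand-in for "winnable pursued"
n2, p2 = 30, 0.2   # stand-in for "unwinnable pursued"

draws = [binomial(n1, p1) + binomial(n2, p2) for _ in range(20_000)]

mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)

# Moment-matched normal: mean = n1*p1 + n2*p2 = 36,
# variance = n1*p1*(1-p1) + n2*p2*(1-p2) = 16.8
print(mean, var)  # roughly 36 and 16.8
```

So a moment-matched normal tracks the simulated sum closely; whether `sd=1` is the right spread for my data is a separate question.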

But the model logp is reported as zero from beginning to end:

What is this newb doing wrong?

Added: maybe PyMC3 cannot derive a closed-form likelihood for the sum of two binomials, as @junpenglao suggests in Sum of two Binomial Distributions. Is the Approximate Bayesian Computation (ABC) package mature enough to use for this? Or is this a bad idea altogether?
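(To be clear about the closed-form point: the sum of two independent binomials with different p is not itself a binomial, but its exact pmf is still computable by convolution. A small stdlib sketch, again with illustrative counts rather than my real data:)

```python
from math import comb

def binom_pmf(n, p):
    """PMF of Binomial(n, p) as a list indexed by count k = 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def sum_pmf(a, b):
    """Exact PMF of the sum of two independent discrete variables (convolution)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, pa in enumerate(a):
        for j, pb in enumerate(b):
            out[i + j] += pa * pb
    return out

# Illustrative counts only:
n1, p1 = 50, 0.6
n2, p2 = 30, 0.2
pmf = sum_pmf(binom_pmf(n1, p1), binom_pmf(n2, p2))

mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
# These match the formulas n1*p1 + n2*p2 and n1*p1*(1-p1) + n2*p2*(1-p2).
print(mean, var)  # 36.0 and 16.8 (up to float error)
```

So the distribution exists and is tractable to evaluate; the question is whether PyMC3 can use it as a likelihood, or whether the normal approximation (or ABC) is the practical route.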