Hello,

I am trying to predict football scores (thrilling, I know) for my studies.

There is quite a bit of literature in this field, and everyone uses Poisson regression (dependent or independent) to predict the goals for the home and the away team, then calculates the match outcome from these distributions. So far, so easy.

My problem right now is that the model doesn't predict any draws, meaning P(goals_home == goals_away) is never higher than P(goals_home > goals_away) or P(goals_home < goals_away). This problem isn't unique to my model, but it seems more pronounced here: across roughly 600 predicted test matches, the model produces 0 draw predictions.
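As a quick sanity check (my own illustration, not part of the model below), this draw deficit is partly baked into the independent-Poisson setup itself. Here is the outcome split for two independent Poisson goal rates, computed with SciPy:

```python
import numpy as np
from scipy.stats import poisson

def outcome_probs(lam_home, lam_away, max_goals=15):
    """Return (P_home_win, P_draw, P_away_win) under independent Poissons."""
    p_home = poisson.pmf(np.arange(max_goals + 1), lam_home)
    p_away = poisson.pmf(np.arange(max_goals + 1), lam_away)
    joint = np.outer(p_home, p_away)       # joint[i, j] = P(home = i, away = j)
    p_draw = np.trace(joint)               # diagonal: equal scores
    p_home_win = np.tril(joint, -1).sum()  # home goals > away goals
    p_away_win = np.triu(joint, 1).sum()   # home goals < away goals
    return p_home_win, p_draw, p_away_win
```

Even for two perfectly matched teams with rates of 1.5 goals each, the draw probability comes out around 0.24 versus roughly 0.38 for each win, so an argmax-style prediction will essentially never pick the draw.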

There are a few proposed solutions, but most authors seem to agree that modelling the goals via Poisson is the way to go (sadly not for me). Another proposed solution is modelling the goal difference via a Skellam distribution (the Poisson difference distribution) and, to tackle the scarcity of draws, using a zero-inflated Skellam distribution. This sounds logical to me; unfortunately, PyMC doesn't offer the Skellam distribution, and I doubt I am capable of implementing it myself (if that is even possible).
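For what it's worth, the Skellam log-pmf is compact enough that implementing it by hand may be less daunting than it sounds: PyMC's `pm.CustomDist` accepts a user-supplied `logp`. Below is a sketch of that log-pmf in plain NumPy/SciPy, so it can be checked against `scipy.stats.skellam`; a PyMC version would need the same expression written with pytensor operations, which I have not verified here.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_v

def skellam_logpmf(k, mu1, mu2):
    """log P(X1 - X2 = k) for independent X1 ~ Poisson(mu1), X2 ~ Poisson(mu2).

    pmf = exp(-(mu1 + mu2)) * (mu1 / mu2)**(k / 2) * I_|k|(2 * sqrt(mu1 * mu2))
    Using ive(v, x) = iv(v, x) * exp(-x) keeps the computation stable.
    """
    x = 2.0 * np.sqrt(mu1 * mu2)
    return (-(mu1 + mu2) + x
            + 0.5 * k * (np.log(mu1) - np.log(mu2))
            + np.log(ive(np.abs(k), x)))
```

A zero-inflated version would then mix a point mass at zero (the extra draws) with this pmf, analogous to how `pm.ZeroInflatedPoisson` is built.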

This is my model right now; please feel free to criticize everything, even if it is not directly related to the draw problem at hand.

```python
import numpy as np
import pymc as pm

with pm.Model() as independent_poisson:
    pm_features = pm.Data("pm_features", features, mutable=True)
    pm_gi = pm.Data("pm_gi", gi, mutable=True)
    pm_form = pm.Data("pm_form", form, mutable=True)
    pm_goals = pm.Data("pm_goals", goals, mutable=True)
    pm_elo = pm.Data("pm_elo", elo, mutable=True)
    coefs_features = pm.Normal('coefs_features',
                               mu=[[0.5, 0], [0.5, 0], [0, 0.5], [0, 0.5]],
                               sigma=[[2, 0.001], [2, 0.001], [0.001, 2], [0.001, 2]],
                               shape=(features.shape[1], 2))
    coefs_elo = pm.Normal('coefs_elo', mu=[1, 1], sigma=[0.5, 0.5], shape=(1, 2))
    coefs_form = pm.Normal('coefs_form', mu=[[1, -0.5], [-0.5, 1]],
                           sigma=[[0.5, 0.05], [0.05, 0.5]], shape=(form.shape[1], 2))
    factor = pm.Dirichlet("factor", a=np.ones(3))
    intercepts = pm.Normal("intercepts", shape=2)
    log_lam = pm.Deterministic("log_lam",
                               intercepts + factor[0] * (pm_elo @ coefs_elo) +
                               factor[1] * (pm_form @ coefs_form) +
                               factor[2] * (pm_features @ coefs_features))
    lam = pm.math.exp(log_lam)
    obs = pm.Poisson("obs", mu=lam, observed=pm_goals)
```

I also tried a dependent Poisson by modelling the correlation via pm.MvNormal, but with no more success.
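One more option from the same literature, since the goal here is specifically more draws: the Dixon-Coles adjustment rescales the joint probabilities of the four low-scoring results (0-0, 1-0, 0-1, 1-1) with a single dependence parameter rho; a negative rho inflates 0-0 and 1-1 at the expense of 1-0 and 0-1. Below is a post-processing sketch on top of independent Poisson rates (my own illustration; in Dixon & Coles's formulation, rho is fitted jointly with the rest of the model rather than fixed by hand):

```python
import numpy as np
from scipy.stats import poisson

def dixon_coles_matrix(lam_home, lam_away, rho, max_goals=10):
    """Joint score-probability matrix with the Dixon-Coles low-score adjustment."""
    home = poisson.pmf(np.arange(max_goals + 1), lam_home)
    away = poisson.pmf(np.arange(max_goals + 1), lam_away)
    joint = np.outer(home, away)  # joint[i, j] = P(home = i, away = j)
    # Rescale only the four low-scoring cells (Dixon & Coles, 1997):
    joint[0, 0] *= 1 - lam_home * lam_away * rho
    joint[0, 1] *= 1 + lam_home * rho
    joint[1, 0] *= 1 + lam_away * rho
    joint[1, 1] *= 1 - rho
    return joint / joint.sum()  # renormalise over the truncated grid
```

The draw probability is the trace (diagonal sum) of this matrix; with a negative rho it rises relative to the plain independent-Poisson model, which is exactly the missing mass I am after.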

So my general question might be: is there a way to produce rates that are closer together (maybe don't use exp?), or are there other approaches, like the zero-inflated Skellam, to produce more draws?

Thank you very much in advance for your help!