I created my own MMM and validated it against the PyMC MMM by comparing my outputs with the MMM Example Notebook — the results match up to differences from the random number generator.

It worked well except for the beta and lam parameters of the second channel, which deviated substantially between my model and the PyMC MMM Example Notebook. After some testing I concluded that the model used in the example suffers from non-identifiable beta / lam:

The PyMC MMM Example Notebook's true parameters for the second channel's beta-scaled saturated adstock contribution are:

```
beta = 3
alpha = 0.2
lam = 3
```

However, you get a very similar solution for:

```
beta = 12.5
alpha = 0.2
lam = 0.5
```

I assume this is mainly because the decay is simply not relevant for the second channel: its spend is not spread out enough to provide a good learning signal.
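The trade-off can be demonstrated with a small NumPy sketch. I am assuming the standard logistic saturation `(1 - exp(-lam*x)) / (1 + exp(-lam*x))` and an illustrative narrow spend interval `[0.4, 0.6]` (not the notebook data): when the input only covers a narrow range, a very different lam can be compensated almost exactly by rescaling beta.

```python
import numpy as np

def logistic_saturation(x, lam):
    # standard logistic saturation curve: (1 - exp(-lam*x)) / (1 + exp(-lam*x))
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

# Spend confined to a narrow band -> weak signal about the curvature (lam)
x = np.linspace(0.4, 0.6, 50)
target = 3.0 * logistic_saturation(x, lam=3.0)   # "true" parameter set

# Take a lam that is off by a factor of 6 and solve for the beta
# that best matches the target contribution (least-squares scale)
g = logistic_saturation(x, lam=0.5)
beta_fit = (target @ g) / (g @ g)

resid = np.max(np.abs(beta_fit * g - target))
print(f"compensating beta: {beta_fit:.2f}")
print(f"max deviation: {resid:.3f} ({resid / target.max():.1%} of peak)")
```

On this narrow interval the rescaled curve stays within a few percent of the true contribution, so the likelihood barely distinguishes the two parameter sets — beta and lam trade off against each other.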

When I reproduce the beta-scaled saturated adstock contribution with both parameter sets on the example notebook data (interval 25:50), it looks like this:

```
import matplotlib.pyplot as plt
# import path may differ across pymc-marketing versions
from pymc_marketing.mmm.transformers import geometric_adstock, logistic_saturation

# Parameter set used to generate the notebook data
beta_true = 3
alpha_true = 0.2
lam_true = 3

# Alternative parameter set that yields a near-identical contribution
beta_false = 12.5
alpha_false = 0.2
lam_false = 0.5

adstocked_true = geometric_adstock(x=df["x2"].to_numpy(), alpha=alpha_true, l_max=8, normalize=True).eval().flatten()
channel_true = beta_true * df["y"].max() * logistic_saturation(x=adstocked_true, lam=lam_true).eval()
plt.plot(range(0, 25), channel_true[25:50])

adstocked_false = geometric_adstock(x=df["x2"].to_numpy(), alpha=alpha_false, l_max=8, normalize=True).eval().flatten()
channel_false = beta_false * df["y"].max() * logistic_saturation(x=adstocked_false, lam=lam_false).eval()
plt.plot(range(0, 25), channel_false[25:50])
```
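For readers without pymc-marketing installed, the normalized geometric adstock used above can be sketched in plain NumPy. This assumes the standard definition (a truncated geometric decay kernel convolved with spend); the library's edge handling may differ slightly:

```python
import numpy as np

def geometric_adstock_np(x, alpha, l_max=8, normalize=True):
    """Convolve spend with a (normalized) geometric decay kernel."""
    w = alpha ** np.arange(l_max)       # weights 1, alpha, alpha^2, ...
    if normalize:
        w = w / w.sum()                 # kernel sums to 1
    return np.convolve(x, w)[: len(x)]  # truncate to input length

# A single impulse of spend decays geometrically over l_max periods
impulse = np.zeros(12)
impulse[0] = 1.0
print(geometric_adstock_np(impulse, alpha=0.2))
```

With alpha = 0.2 the kernel decays so fast that almost all weight sits on the first lag, which is why the decay contributes so little learning signal for this channel.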

**How can one get more confident about a model that could fool you into quadrupling a channel's contribution?**