Hi All,
Throwing this out there at the risk of embarrassing myself with a lack of understanding of the sampling process, but I have a strange bug I can’t seem to solve. It’s a long one.
I’m building a Media Mix Model similar to many others. For those unfamiliar, it is a big linear regression model where each explanatory variable is first adjusted by two functions before being multiplied by its coefficient.
By sampling, I am hoping to find the most likely values for each parameter of the functions in order to recreate the simple regression model and predict on new data.
The ultimate problem: when I use the posterior means as parameters to recreate one of the curves, the outcome is strangely lower than the posterior prediction of the outcome (target). The parameters for the other curve and the main regression coefficients work perfectly and calculate to the same value as the posterior prediction.
The code
Versions:
PyMC = 4.0.0b5 // Aesara = 2.5.1 // Arviz = 0.12.0
The data is marketing spend (or impressions) on a weekly basis for each channel (TV, radio, print). These values are transformed by two functions before being added to the regression.
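In equation form (my own notation, not from any particular paper), the model I’m after is roughly:

outcome_t = intercept + Σ_c β_c · Hill(Adstock(spend_{c,t})) + Σ_k γ_k · control_{k,t} + noise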
The first function is an Adstock decay curve. This lets the model capture the impact of marketing that arrives in the weeks after the exposure happened:
import aesara.tensor as at

def weibull_adstock_at(x_t, alpha: float, theta: int = 0, L=12, normalize=True):
    """
    Builds the Weibull adstock decay curve and applies it across the array.
    """
    # Weight for lag l is alpha ** ((l - theta) ** 2), so the weights peak at lag theta
    w = alpha ** ((at.ones(L).cumsum() - 1 - theta) ** 2)
    # Stack L lagged copies of the series into an (L, T) matrix
    xx = at.stack([at.concatenate([at.zeros(i), x_t[:x_t.shape[0] - i]]) for i in range(L)])
    if not normalize:
        y = at.dot(w, xx)
    else:
        # Normalize the weights so the total carryover effect sums to 1
        y = at.dot(w / at.sum(w), xx)
    return y
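As a quick sanity check (my own toy example, not part of the model), a single burst of spend should get smeared forward over the following weeks and peak at lag theta:

import numpy as np

# Toy example (mine): 100 units of spend in week 0, nothing after
spend = np.zeros(12)
spend[0] = 100.0
adstocked = weibull_adstock_at(spend, alpha=0.7, theta=2, L=12).eval()
print(adstocked.round(2))  # mass is spread forward and peaks at lag theta=2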
The next is a saturation curve (spoiler alert: this is my problem curve). I have used the Hill function from this widely cited paper.
def hill_saturation_build(x, alpha, gamma):
    # gamma is the half-saturation point: Hill(gamma) = 0.5 for any alpha
    x_s_hill = x**alpha / (x**alpha + gamma**alpha)
    return x_s_hill
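A quick numeric check of that function (again my own example): gamma is the half-saturation point, so the curve passes through 0.5 exactly at x = gamma, and alpha controls how steep the S-curve is around it:

import numpy as np

# Toy example (mine): evaluate the curve at a few points with gamma = 0.5
x = np.array([0.1, 0.5, 1.0])
print(hill_saturation_build(x, alpha=2.0, gamma=0.5))
# ≈ [0.0385, 0.5, 0.8] — exactly 0.5 at x = gamma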
I build a model as such:
import pymc as pm

response_mean = []
with pm.Model(rng_seeder=42) as model_2:
    # Media channels: adstock, then saturation, then a positive coefficient
    for channel_name in delay_channels:
        x = data_transformed[channel_name].values
        ads_alpha = pm.Beta(f"{channel_name}_ads_alpha", 3, 3)
        ads_theta = pm.Gamma(f"{channel_name}_ads_theta", 2, 1)
        sat_gamma = pm.Beta(f"{channel_name}_sat_gamma", 3, 3)
        sat_alpha = pm.Gamma(f"{channel_name}_sat_alpha", 2, 1)
        x_new = weibull_adstock_at(x, alpha=ads_alpha, theta=ads_theta)
        saturation_tensor = hill_saturation_build(x_new, alpha=sat_alpha, gamma=sat_gamma)
        channel_b = pm.HalfNormal(f"{channel_name}_media_coef", sigma=1)
        response_mean.append(saturation_tensor * channel_b)

    # Control variables enter linearly, with no transformations
    for control_var in control_variables:
        x = data_transformed[control_var].values
        control_beta = pm.Normal(f"{control_var}_control_coef")
        control_x = control_beta * x
        response_mean.append(control_x)

    intercept = pm.HalfNormal("intercept", sigma=0.05)
    sigma = pm.HalfNormal("sigma", sigma=0.05)
    likelihood = pm.Normal(
        "outcome",
        mu=intercept + sum(response_mean),
        sigma=sigma,
        observed=data_transformed[target].values,
    )
    idata = pm.sample(2000, tune=2000, return_inferencedata=True)
Then I sample the posterior predictive:
with model_2:
    idata.extend(pm.sample_posterior_predictive(idata))
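For the comparisons that follow, I reduce the posterior predictive to a point estimate per week; a minimal sketch of how I pull that out (assuming the default group and variable names):

ppc_mean = idata.posterior_predictive["outcome"].mean(dim=("chain", "draw")).values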
Things are going OK so far.
The next step of the project is to pull the parameters out of the model and recreate the basic equation.
Initially I started by creating a summary in ArviZ like so:
trace_summary = az.summary(idata)
This gave me a mean for each parameter that I could use to recreate the equations like so:
adstock_theta = trace_summary['mean'][f"{channel}_ads_theta"]
adstock_alpha = trace_summary['mean'][f"{channel}_ads_alpha"]
saturation_alpha = trace_summary['mean'][f"{channel}_sat_alpha"]
saturation_gamma = trace_summary['mean'][f"{channel}_sat_gamma"]
coef = trace_summary['mean'][f"{channel}_media_coef"]

data_transformed_decomposed[channel] = (hill_saturation_build(
    weibull_adstock_at(
        data_transformed[channel].values,
        alpha=adstock_alpha,
        theta=adstock_theta,
    ),
    alpha=saturation_alpha,
    gamma=saturation_gamma,
) * coef).eval()
Sum up the outcomes of all the channels, add the intercept and we’re having an old school statistics party.
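Concretely, that final assembly looks something like this (my sketch; the control-variable handling is assumed, since I only stored the media channels in data_transformed_decomposed):

yhat = (
    data_transformed_decomposed[delay_channels].sum(axis=1)
    + sum(trace_summary['mean'][f"{c}_control_coef"] * data_transformed[c].values
          for c in control_variables)
    + trace_summary['mean']['intercept']
)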
However…
When I feed all of the previous data into this equation and compare the results to the posterior predictive outcome, the numbers are consistently, suspiciously lower.
Debug mode
- At first I considered that I might have left a variable out of the follow-up calculation. After a triple check it seemed correct.
- Next, I thought the values in idata might differ ever so slightly from what I was initially using in the trace summary. So I went back and rechecked different methods of finding the mean of each distribution across chains and draws, and replaced the parameters with the non-rounded values. No change.
- Then I thought perhaps just one feature in the data was acting up in response to the functions applied. So I added pm.Deterministic() to the model to store the resulting contribution of each channel for each week, and compared these against the calculated values week by week before summing up to a final regression prediction. The pm.Deterministic() numbers were mostly higher each week (row) across most channels (columns). This wasn’t some outlier but every row and column being slightly off, though some channels were further off than others. HOWEVER: the control_channels, which did not have any of the functions applied, matched the pm.Deterministic numbers exactly:
channel_contrib = pm.Deterministic(
    f"contribution_{channel_name}",
    hill_saturation_build(
        weibull_adstock_at(x, alpha=ads_alpha, theta=ads_theta),
        alpha=sat_alpha,
        gamma=sat_gamma,
    ) * channel_b,
)
- So, I now know only the channels with functions applied are calculating differently from the posterior_predictive. The next step was to isolate each function to see how it performed. First I removed the hill_saturation_build function from both the model and the calculation, sampling with only the weibull_adstock_at function and the coefficients. Jackpot: the calculated numbers were nearly exactly the same as the posterior sample. Then I ran the model again using only the hill_saturation_build function, and the calculated numbers were once again consistently off. It’s the saturation function doing the damage.
- I tried feeding the hill_saturation_build function small samples of data, both as NumPy arrays and as Aesara tensors, to see if it produced different results. No news: same results with either.
- I built a visual to help see where the calculated numbers differed from the pm.Deterministic() numbers by channel, comparing each week’s predicted contribution against the contribution calculated with the same function, and added a line showing the difference (sketched below)…
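For reference, the per-channel comparison behind that visual looks roughly like this (my sketch; names assumed from the code above):

# Mean of the stored Deterministic contribution across chains and draws
det_contrib = (
    idata.posterior[f"contribution_{channel}"]
    .mean(dim=("chain", "draw"))
    .values
)
# Contribution recalculated from the posterior means, as earlier
calc_contrib = data_transformed_decomposed[channel].values
difference = det_contrib - calc_contrib  # consistently nonzero for media channels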
Finally getting somewhere and to the point of this post!
I can see from the chart that as the value of the individual feature increases, the difference between the calculated field and the posterior decreases. Looking at the visuals I can clearly see they are related.
So, my Hill function is working differently inside the sampling process than it works outside in a calculation?
- The Hill function used a half-life parameter that assumed all values were between 0 and 1 (I min-max scaled all features). But perhaps after an adstock decay some features were above 1? So, borrowing from another sample of code, I added a line to determine the range of the input values first, then adjusted my half-life parameter to sit at the same point inside that new scale:
def hill_saturation_build(x: np.array, alpha: float, gamma: float) -> np.array:
    """
    Saturation function. Does not need x to be scaled to 0-1.
    """
    # Rescale gamma to the range of the (possibly adstocked) input, so the
    # half-saturation point sits at the same relative position in x's scale
    gamma_transformed = (at.max(x) - at.min(x)) * gamma
    sat_curve = x**alpha / (x**alpha + gamma_transformed**alpha)
    return sat_curve
This helped a touch, but still suspiciously off.
I tested the Saturation Hill function again on a range of very small values and large ones, plotted the results, and found the curve to remain the same across all values (a minimal version of that check is below).
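Here is that check as a toy example (my own): with the range-scaled gamma, multiplying the inputs by a constant leaves the curve unchanged (when the minimum is 0), which matches what I saw:

import numpy as np

x_small = np.linspace(0, 1, 5)
x_large = x_small * 1000  # same shape, 1000x the scale
print(hill_saturation_build(x_small, alpha=2.0, gamma=0.5).eval())
print(hill_saturation_build(x_large, alpha=2.0, gamma=0.5).eval())  # identical values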
So, you made it this far… you’re a hero. Is there something in the sampling that would make this Hill function work differently inside the model? Am I completely missing the point of how sampling works? Is there some obviously easy solution I am missing?
Huge thanks in advance to everyone in the pymc community.