- Are you comparing similar periods?
- Are you checking the lag effect of each channel?
- What version are you using?
- Are you working with the mmm.multidimensional API or the regular MMM API?
Without that information, I can’t properly give you better input. But looking at the saturation functions alone is not enough to support the claim that the “optimizer is significantly underspending my most efficient channel”.
You can have a channel with a long adstock effect, meaning its compound effect over time can be better than that of a more “linear” direct channel (if that channel’s lag effect is low). On top of that, the optimizer distributes spend evenly across channels; if that assumption differs from your historical data (you never spend evenly every day), then results can change.
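To see why a long-adstock channel can beat a seemingly more efficient direct channel, here is a toy illustration (not the library's actual adstock implementation): with geometric adstock and decay rate `alpha`, one unit of spend keeps contributing `alpha**t` in later periods, so its total effect is the geometric sum `1 / (1 - alpha)`.

```python
# Toy sketch: total long-run effect of one unit of spend under geometric
# adstock with decay rate alpha, summed over an infinite horizon.
def total_adstock_effect(alpha: float) -> float:
    # sum over t >= 0 of alpha**t == 1 / (1 - alpha) for 0 <= alpha < 1
    return 1.0 / (1.0 - alpha)

# A channel with heavy carryover multiplies its immediate effect ~5x over time,
# while a near-"linear" channel with little carryover gains only ~11%.
long_carryover = total_adstock_effect(0.8)   # ~5.0x
short_carryover = total_adstock_effect(0.1)  # ~1.11x
print(long_carryover, short_carryover)
```

So even if the short-carryover channel looks better on its saturation curve alone, the compounded effect over many periods can reverse the ranking.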
Note: we have a parameter that allows you to use a spend distribution over time similar to your historical spend, which could be a use case here. Alternatively, you can simply sample response distributions at different spend levels on the allocation strategy to build the precise curve used by the optimizer given a spend of X over N periods of time. That response considers the full function, not only a saturation-parameter representation.
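A minimal sketch of what sampling at different spend levels could look like. The scaling part is plain numpy; the model call is left as a comment because the exact argument names of `sample_response_distribution` depend on your version (the names below just follow this discussion and are assumptions, not verified API):

```python
import numpy as np

# Hypothetical base allocation: average spend per channel (toy numbers).
base_allocation = np.array([100.0, 50.0])  # e.g. [tv, search]

# Evaluate the optimizer's own response curve at several total-spend levels.
spend_multipliers = [0.5, 1.0, 1.5, 2.0]
scaled_allocations = {m: base_allocation * m for m in spend_multipliers}

# For each scaled allocation, you would then sample the response, e.g.:
# for m, alloc in scaled_allocations.items():
#     resp = mmm.sample_response_distribution(
#         allocation_strategy=alloc,            # assumed argument name
#         num_optimization_periods=len(df),     # assumed argument name
#     )
print(scaled_allocations[2.0])
```

Plotting the sampled responses against total spend gives you the curve the optimizer actually sees, full adstock and saturation included.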
The best way to assess this correctly is to take your df, calculate the average spend per channel, build an allocation_strategy xarray with that information, and set num_optimization_periods to len(df). Then run sample_response_distribution and check whether the response is higher (if you are using the objective that maximizes the posterior mean, the response from this should be lower).
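The first half of that recipe (average spend per channel, then an xarray allocation) can be sketched like this; the channel names, toy data, and the "channel" coordinate name are assumptions, so adapt them to your own df and version:

```python
import pandas as pd
import xarray as xr

# Toy historical spend: one row per day, one column per channel.
# Replace with your own df.
df = pd.DataFrame({
    "tv": [100.0, 120.0, 80.0],
    "search": [50.0, 50.0, 50.0],
})

# Average spend per channel across the historical window.
avg_spend = df.mean()  # pandas Series indexed by channel

# Build the allocation as an xarray DataArray over a "channel" dimension
# (the coordinate name is an assumption; check your version's docs).
allocation_strategy = xr.DataArray(
    avg_spend.values,
    dims=["channel"],
    coords={"channel": avg_spend.index.to_list()},
)

# Evaluate over the full historical window, then (hypothetical call):
# response = mmm.sample_response_distribution(
#     allocation_strategy=allocation_strategy,
#     num_optimization_periods=len(df),
# )
num_optimization_periods = len(df)
print(allocation_strategy)
```

If the optimizer's allocation truly underspends your best channel, the response sampled from this historical-average allocation should come out higher than the optimizer's.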
If that happens, it would be amazing if you could share a reproducible example!
Hope this brings some light!