`TanhSaturationBaselined` gain and beta

Hi,

I have been using TanhSaturationBaselined recently. The ability to constrain x_0 to a value close to the bulk of the spend data has proven effective in decoupling channel saturation from channel effectiveness (beta).

However, it seems to me that its gain parameter is redundant with the beta parameter, since both scale the function linearly. Doesn't this create identifiability issues between those two parameters, and consequently sampling instability, given that they are simply multiplied together?

Concretely, I am talking about this:

def function(self, x, x0, gain, r, beta):
    """Tanh saturation function."""
    return beta * tanh_saturation_baselined(x, x0, gain, r)

and since

def tanh_saturation_baselined(x, x0, gain, r):
    return gain * x0 * pt.tanh(x * pt.arctanh(r) / x0) / r

we simply end up with beta * gain * (the rest), so only the product of the two parameters is identified.
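
To make this concrete, here is a quick numerical check (plain numpy, with arbitrary example values for x0, r, and the spend grid chosen just for illustration): any rescaling of gain can be absorbed into beta without changing the curve.

import numpy as np

def tanh_saturation_baselined(x, x0, gain, r):
    return gain * x0 * np.tanh(x * np.arctanh(r) / x0) / r

x = np.linspace(0.0, 10.0, 50)
x0, r = 5.0, 0.8

# beta = 2.0, gain = 0.5  vs  beta = 1.0, gain = 1.0 -- same product beta * gain
curve_a = 2.0 * tanh_saturation_baselined(x, x0, gain=0.5, r=r)
curve_b = 1.0 * tanh_saturation_baselined(x, x0, gain=1.0, r=r)

print(np.allclose(curve_a, curve_b))  # True: only the product beta * gain matters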

I have customized the function by removing the gain parameter, and my first few experiments suggest that this does make my models more stable.
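
Roughly, the customized version looks like this (a simplified sketch; the standalone function name is just for illustration):

import pytensor.tensor as pt

def tanh_saturation_baselined_no_gain(x, x0, r):
    # same shape as before, but the overall scale is carried entirely by beta
    return x0 * pt.tanh(x * pt.arctanh(r) / x0) / r

def function(self, x, x0, r, beta):
    """Tanh saturation function without the gain parameter."""
    return beta * tanh_saturation_baselined_no_gain(x, x0, r)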

So my question is: What's the purpose of the gain parameter here, apart from its handy interpretation as the expected return at x_0? And is my understanding correct that it is fully redundant with beta?

Regards
Jonas