How can I penalize the model if it samples negative values?

Dear all,
I am now trying to fit a regression model with my own function such as:

def fn(a, b, c, x1, x2):
    y = 1/a + a/b*x1 + c/a*x2
    return y

y should always be positive, and my predictors x1 and x2 are both positive. The parameters a, b, and c can be negative, and my observed values follow a gamma or Wald (inverse Gaussian) likelihood, but as you know, these distributions cannot take negative values.

To fit this model, I tried to do something like this:

with pm.Model() as model:
    a = pm.Normal('a', mu=0, sigma=100)
    b = pm.Normal('b', mu=0, sigma=100)
    c = pm.Normal('c', mu=0, sigma=100)
    mu = fn(a, b, c, x1, x2)
    ϵ = pm.HalfCauchy('ϵ', 1)
    y_pred = pm.Gamma('y_pred', mu=mu, sigma=ϵ, observed=y_obs)

    trace = pm.sample(100000, tune=50000, chains=2, target_accept=0.999)

With this setting, I got an error because my function fn can produce a negative value, but mu cannot be negative in the Gamma likelihood. In this case, how can I regularize or penalize the model? Or is there a better way to achieve this?

Thank you for your time and help.


The common way is to use a link function that constrains your expectation to be positive, such as exp or softplus.

mu = pm.math.exp(fn(a, b, c, x1, x2))
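For intuition, both exp and softplus map any real-valued predictor to a strictly positive mean, so the Gamma parameterization is always valid. A minimal NumPy sketch of the two link functions (independent of PyMC; names are illustrative):

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

# A linear predictor can be any real number, including negative ones
eta = np.array([-3.0, -0.5, 0.0, 2.0])

mu_exp = np.exp(eta)      # exp link: always positive, grows fast
mu_soft = softplus(eta)   # softplus link: always positive, ~linear for large eta

assert (mu_exp > 0).all() and (mu_soft > 0).all()
```

softplus is often preferred when exp would overflow or exaggerate large predictor values, since it behaves like the identity for large inputs.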

Alternatively, you can add an arbitrary penalty term with Potential, but NUTS will usually struggle with hard boundaries like that (so it's not recommended if the first approach works for you):

mu = fn(a, b, c, x1, x2)
pm.Potential("negative_penalty", pm.math.switch(mu < 0, -np.inf, 0))
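The switch adds -inf to the model's log-probability whenever mu goes negative, which is what makes the boundary hard. The same logic in plain NumPy (a sketch of the idea, not the PyMC API):

```python
import numpy as np

def negative_penalty(mu):
    # -inf log-probability contribution where mu < 0, 0 elsewhere;
    # this mirrors pm.math.switch(mu < 0, -np.inf, 0) elementwise
    return np.where(mu < 0, -np.inf, 0.0)

mu = np.array([0.5, -0.1, 2.0])
penalty = negative_penalty(mu)
# Adding this penalty to the log-probability gives any state with a
# negative mu probability zero, so the sampler must reject it outright.
```

Because the log-probability jumps discontinuously to -inf at the boundary, the gradient gives NUTS no information there, which is why the link-function approach is usually the better choice.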

Hi Ricardo,
This is very helpful. I will try and let you know how it goes.
Thanks!

I just tried the alternative method, but I believe there is a problem with it. mu can still come out negative, and a negative mu is not accepted as the alpha parameter of the Gamma distribution:

y_pred = pm.Gamma('dsm_dt_pred', alpha=mu, beta=ϵ, observed=y_observed)

I got the following error:

Name: Log-probability of test_point, dtype: float64

Does this mean we cannot even apply the negative penalty, because we first fail at the test point (with a negative alpha)?

Please advise.

Yes, you need an initial point whose log-probability is not -inf. You can set the testval of your variables to make sure the initial point has a positive mu.

Thank you! I will try passing the testval=1 argument.

mu = fn(a, b, c, x1, x2)
pm.Potential("negative_penalty", pm.math.switch(mu < 0, -np.inf, 0))

To make this work, I set an appropriate testval argument on the prior distributions, and it worked just fine.

Thanks!
