Bayesian Generalized Additive Models

Perhaps more of a stats question, basically coming down to “how do you implement penalties on function curvature from a Bayesian perspective?”

I was reading about Generalized Additive Models. They are basically defined as

y_i = \beta_0 + \sum_j s_j(x_{ji}) + \epsilon_i

where each s_j(x) is a smooth function, constructed as a weighted sum of basis functions b_k(x):

s(x) = \sum_{k=1}^K \beta_k b_k(x)

You could get pretty close to a Bayesian GAM simply by setting up some basis functions and putting priors over the \beta weightings.
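For example, something like this minimal PyMC sketch (all names are illustrative; `x` and `y` are assumed to be the observed data, and the B-spline basis via patsy is just one way to build the b_k(x)):

```python
import numpy as np
import pymc as pm
from patsy import dmatrix

# x, y: observed predictor and response (1-D numpy arrays), assumed given.
# Build a cubic B-spline basis for x; any basis-construction tool would do.
B = np.asarray(dmatrix("bs(x, df=10, degree=3, include_intercept=True) - 1", {"x": x}))

with pm.Model() as gam:
    beta0 = pm.Normal("beta0", 0.0, 10.0)                 # intercept
    beta = pm.Normal("beta", 0.0, 1.0, shape=B.shape[1])  # basis-function weights
    sigma = pm.HalfNormal("sigma", 1.0)

    mu = beta0 + pm.math.dot(B, beta)                     # s(x) = sum_k beta_k b_k(x)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)

    idata = pm.sample()
```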

But one of the interesting properties of GAMs is that a penalty is applied to the curvature of the function, typically based on the second derivative.
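Concretely (as far as I understand it), the usual penalty is

\lambda \int s''(x)^2 \, dx = \lambda\, \beta^\top S \beta, \qquad S_{kl} = \int b_k''(x)\, b_l''(x)\, dx

so once the basis is fixed, the wiggliness penalty is just a quadratic form in the \beta weights.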

So my question is, how would you think about this curvature penalty from a Bayesian perspective? It is easy to think about applying constraints via priors, but I don’t know if that would achieve the same effect of penalising the wiggliness?

My only idea would be to calculate the wiggliness in a PyMC model and use pm.Potential to manually add a term to the model logp based on the calculated wiggliness. Is that sane? Any other ideas?
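Something like this is what I had in mind (purely a sketch, continuing the model above; the fixed `lam` value and the second-difference approximation are just placeholders for a proper curvature penalty):

```python
with gam:  # continuing the model sketched above
    lam = 10.0  # smoothing strength; could also be given a prior

    # Second differences of the basis weights as a cheap stand-in for the
    # integrated squared second derivative of s(x)
    d2 = beta[2:] - 2.0 * beta[1:-1] + beta[:-2]

    # pm.Potential adds this term directly to the model logp
    pm.Potential("wiggliness_penalty", -lam * pm.math.sum(d2 ** 2))
```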

EDIT: After more research, it looks like you can achieve this via priors on the parameters (see Generalized additive model - Wikipedia). Would be interested to see if anyone’s implemented this.
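If anyone wants to try it, here’s roughly what I think the prior-based version looks like (again just a sketch: the second-difference penalty matrix stands in for the exact curvature matrix S above, and the small ridge term is only there to keep the precision proper):

```python
import numpy as np
import pymc as pm

K = 10  # number of basis functions, matching the earlier sketch

# Second-difference penalty matrix S = D'D. Penalising lam * beta' S beta is
# equivalent to a zero-mean Gaussian prior on beta with precision lam * S
# (improper because S is rank-deficient, hence the small ridge).
D = np.diff(np.eye(K), n=2, axis=0)
S = D.T @ D

with pm.Model() as gam_smooth_prior:
    lam = pm.Gamma("lam", alpha=2.0, beta=0.1)   # smoothing parameter, now learned
    tau = lam * S + 1e-6 * np.eye(K)             # precision of the smoothing prior
    beta = pm.MvNormal("beta", mu=np.zeros(K), tau=tau, shape=K)
    # ... intercept, sigma, and likelihood as in the earlier sketch:
    # mu = beta0 + pm.math.dot(B, beta)
```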


I think these two resources can help


I’m late to the party here, but I recommend Chapter 5 from Girosi & King’s Demographic Forecasting, particularly section 5.3 where they talk about how they start with priors on \mu to derive corresponding priors on \beta. I’ve used it previously for forecasting and it works well to penalize models that produce unsmooth outputs (rather than just unsmooth parameters).
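The general trick, as I remember it (notation mine, not theirs exactly, with B the basis matrix and K a roughness matrix), is that a smoothness prior on the fitted values,

p(\mu) \propto \exp\left(-\tfrac{\theta}{2}\, \mu^\top K \mu\right), \qquad \mu = B\beta,

induces, after substituting \mu = B\beta, a Gaussian prior on the weights,

p(\beta) \propto \exp\left(-\tfrac{\theta}{2}\, \beta^\top B^\top K B\, \beta\right),

i.e. a zero-mean (possibly improper) Gaussian with precision \theta\, B^\top K B, which is why it penalises unsmooth outputs rather than unsmooth parameters.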

[I actually even recreated their YourCast model in PyMC2 back in the day, but it is unfortunately only available on some old BitBucket server locked away in the bowels of my former institution…]
