Model segmental linear regression

My bad; that was really vague. By “tightly”, I meant in computation. If you have a joint density p(\theta, \alpha) where \alpha is a discrete parameter, then you can always sample \alpha using the full conditional p(\alpha \mid \theta) \propto p(\theta, \alpha). This would be easy for us to add to Stan and would give us full generalized Gibbs. But there’s almost always a much more computationally efficient form of the conditional p(\alpha \mid \theta). For example, in a graphical model, you can compute the (minimal) Markov blanket; that’s what BUGS/JAGS do for generalized Gibbs.
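
To make the brute-force version concrete, here’s a minimal sketch in Python (not Stan, since Stan has no discrete parameters) of drawing a discrete \alpha from its full conditional by enumerating a finite support and normalizing the joint. The names `joint_lpdf`, `alpha_support`, and `sample_alpha_given_theta` are just placeholders for whatever your model provides, not anything in Stan.

```python
import numpy as np

def sample_alpha_given_theta(theta, alpha_support, joint_lpdf, rng):
    """Draw alpha from p(alpha | theta) by enumerating its finite support.

    Because p(alpha | theta) is proportional to p(theta, alpha), evaluating
    the joint log density at every admissible alpha and normalizing gives
    the exact full conditional.
    """
    # unnormalized log conditional: log p(theta, alpha) for each candidate alpha
    log_p = np.array([joint_lpdf(theta, a) for a in alpha_support])
    # normalize on the log scale first to avoid overflow/underflow
    log_p -= log_p.max()
    p = np.exp(log_p)
    p /= p.sum()
    # draw one index from the normalized conditional and return that alpha
    idx = rng.choice(len(alpha_support), p=p)
    return alpha_support[idx]
```

A generalized Gibbs step would alternate this draw of \alpha with an HMC/NUTS update of \theta given \alpha. The Markov-blanket trick mentioned above amounts to replacing `joint_lpdf` with only the factors that actually involve \alpha, which is where the big computational win comes from.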