Add linear constraint with Lagrange multiplier method

#1

My log-likelihood is a function of a matrix A.
I want to use NUTS to sample A and constrain A such that each row of A sums to 1.
I am using a custom Theano operation to define the log-likelihood.

Can I just replace the log-likelihood by the Lagrangian, using the Lagrange multiplier method?

Do I have to somehow reformulate the Lagrangian into a Hamiltonian?

How does pymc’s Dirichlet prior implement this kind of constraint? Does it add a term to the log-likelihood to replace the constrained optimization by an unconstrained optimization?

Thanks again.

#2

Probably I can add the \lambda^{T}c(q) term to the Hamiltonian \mathcal{H}(p, q) with the pm.Potential class, if I solve for the Lagrange multipliers \lambda by hand for the constraint c(q) = 0.
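To make the idea concrete, here is a minimal NumPy sketch of what that extra term would compute, outside of PyMC. The function name `constrained_logp` and the fixed multiplier vector `lam` are illustrative assumptions (standing in for the hand-solved \lambda); in a real model the term would be attached via pm.Potential.

```python
import numpy as np

def constrained_logp(A, base_logp, lam):
    """Add the Lagrange term lam^T c(q) to a base log-probability.

    c(q) is the row-sum constraint: c_i = sum_j A[i, j] - 1.
    `lam` is a fixed vector of multipliers, assumed solved by hand.
    (Illustrative sketch only, not PyMC API.)
    """
    c = A.sum(axis=1) - 1.0          # constraint residuals, one per row
    return base_logp + lam @ c       # Lagrangian = base logp + lam^T c

# A matrix whose rows already sum to 1 has c = 0,
# so the log-probability is unchanged.
A = np.array([[0.2, 0.8],
              [0.5, 0.5]])
lam = np.array([3.0, -1.0])
print(constrained_logp(A, base_logp=-4.2, lam=lam))  # -4.2
```

Note that with a fixed \lambda this is only a soft penalty on the sampler's log-density; it does not guarantee c(q) = 0 exactly at every draw.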

#3

Having done this before, I can attest that Lagrange multipliers are much more effective for closed-form derivations than for numerical solutions. It is almost always better to use a transformation of some sort.

This is especially true when the constraint is binding, for instance in a linear model where you constrain a relationship between terms that would otherwise be independent. In these cases even the pm.Potential approach can fail.

In your case, I would write your matrix A as [X \mid b], where X is the random part and b_i = 1 - \sum_j X_{ij}. Only X is sampled; the last column is deterministic, so the constraint holds by construction, and automatic differentiation will take care of the rest for you.
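As a plain-NumPy sketch of this reparameterization (the helper name `assemble_A` is made up; in a PyMC model the same arithmetic would live in a deterministic node built from the sampled X):

```python
import numpy as np

def assemble_A(X):
    """Build A = [X | b] from the free block X, with b_i = 1 - sum_j X[i, j].

    Every row of A sums to 1 by construction, so no explicit
    constraint term is needed in the log-likelihood.
    """
    b = 1.0 - X.sum(axis=1, keepdims=True)   # deterministic last column
    return np.concatenate([X, b], axis=1)

X = np.array([[0.1, 0.3],
              [0.6, 0.2]])
A = assemble_A(X)
print(A.sum(axis=1))  # each row sums to 1
```

This is essentially what PyMC's simplex transform does for the Dirichlet: sample an unconstrained block and map it onto the constrained set, rather than penalizing violations.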