Log-likelihood is a function of a matrix A.

I want to use NUTS to sample A, and to constrain A so that each row sums to 1.

I am using a custom Theano op to define the log-likelihood.

Can I just replace the log-likelihood with the Lagrangian, i.e. add a Lagrange-multiplier term for the row-sum constraint?

Do I have to somehow reformulate the Lagrangian into a Hamiltonian?

How does PyMC's Dirichlet prior enforce this kind of constraint? Does it add a term to the log-likelihood, replacing the constrained problem with an unconstrained one?
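For context on what I mean by the last question: my understanding is that PyMC handles constrained variables by transforming them to an unconstrained space (a stick-breaking map in the Dirichlet case) and adding the log-determinant of the transform's Jacobian to the model log-probability, rather than by a Lagrange-multiplier term. A minimal NumPy sketch of one such stick-breaking map (illustrative only, not PyMC's exact formula):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stick_breaking(z):
    """Map an unconstrained vector z in R^(k-1) to a point on the
    k-simplex (entries positive, summing to 1)."""
    v = sigmoid(z)                    # "break fractions" in (0, 1)
    w = np.empty(len(z) + 1)
    remaining = 1.0
    for i, vi in enumerate(v):
        w[i] = vi * remaining         # take a fraction of what's left
        remaining *= 1.0 - vi
    w[-1] = remaining                 # last entry gets the remainder
    return w

# Applying the map row-wise, an unconstrained (n, k-1) matrix becomes an
# (n, k) matrix whose rows each sum to 1, so the sampler can run in the
# unconstrained space with no explicit constraint.
Z = np.random.default_rng(0).normal(size=(3, 4))
A = np.apply_along_axis(stick_breaking, 1, Z)   # shape (3, 5)
print(A.sum(axis=1))                            # each row sums to 1
```

Is the Jacobian correction of such a transform the "added term" I should be thinking of, rather than a Lagrangian?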

Thanks again.