I am new to PyMC, but I have a lot of experience with Bayesian methods and sampling in cosmology.
In one project, we want to apply a very specific prior to one set of parameters, but actually sample a different, transformed set. This is essentially the same as PyMC's built-in transformations, except that it is one we would like to apply by hand. In this case we know the Jacobian, so all we need to do is add its log-determinant to the log-likelihood. The docs are slightly unclear about this case: should we do this with pymc.model.core.Potential?
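Concretely, what I have in mind is something like this toy one-dimensional sketch (an improper uniform prior on y > 0 while sampling x = log y; the variable names are just for illustration):

```python
import pymc as pm

with pm.Model() as model:
    # Sample the unconstrained parameter x = log(y)
    x = pm.Flat("x")
    y = pm.Deterministic("y", pm.math.exp(x))

    # Improper uniform prior on y > 0: the change of variables adds
    # log|dy/dx| = x to the log-density
    pm.Potential("log_jacobian", x)

    # ... likelihood terms involving y would go here ...
```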
For the aficionados, the specific problem that we want to solve is the following:
- Parameters: the entries of the covariance matrix of a set of multivariate Gaussian random variables.
- Prior: uniform on the entries Σᵢⱼ, constrained so that the matrix remains positive definite (yes, this really is what we want – not negotiable).
- Transformation: We take the Cholesky decomposition L of Σ and sample (a) the off-diagonal entries of L and (b) the log of the diagonal entries of L (since Lᵢᵢ > 0). This transformation makes it easy to implement the positive-definiteness constraint, since it corresponds to an unconstrained prior on (a) and (b).
- We know the Jacobian of this transformation and can apply it directly. This is the part that needs the extra term (see the sketch after this list).
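In PyMC terms, I imagine this looking roughly like the following (n and the flat priors are illustrative; this assumes I have the algebra right that the map L → LLᵀ contributes a Jacobian factor 2ⁿ ∏ᵢ Lᵢᵢⁿ⁻ⁱ⁺¹, and the log transform of the diagonal an extra ∏ᵢ Lᵢᵢ):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

n = 3  # dimension of Sigma (illustrative)

with pm.Model() as model:
    # Unconstrained parameters: off-diagonal entries of L and the log
    # of its (positive) diagonal entries
    offdiag = pm.Flat("offdiag", shape=n * (n - 1) // 2)
    logdiag = pm.Flat("logdiag", shape=n)

    # Assemble the lower-triangular Cholesky factor L
    L = pt.zeros((n, n))
    L = pt.set_subtensor(L[np.tril_indices(n, k=-1)], offdiag)
    L = pt.set_subtensor(L[np.arange(n), np.arange(n)], pt.exp(logdiag))
    Sigma = pm.Deterministic("Sigma", L @ L.T)

    # log|det J|: Sigma = L L^T contributes 2^n * prod_i L_ii^(n-i+1),
    # and the log transform of the diagonal contributes prod_i L_ii,
    # so L_ii appears with exponent (n - i + 2) (1-indexed)
    exponents = np.arange(n + 1, 1, -1)  # n+1, n, ..., 2
    pm.Potential("jacobian", n * np.log(2.0) + pt.sum(exponents * logdiag))

    # ... likelihood for the data given Sigma goes here ...
```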
Alternatively, we could sample different parameters, e.g., using the LKJ prior on the correlation (not covariance) matrix together with a simple half-uniform prior on the individual variances, but we would still need to add a Jacobian determinant (which would be straightforward once I understand the machinery); a sketch of that variant is below.
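My (possibly wrong) understanding of how that variant would look, with d, U, and eta = 1 purely illustrative; here the Potential turns the Uniform prior on the standard deviations into a uniform prior on the variances via |d(σ²)/dσ| = 2σ:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

d = 3     # dimension (illustrative)
U = 10.0  # hypothetical upper bound of the uniform prior on the variances

with pm.Model() as model:
    # LKJ prior on the Cholesky factor of the correlation matrix;
    # eta = 1 is uniform over correlation matrices
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol", n=d, eta=1.0, sd_dist=pm.Uniform.dist(0.0, np.sqrt(U))
    )

    # We want the prior uniform on the variances sigma^2, not on the
    # standard deviations, so add log|d(sigma^2)/d(sigma)| = log(2*sigma)
    pm.Potential("variance_jacobian", pt.sum(pt.log(2.0 * stds)))

    Sigma = pm.Deterministic("Sigma", corr * pt.outer(stds, stds))
```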