Multivariate Laplace Prior

The bottom of the Wikipedia article gives you everything you need to write down a generative graph. Once you have a generative graph, you can use a pm.CustomDist to implement the distribution easily; in the background, PyMC will automatically derive a logp for it:

import pymc as pm
import pytensor.tensor as pt

def multivariate_laplace(mu, chol, size=None):
    # Gaussian part: Y ~ MvNormal(0, Sigma), where Sigma = chol @ chol.T
    Y = pm.MvNormal.dist(mu=0, chol=chol, size=size)

    # Exponential mixing variable W ~ Exponential(1), broadcast against mu and Y
    broadcast_size = pt.broadcast_shape(mu, Y)
    W = pm.Exponential.dist(lam=1, size=broadcast_size)

    # Scale the Gaussian by sqrt(W), then shift by mu (see the note below on the shift)
    return pt.sqrt(W) * Y + mu
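
Just to spell out why the shifted construction should have the right moments (my own reasoning, assuming a single exponential mixing variable W per draw, independent of Y):

$$
X = \mu + \sqrt{W}\,Y, \qquad W \sim \text{Exponential}(1), \qquad Y \sim \mathcal{N}(0, \Sigma)
$$

$$
\mathbb{E}[X] = \mu + \mathbb{E}\!\left[\sqrt{W}\right]\mathbb{E}[Y] = \mu, \qquad \operatorname{Cov}[X] = \mathbb{E}[W]\,\Sigma = \Sigma
$$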

You can use the CustomDist as follows:

# I am going to use a single known covariance to check we can recover it;
# in a real model you would keep this as a random variable
chol_rv, *_ = pm.LKJCholeskyCov.dist(n=3, eta=1, sd_dist=pm.Gamma.dist(2, 1))
chol_draw = pt.as_tensor_variable(pm.draw(chol_rv))

# Also fix a mu for testing, so variance from draws of mu doesn't get into beta
mu_rv = pm.Normal.dist(mu=[1, 10, 100], sigma=10)
mu_draw = pt.as_tensor_variable(pm.draw(mu_rv))

with pm.Model() as m:
    # Sample from the data generating process
    beta = pm.CustomDist('beta', mu_draw, chol_draw, dist=multivariate_laplace)
    idata = pm.sample_prior_predictive(samples=10_000)
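
To verify that the draws actually recover mu and the covariance, here's a minimal check (just a sketch using the variables defined above; since E[W] = 1 for Exponential(lam=1), the sample covariance should match chol_draw @ chol_draw.T):

import numpy as np

# Flatten the prior draws of beta into a (n_samples, 3) array
beta_draws = idata.prior["beta"].values.reshape(-1, 3)

# Sample mean should be close to mu_draw
print(beta_draws.mean(axis=0))
print(mu_draw.eval())

# Sample covariance should be close to chol_draw @ chol_draw.T
print(np.cov(beta_draws, rowvar=False))
print((chol_draw @ chol_draw.T).eval())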

Note that I made a change from the Wikipedia article: I don't multiply the mean by W. I found that the Wikipedia generation process worked for mu = 0 (I was able to recover the desired covariance matrix from samples), but not for mu ≠ 0. I checked the source of Wikipedia's algorithm (Kotz, Kozubowski, and Podgórski, 2001), and the wiki faithfully reproduces it. Simply shifting the samples by mu, though, allowed me to recover both the mean and the covariance. Here are the samples I get:

I will note that the algorithm proposed on the wiki comes from the chapter on Asymmetric Laplace Distributions, so that might be why there's a difference?
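
If I'm reading Kotz et al. correctly, that's probably exactly it: in the asymmetric parameterization, mu acts as a skewness/drift parameter rather than a pure location, which is why it gets multiplied by W in their algorithm. Sketching the moments under that parameterization (again assuming a single W per draw):

$$
X = \mu W + \sqrt{W}\,Y \;\implies\; \mathbb{E}[X] = \mu, \qquad \operatorname{Cov}[X] = \Sigma + \mu\mu^\top
$$

so for mu ≠ 0 the covariance picks up an extra mu mu^T term, which would explain why I couldn't recover Sigma without the modification.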