There are a lot of ways to do this.
Unfortunately, this isn’t one of them. If you use these penalties as priors and look at Bayesian posterior means, there is zero probability (it’s a measure-zero event) that you will get a posterior mean of exactly zero, because the posterior is a continuous distribution.
L1 (lasso) regularization can force actual zeros if you use penalized maximum likelihood; L2 (ridge) won’t.
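To see the difference concretely, here’s a quick sketch using scikit-learn’s penalized least-squares fits (the alpha values and the synthetic data are just illustrative, not tuned for anything):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 50 features, only 10 of which actually matter.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=1.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1-penalized least squares
ridge = Ridge(alpha=1.0).fit(X, y)   # L2-penalized least squares

print("exact zeros (lasso):", np.sum(lasso.coef_ == 0.0))  # typically many
print("exact zeros (ridge):", np.sum(ridge.coef_ == 0.0))  # typically none
```

The lasso fit lands a bunch of coefficients exactly on 0.0; the ridge fit shrinks everything toward zero but never all the way.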
The only way to assign non-zero posterior probability to zero is to have non-zero prior probability at zero, which means a spike-and-slab prior. That is, you make the prior a mixture of a point mass at zero (the spike) and a continuous density elsewhere (the slab). This lets the prior, and hence the posterior, put probability mass exactly at zero. Otherwise you never get posterior probability mass at zero in a Bayesian approach, because a single point is a measure-zero set under a continuous distribution.
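Concretely, a common way to write that prior for a coefficient (notation mine: pi is the prior exclusion probability, tau the slab scale, delta_0 a point mass at zero) is

$$
p(\beta_j) = \pi \, \delta_0(\beta_j) + (1 - \pi) \, \mathrm{Normal}(\beta_j \mid 0, \tau^2),
$$

so the posterior probability that beta_j is exactly zero can be strictly positive, because the prior already puts mass pi on the single point zero.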
[EDIT: But you’re not going to be able to fit spike-and-slab with HMC/NUTS, because the marginalization over inclusion patterns is combinatorial (it will work with a handful of coefficients, but not more). You might be able to get PyMC to fit it by sampling the slab/no-slab indicators with a discrete sampler, but the problem there is that this is an NP-hard problem in general, so there’s no way to guarantee you get reasonable answers everywhere in reasonable time.]
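For what it’s worth, here’s roughly what that would look like in PyMC. This is a minimal sketch under assumed toy data (a small linear regression I made up for illustration); PyMC will automatically assign a binary Metropolis-type step to the discrete indicators and NUTS to the continuous parameters. Don’t expect it to mix well beyond a handful of coefficients.

```python
import numpy as np
import pymc as pm

# Toy data (assumed, not from the question): 5 predictors, 2 with nonzero effects.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ true_beta + rng.normal(scale=1.0, size=n)

with pm.Model() as spike_slab:
    # Discrete inclusion indicators: the "spike" at zero.
    include = pm.Bernoulli("include", p=0.5, shape=p)
    # Continuous "slab" for the included coefficients.
    slab = pm.Normal("slab", mu=0.0, sigma=2.0, shape=p)
    beta = pm.Deterministic("beta", include * slab)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    # PyMC assigns a binary Gibbs/Metropolis step to `include` and NUTS to the rest.
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Posterior probability that each coefficient is exactly zero.
print(1.0 - idata.posterior["include"].mean(dim=("chain", "draw")).values)
```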
P.S. There’s an elastic net penalty (which also has a prior interpretation) that combines the strengths of L1 (it can produce actual zeros) and L2 (it keeps coefficients identified under collinearity). But again, the zeros only show up with penalized maximum likelihood, not with Bayesian posterior inference. And even then, you have to use a special optimizer to get an actual value of zero after finitely many iterations (it has to truncate at zero, otherwise the coefficient will see-saw back and forth across zero without ever landing exactly on it).
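To illustrate that last point, here’s a bare-bones coordinate descent sketch (my own toy implementation, not any particular library’s). The soft-thresholding step is what returns an exact 0.0 after finitely many iterations; a plain gradient step would keep overshooting back and forth across zero.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Proximal operator of the L1 penalty: returns exactly 0.0 whenever |z| <= gamma.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def elastic_net_cd(X, y, lam=1.0, alpha=0.5, n_iter=200):
    # Minimizes (1/2n)||y - X b||^2 + lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2)
    # by cyclic coordinate descent with soft-thresholding.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coefficient j's contribution removed.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1.0 - alpha))
    return beta
```

Coefficients whose partial correlation with the residual stays below the L1 threshold land exactly on 0.0 instead of oscillating around it.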