What I meant was that the custom distribution is general: it is just a density function. There is nothing in PyMC that 'converts' it to a likelihood function, as such.
If you had fixed values of the model parameters \theta, in principle you could use your custom distribution to give the probability of seeing a random variable x (or potential observations), i.e. P(x|\theta).
It only becomes a likelihood function because you are providing observed data D and estimating the likelihood of seeing that data given some model parameters, P(D|\theta) (D is no longer a random variable; it is fixed).
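Here is a minimal sketch of that distinction in plain NumPy/SciPy. The density function (a normal log-pdf here, standing in for any custom density) is the same object in both cases; what changes is which argument is held fixed:

```python
import numpy as np
from scipy import stats

# A "custom density": just a normal log-density, standing in for any
# user-defined log p(x | theta) with theta = (mu, sigma).
def logp(x, mu, sigma):
    return stats.norm.logpdf(x, loc=mu, scale=sigma)

# View 1: theta fixed, x varies -> a probability density over possible data.
mu, sigma = 0.0, 1.0
xs = np.linspace(-3, 3, 7)
print(np.exp(logp(xs, mu, sigma)))  # P(x | theta) for each candidate x

# View 2: data fixed, theta varies -> a likelihood function over parameters.
data = np.array([0.5, -1.2, 0.3])   # D is now fixed
mus = np.linspace(-2, 2, 5)
log_lik = [logp(data, m, sigma).sum() for m in mus]  # log L(theta; D)
print(log_lik)
```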
To calculate the log posterior probability, the log likelihood is added to the log prior(s). The sampler picks values of the parameters, passes them to the log likelihood (which also has the observed data passed to it) and to the log prior(s), and from these calculates the log posterior. I would recommend reading something like Statistical Rethinking (McElreath) or Doing Bayesian Data Analysis (Kruschke), which provide good insight into how posteriors can be estimated. @RavinKumar also has a book coming out soon about Probabilistic Programming Languages.
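To make that concrete, here is a hedged sketch assuming PyMC v5, where `pm.CustomDist` accepts a `logp` function (in PyMC3 the rough analogue was `pm.DensityDist`). The custom normal log-pdf below is purely illustrative. Passing `observed=` is what makes the term contribute as the log likelihood; the priors contribute their own log terms, and the sampler evaluates their sum:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Hypothetical custom log-density: a hand-written normal log-pdf,
# standing in for any user-defined density.
def custom_logp(value, mu, sigma):
    return -0.5 * ((value - mu) / sigma) ** 2 - pt.log(sigma) - 0.5 * np.log(2 * np.pi)

data = np.random.default_rng(42).normal(1.0, 2.0, size=100)  # fixed observed data D

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 10.0)      # contributes a log prior term
    sigma = pm.HalfNormal("sigma", 5.0)  # contributes a log prior term
    # observed= fixes D, so this term acts as the log likelihood P(D | theta):
    pm.CustomDist("obs", mu, sigma, logp=custom_logp, observed=data)
    idata = pm.sample()  # sampler proposes (mu, sigma) and evaluates the log posterior
```

If you want to see the combined quantity the sampler works with, something like `model.compile_logp()(model.initial_point())` should return the log posterior (up to a constant) at the initial point, i.e. the sum of those log prior and log likelihood terms.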