It’s kind of a strange question. Are you sure that you want to plot the likelihood instead of the model’s logp?
The model defines a probabilistic graphical model. The first question you should answer is: what is the likelihood in that graph? It should be proportional to the marginal logp of the observed RVs. What is the likelihood if you have more than one observed RV? And if one observed depends on the other? And if there are potentials?
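To see why multiple observed RVs complicate the question: if the observed RVs are conditionally independent, the joint log likelihood is just the sum of the individual terms. A minimal pure-numpy sketch (no pymc3; the two normal datasets and their parameters are made up for illustration):

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    # Elementwise log density of a Normal(mu, sigma)
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

rng = np.random.default_rng(0)
y1 = rng.normal(0.0, 1.0, size=50)   # first observed dataset
y2 = rng.normal(2.0, 0.5, size=30)   # second observed dataset

# Joint log likelihood at a parameter point: sum over both observed RVs
log_lik = normal_logpdf(y1, 0.0, 1.0).sum() + normal_logpdf(y2, 2.0, 0.5).sum()
print(log_lik)
```

If one observed RV depends on the other, you can no longer split the sum this way, which is why the answer below sticks to the single-observed case.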
Let’s just assume that your model has a single observed RV and no potentials. The trace holds parameter values distributed according to the posterior. Sampling already relies on the model’s logp, so it’s easy to evaluate it at every point in the trace like this:
import numpy as np
import pymc3 as pm

# Define a toy test model and get a trace
with pm.Model() as model:
    mu = pm.Normal('mu', 0, 1)
    sigma = pm.HalfNormal('sigma', 1)
    obs = pm.Normal('obs', mu=mu, sigma=sigma,
                    observed=np.random.normal(loc=5, scale=3, size=100))
    trace = pm.sample()

# model.logp is a compiled function that maps a point dict to the joint logp
logp = model.logp
model_logps = np.array([logp(point) for point in trace.points()])
You can then plot the “distribution” of the model logp values across all points in the trace like this:
import seaborn as sns
sns.kdeplot(model_logps)
which gives you a KDE plot of the model logp values across the trace.
You could also compute the observed RV’s logp, which should correspond to the log likelihood of the observations:
obs_logp = obs.logp  # compiled logp function of the observed RV alone
log_likelihood = np.array([obs_logp(point) for point in trace.points()])
sns.kdeplot(log_likelihood)
which gives you the analogous KDE plot for the log likelihood.
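The two curves differ by the prior terms: the model’s logp is the sum of the prior logps and the observed logp. Here is a pure-numpy sketch of that decomposition for the toy model above, with hand-coded densities and a hypothetical trace point (the values 4.8 and 3.1 are made up):

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    # Elementwise log density of a Normal(mu, sigma)
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def halfnormal_logpdf(x, sigma):
    # Log density of a HalfNormal(sigma) for x > 0
    return 0.5 * np.log(2 / (np.pi * sigma**2)) - x**2 / (2 * sigma**2)

rng = np.random.default_rng(42)
data = rng.normal(loc=5, scale=3, size=100)

# A hypothetical point, like one entry of trace.points()
point = {'mu': 4.8, 'sigma': 3.1}

prior_logp = normal_logpdf(point['mu'], 0, 1) + halfnormal_logpdf(point['sigma'], 1)
obs_logp = normal_logpdf(data, point['mu'], point['sigma']).sum()

# This sum is what the model's joint logp computes at that point
model_logp = prior_logp + obs_logp
```

So if you plot both KDEs, the gap between them at any point is just the prior logp evaluated there.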