Looking at that document you have the likelihood of the raw signal d_{i} as
\mathcal{L} = \prod\limits_{i}\mathrm{N}\left[d_{i}\mid\mu=F_{i}(h, t_{c}, L_{c}), \sigma=\sigma_{i}\right]
So the raw signal is supposed to be normally distributed with mean F_{i}(h, t_{c}, L_{c}) and standard deviation \sigma_{i}. The function F depends on the parameters h, t_{c} and L_{c}, on which you should place some form of priors (these are sketched in slide 6). Ideally, you should only need the raw signal d_{i} and the function F_{i} to do the Bayesian inference and obtain the posterior distributions of the three parameters.
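To make the likelihood concrete, here is a minimal numpy sketch of that Gaussian log-likelihood. The functional form of `F` below is entirely made up (your real `F` comes from your physics model), and so are the parameter values; the point is only that the likelihood rewards parameters whose `F` tracks the data:

```python
import numpy as np

# Hypothetical stand-in for F_i(h, t_c, L_c); replace with your real model.
def F(t, h, t_c, L_c):
    return h * np.exp(-t / t_c) * np.cos(t / L_c)

def log_likelihood(d, t, h, t_c, L_c, sigma):
    """Gaussian log-likelihood: d_i ~ N(F_i(h, t_c, L_c), sigma)."""
    mu = F(t, h, t_c, L_c)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (d - mu) ** 2 / (2 * sigma**2))

# Simulate raw data from "true" parameters h=1.0, t_c=0.5, L_c=0.1
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
d = F(t, 1.0, 0.5, 0.1) + rng.normal(0.0, 0.2, t.size)

ll_true = log_likelihood(d, t, 1.0, 0.5, 0.1, 0.2)  # at the true parameters
ll_off = log_likelihood(d, t, 2.0, 0.5, 0.1, 0.2)   # with h badly wrong
```

The likelihood at the true parameters should beat the mis-specified one, which is exactly the signal the sampler exploits.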
The presentation says that they use the histogram and spectrum of the d_{i} measurements to get some statistics for the parameters. They don’t seem to use them to do Bayesian inference. If you wanted to do that, then you would have to compute the likelihood of getting a certain value of the power spectrum, given that the raw signal d_{i} is distributed as a Gaussian with the above parameters. You might be able to do this with the formulae for the power spectral density, but this involves some complicated math (Fourier transform of your raw stochastic signal, auto-correlation times, finite-size effects), and we cannot help you with that.
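For intuition about the quantity involved (not the spectrum likelihood itself): under the Gaussian model above with a constant mean, the raw signal is white noise, and a simple periodogram of simulated white noise averages to the noise variance \sigma^{2}. The signal length and \sigma = 1 below are arbitrary choices for illustration:

```python
import numpy as np

# Simulate a white Gaussian "raw signal" d_i with sigma = 1 (arbitrary choice)
rng = np.random.default_rng(1)
n = 4096
d = rng.normal(0.0, 1.0, n)

# Simple periodogram estimate of the power spectral density
psd = np.abs(np.fft.rfft(d)) ** 2 / n

# Interior bins of the periodogram average to sigma^2 = 1 for white noise;
# any structure beyond that is what a spectrum-based likelihood must model.
mean_power = psd[1:-1].mean()
```

A proper spectrum likelihood would then describe the distribution of each periodogram bin around the model PSD, which is where the complications mentioned above come in.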
Again, for your problem, you should only need the function F_{i} and some priors over the three parameters h, t_{c} and L_{c}; you would then be able to infer the posterior distributions with:
```python
import pymc as pm

# F, omega_ci, tau_e, rho_i, a and observed_raw_data are defined elsewhere
with pm.Model():
    h = pm.Normal('h', 0, 1)                    # some more appropriate prior here
    t_c = pm.Uniform('t_c', 1/omega_ci, tau_e)  # prior taken from slide 6
    L_c = pm.Uniform('L_c', rho_i, a)           # prior taken from slide 6
    sigma = pm.Gamma('sigma', 1, 1)             # some more appropriate prior here
    d = pm.Normal('d', mu=F(h, t_c, L_c), sigma=sigma,
                  observed=observed_raw_data)
```
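If you want to see the posterior update without running a sampler, here is a hypothetical numpy sketch that grid-approximates the posterior over h alone (holding t_c and L_c fixed, with a made-up stand-in for F). It is the same Bayes computation the PyMC model performs, just in one dimension:

```python
import numpy as np

# Hypothetical stand-in for F; replace with your real model
def F(t, h, t_c, L_c):
    return h * np.exp(-t / t_c) * np.cos(t / L_c)

# Simulate data with true h = 1.0, known noise sigma = 0.3
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(2)
sigma = 0.3
d = F(t, 1.0, 0.5, 0.1) + rng.normal(0.0, sigma, t.size)

# Grid posterior over h: log prior (h ~ Normal(0, 1)) + log likelihood
h_grid = np.linspace(-3, 3, 601)
log_prior = -0.5 * h_grid**2
resid = d[None, :] - F(t[None, :], h_grid[:, None], 0.5, 0.1)
log_lik = -0.5 * np.sum(resid**2, axis=1) / sigma**2
log_post = log_prior + log_lik

# Normalize on the grid and take the posterior mean
post = np.exp(log_post - log_post.max())
post /= post.sum()
h_mean = (h_grid * post).sum()
```

The posterior mean lands close to the true h = 1.0; PyMC does the same thing jointly over h, t_{c}, L_{c} and \sigma by MCMC instead of a grid.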