Using Bayesian inference for parameter estimation

Hi all,
I am a new user of the pymc3 package, and I have a problem that I need help with.
My problem:
I have a coefficient S with a uniform prior distribution on [0, 1]. Through an experiment I obtained values of K that follow a normal distribution (1.3, 0.05), i.e. the likelihood P(k|s). How do I apply Bayesian inference to find the posterior P(s|k)?
I hope that everybody can help me.
Thank you very much

I’ll have to be very honest, I didn’t understand your specific example and likelihoods. My suggestion is that you read and imitate the getting started guide. That way you should be able to phrase your specific question in more detail so we can help you.

My problem is finding the posterior distribution.
By Bayes' theorem, the posterior is proportional to the likelihood function times the prior distribution.
Here, the likelihood function P(k|s) is a normal distribution with mean = 1.3 and standard deviation = 0.05, and the prior distribution P(s) is a uniform distribution on [0, 1].
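In symbols, what I want is:
P(s \mid k) \propto P(k \mid s)\, P(s)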

How do I find the posterior distribution P(s|k) using Bayesian inference and Markov chain Monte Carlo in pymc3?

Thank you.

I think there are a lot of subtle naming issues in your example. I'll try to explain the basic process of writing down a model to infer some variables' posterior distribution given some observed values. I still strongly recommend that you read the entire getting started guide, because everything I'll go through here is also explained there.

The first step to get any posterior inference set up is to specify the statistical model that we assume underlies the data generation process for the thing that we will observe. Mathematically, this means that we will try to write down the joint probability that a given observation occurs (the likelihood) as a function of some parameters that we may know with absolute certainty, or that we don't know and thus represent as unobserved random variables. This latter case means that we must assume a priori that these uncertain parameters come from some other probability distribution. This means that the statistical model will have three main parts (a short sketch after the list maps each part to pymc3 code):

  1. Some variables that we will be able to observe somehow, for which we write the likelihood as a function of some parameters
  2. Parameters whose values we don't know with absolute certainty, so we assume a priori that they are sampled from some probability distribution (which can have parameters itself, leading to hierarchical statistical models)
  3. Parameters whose values we know with absolute certainty (so there is no a priori probability distribution associated with them).
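
To make those three parts concrete, here is a minimal, hypothetical sketch (the distributions, names and data are made up for illustration only, they are not your actual model):

import numpy as np
import pymc3 as pm

data = np.random.randn(50)  # placeholder for whatever you actually observe

with pm.Model():
    mu = pm.Normal('mu', mu=0, sigma=10)  # part 2: unknown parameter, so it gets a prior
    known_sigma = 1.0                     # part 3: a value we know with certainty, just a plain number
    y = pm.Normal('y', mu=mu, sigma=known_sigma,
                  observed=data)          # part 1: the observed variable, i.e. the likelihood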

I'm sorry if this is known to you, but I felt that there was some confusion in your post because you said that your observations come from a normal with parameter values that you already knew, and then there was a uniform prior which was not associated with any parameter of your observed data. The terminology can be a bit confusing but it is very important.

Ok, we went through some really general stuff and now we’ll see how the statistical model is specified in pymc3 and then how you infer the posterior distribution.

First I'll write down how a particular model would look with just mathematical notation:
obs_{\mu} \sim N(\mu=1.3, \sigma=0.05)
obs_{\sigma} \sim U(\mathrm{lower}=0, \mathrm{upper}=1)
obs \sim N(\mu=obs_{\mu}, \sigma=obs_{\sigma})

The statistical model that I wrote down represents the process we assume underlies the generation of the observations obs. I assume that we don't know a priori with full certainty either the mean or the standard deviation of the observations. I also assume a prior distribution for the observation's mean and standard deviation parameters (a normal with mean 1.3 and standard deviation 0.05 for obs_{\mu}, and a uniform between 0 and 1 for obs_{\sigma}).

This model would be written in pymc3 as

import numpy as np
import pymc3 as pm

observed_values = np.random.randn(100) # Here you would put your actual observations
with pm.Model() as model:
    obs_mu = pm.Normal('obs_mu', mu=1.3, sigma=0.05)
    obs_sigma = pm.Uniform('obs_sigma', lower=0, upper=1)
    obs = pm.Normal('obs', mu=obs_mu, sigma=obs_sigma,
                    observed=observed_values)

Note that it is really easy to write down the statistical model with the conditional dependencies between the a priori unknown parameters and the observed data. Now we’ll draw samples from the posterior distribution like this:

with model:
    trace = pm.sample(2000)

The trace contains samples of your unobserved, a priori unknown random variables that match their posterior probability distribution given the observed data. You can view the trace in many ways, but I'll show the two simplest ones (refer to arviz for a detailed list):

with model:
    pm.plot_posterior(trace)
    pm.traceplot(trace)

The first plots the marginal posterior distribution of each of the unknown parameters (for the joint probability you have to look at plot_pair). The second shows the trace of the Markov chain for each parameter.
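
For example, a pairwise view of the joint posterior could look something like this (depending on your pymc3 and arviz versions the function may instead be available as arviz.plot_pair):

with model:
    pm.plot_pair(trace)  # pairwise joint posterior of obs_mu and obs_sigma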

I'm sorry if I still haven't understood your particular statistical model, because you seem to already know all of the observation's distribution parameter values, so I don't know which parameter the prior corresponded to. At least I hope that I was detailed enough to help you express your statistical model more clearly.


Thank you for replying to my post.

I will present the details of my problem. First, I only know the value range of S, [0, 1]. I randomly drew S values with seed (20). I have 20 values, and then I ran experiments with these 20 values, which gave me 20 corresponding values of K. I see that the K coefficient has a normal distribution with mean 1.3 and standard deviation 0.05. Now I want to find the posterior of the S coefficient. How can I find the posterior of the S coefficient?
Thank you.

Your generative process is still rather ambiguous. What I understand is that the variable S is a priori unknown, and is assumed to be sampled from a Uniform(0, 1). Then you get 20 samples of another variable K, which you would treat as your observed data.

How does S relate to K?

This question is the key to fully setting up your generative process. For example, let's imagine that S determines the standard deviation of K, and that the mean is always 1.3. In this case you would write down your model as

with pm.Model():
    S = pm.Uniform('S', 0, 1)
    K = pm.Normal('K', mu=1.3, sigma=S, observed=your_observed_data)
    # And finally you infer the posterior of S doing this
    trace = pm.sample()
    pm.traceplot(trace)
    pm.plot_posterior(trace)

This is an interpretation of what you wrote about your case, but the key link that you still need to write down is: how does S condition the values observed in K? I honestly cannot help you with that, because it depends entirely on your experimental setup. Once you sort that out, you should be able to write down your model and use sample to infer S's posterior.

I am referring to the document. On page 7 it follows the same approach that I am taking. The likelihood is just estimated through a stochastic method based on gPC. Now I have a histogram, as the power in the document depends on s.
How do I apply Bayesian inference to that?

Looking at that document you have the likelihood of the raw signal d_{i} as

\mathcal{L} = \prod\limits_{i}\mathrm{N}\left[d_{i}\mid\mu=F_{i}(h, t_{c}, L_{c}), \sigma=\sigma_{i}\right]
So the raw signal is supposed to be normally distributed with mean F_{i}(h, t_{c}, L_{c}) and standard deviation \sigma_{i}. The function F depends on the parameters h, t_{c} and L_{c}, on which you should place some form of priors (these are sketched in slide 6). Ideally, you should just need the raw signal d_{i} and the function F_{i} to do the Bayesian inference for the three parameters' posterior probabilities.

The presentation says that they use the histogram and spectrum of the d_{i} measurements to get some statistics for the parameters. They don't seem to use them to do Bayesian inference. If you wanted to do that, then you should compute the likelihood of getting a certain value of the power spectrum, given that the raw signal d_{i} is distributed as a Gaussian with the above parameters. You might be able to do this with the formulae for the power spectral density, but it involves some complicated math (the Fourier transform of your raw stochastic signal, auto-correlation times, finite-size effects), and we cannot help you with that.

Again, for your problem, you should only need the function F_{i}, some priors over the three parameters h, t_{c} and L_{c}, and would then be able to infer those posterior distributions with:

with pm.Model():
    h = pm.Normal('h', 0, 1) # Some more appropriate prior
    t_c = pm.Uniform('t_c', -1/omega_ci, tau_e) # Prior taken from slide 6
    L_c = pm.Uniform('L_c', -rho_i, a) # Prior taken from slide 6
    sigma = pm.Gamma('sigma', 1, 1)  # Some more appropriate prior
    d = pm.Normal('d', mu=F(h, t_c, L_c), sigma=sigma,
                  observed=observed_raw_data)
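    # Finally, draw posterior samples for h, t_c and L_c; the priors above and
    # the form of F are placeholders that you would adapt to your actual problem
    trace = pm.sample()
    pm.plot_posterior(trace)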