I am making inference on the parameters of a power law y = \lambda x ^ {\alpha} with 0 < \alpha < 1. Obviously, the parameters \lambda and \alpha are somehow dependent on each other, and the pair plot demonstrates this. At this point, I would like to apply the Thompson Sampling algorithm to the fitted curve, drawing a sample from the posterior for each parameter in a way that respects the posterior density. However, if I extracted the parameters independently at random, I have the impression that I would not respect their dependence. So I wonder whether there is a way to condition the extraction of one parameter on the value of the other, or something else that makes the extraction take the dependence of the parameters into account.
First of all, how does PyMC3 manage the dependence between parameters?
Thanks a lot in advance; the spec of my model is below.
Sorry if my questions sound dumb.
This is the model:
with pm.Model() as model:
    alpha = pm.Beta('alpha', alpha=3, beta=2)    # prior on the exponent alpha
    epsilon = pm.HalfNormal('epsilon', sigma=5)  # prior on the observation noise
    lam = pm.HalfNormal('lam', sigma=5)          # prior on the coefficient lambda
    link = pm.Deterministic('link', lam * cost**alpha)
    likelihood = pm.Normal('conversions', mu=link, sd=epsilon, observed=lead)
    trace = pm.sample(25000, chains=2, cores=2, target_accept=0.95, return_inferencedata=True)
And this is the pair plot
Welcome!
If I understand your question correctly, the dependencies you wish to capture/respect are reflected in the plots you are seeing, yes? For example, the strong dependence between `lam` and `epsilon`? If so, then grabbing individual samples from `trace` (or rather `trace.posterior`) will give you a draw from the joint posterior, so all the dependencies you see in that pair plot will be retained.
# grab some "random" samples
samp_idx = [115]
chain_idx = [0]
for s, c in zip(samp_idx, chain_idx):
    sample = trace.posterior.sel(chain=c, draw=s)
    print(sample)
Gives you a sample of all parameter values:
<xarray.Dataset>
Dimensions:  ()
Coordinates:
    chain    int64 0
    draw     int64 115
Data variables:
    alpha    float64 0.691
    epsilon  float64 3.737
    lam      float64 3.705
    link     float64 3.705
Attributes:
    created_at:                 2022-03-19T16:12:04.598767
    arviz_version:              0.11.2
    inference_library:          pymc3
    inference_library_version:  3.11.2
    sampling_time:              3.267703056335449
    tuning_steps:               1000
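For a Thompson-sampling-style use, a single joint draw like the one above can then be plugged into the power law directly. A minimal sketch, assuming the parameter values from the printed sample and a hypothetical grid of `cost` values:

```python
import numpy as np

# One joint posterior draw (values taken from the printed sample above).
# Because alpha_s and lam_s come from the SAME draw, their posterior
# dependence is respected automatically.
alpha_s = 0.691
lam_s = 3.705

# Hypothetical grid of cost values to evaluate the sampled curve on
x = np.linspace(0.1, 10.0, 50)

# Evaluate the sampled power-law curve y = lam * x**alpha
y = lam_s * x ** alpha_s

print(y[:3])
```

Repeating this with a fresh draw on each round gives you one plausible curve per round, which is exactly what Thompson sampling needs.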
Thanks a lot, that is precisely what I was looking for.
One more thing: I notice a strong inverse correlation between lam and alpha (the coefficient and the exponent of the power law), which seems coherent. Isn't that right?
You are correct that it's `alpha` and `lam`, and that dependence definitely makes sense. The deterministic `link` increases as either `lam` or `alpha` increases, so there should be some degree of uncertainty about which specific combination of these two parameters is ultimately "the right" combination.
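You can see the trade-off numerically: if you fix a single point (x, y) on the curve and solve lam = y / x**alpha, a larger alpha forces a smaller lam (for x > 1). A tiny sketch with made-up values:

```python
# Fix one point on the curve y = lam * x**alpha (made-up values)
x, y = 5.0, 10.0

# For each candidate alpha, compute the lam needed to pass through (x, y).
# Since x > 1, x**alpha grows with alpha, so lam must shrink: an inverse
# relationship like the one in the pair plot.
for alpha in [0.2, 0.5, 0.8]:
    lam = y / x ** alpha
    print(f"alpha={alpha:.1f} -> lam={lam:.3f}")
```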