Selecting prior distributions for a Cauchy likelihood

I came across a dataset, checked which distribution fits it best, and it turns out to be the Cauchy distribution. I already know there is a Cauchy distribution in PyMC3, but what I am wondering is: what type of priors should I use when the likelihood is Cauchy?

I looked at Wikipedia and understood that the scale parameter must be greater than 0, so I am thinking of a HalfNormal for the scale/beta parameter. For the location/alpha parameter, my questions are the following:

• Is it possible to use a Normal distribution as the prior for alpha?
• Can I then feed those prior values into the Cauchy likelihood?
• What other distributions are reasonable choices as priors?
• Is it possible to use an Exponential instead of a HalfNormal for the scale?

I have created the following model:

import pymc3 as pm

def run_model(data):
    with pm.Model() as model:
        # Prior distributions for the unknown model parameters:
        sigma = pm.HalfNormal('sigma', sigma=1)
        mu = pm.Normal('mu', mu=0, sigma=1)

        # Likelihood (sampling distribution) of the observations:
        observed_data = pm.Cauchy('observed_data', alpha=mu, beta=sigma, observed=data)

        # Alternatives I tried for starting values / samplers:
        # startvals = pm.find_MAP(model=model)
        # step = pm.Metropolis()
        # step = pm.HamiltonianMC()
        # step = pm.NUTS()

        # Draw 1000 posterior samples per chain (3 chains, NUTS by default)
        trace = pm.sample(draws=1000, tune=1000, chains=3, cores=1, progressbar=True)

        # Posterior predictive sampling:
        post_pred = pm.sample_posterior_predictive(trace, samples=1000)
        print(post_pred['observed_data'].shape)

        print('\nSummary: ')
        print(pm.summary(trace))
        print(pm.summary(post_pred))

    return trace, post_pred

I think you can choose priors similar to those for a Gaussian likelihood (one for the location parameter and one for the scale parameter).
However, a Cauchy likelihood is a pretty rare choice, since its extremely heavy tails imply you can observe values arbitrarily close to inf and -inf (it has no finite mean or variance). If you just want to model heavy-tailed observations, a Student-t likelihood is probably a better choice.
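A quick numerical check of the relationship between the two (using scipy here rather than PyMC3, purely for illustration): the Cauchy distribution is exactly a Student-t with one degree of freedom, so a StudentT likelihood generalizes the Cauchy, and larger degrees of freedom give progressively lighter tails.

```python
import numpy as np
from scipy import stats

x = np.linspace(-10, 10, 201)

# Student-t with df=1 is exactly the standard Cauchy distribution
assert np.allclose(stats.t.pdf(x, df=1), stats.cauchy.pdf(x))

# Larger df -> lighter tails: far from the center, the Cauchy density
# is much larger than a t(df=30) density
print('Cauchy pdf at x=8: ', stats.cauchy.pdf(8.0))
print('t(30)  pdf at x=8: ', stats.t.pdf(8.0, df=30))
```

So choosing a StudentT likelihood and putting a prior on the degrees of freedom lets the data decide how Cauchy-like the tails need to be.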


So, if I go with the StudentT distribution, should I keep the same prior distributions?

Yes, but you also need to assign a prior to the degrees of freedom nu (df).

Okay, I will give it a try. Thanks!


I am still waiting for the results of the StudentT model, but I have a question: when I ran the Cauchy model, I kept getting an error related to the sigma parameter:

ValueError: Mass matrix contains zeros on the diagonal.
The derivative of RV `sigma_P_log__`.ravel() is zero.

What causes this error, and how can I fix it?