How to determine identifiability of parameters under uncertainty?

I will make my question short and direct. Does anybody know how to determine whether a parameter of a model is identifiable (or not) with PyMC3?

I am afraid your question is too vague for a simple answer. In general, identifiability issues can be detected by reasoning mathematically about the model or by looking for patterns of inefficient sampling (slow convergence and/or multimodal, symmetric posteriors). I don’t think there is an automatic way to detect identifiability issues.
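As a concrete illustration of such a flat direction (a toy model I am making up here, not something from the thread): if the likelihood only depends on the sum of two parameters, moving along `a + b = const` changes nothing, which is exactly what produces the high posterior correlations and slow mixing mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y ~ Normal(a + b, 1). Only the sum a + b is identifiable;
# any pair (a, b) with the same sum gives the same likelihood.
y = rng.normal(loc=2.0, scale=1.0, size=100)

def loglik(a, b):
    return -0.5 * np.sum((y - (a + b)) ** 2)

# Moving along the ridge a + b = const leaves the likelihood unchanged.
# When sampling such a model, this flat direction shows up as a long
# diagonal ridge in the pair plot of a and b and as poor convergence.
assert np.isclose(loglik(0.5, 1.5), loglik(2.0, 0.0))
```

The same degeneracy is what a sampler run on this model would reveal empirically, e.g. as near-perfect correlation between the two chains' draws of `a` and `b`.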

Dear @ricardoV94,

Thanks for the answer. Following your suggestion let me improve my question then.

I found deterministic ways to determine the non-identifiability of parameters (going from the Fisher matrix to profile likelihoods) in this paper: link to paper. I was wondering whether there is something similar for the case of “noisy/random” observed data.
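For reference, the Fisher-matrix check mentioned above can be sketched numerically. For a deterministic model with Gaussian noise, the Fisher information is F = SᵀS/σ², where S is the sensitivity matrix of model outputs with respect to parameters; a (near-)zero eigenvalue of F flags a locally non-identifiable parameter combination. The model below is purely illustrative (only the product of the two parameters enters it), not taken from the paper:

```python
import numpy as np

def f(theta, t):
    # Illustrative model: only the product theta[0] * theta[1] enters,
    # so the two parameters are not separately identifiable.
    return theta[0] * theta[1] * t

t = np.linspace(0.0, 1.0, 20)
theta0 = np.array([2.0, 3.0])
eps = 1e-6

# Finite-difference sensitivity matrix S[i, j] = d f_i / d theta_j
S = np.empty((t.size, theta0.size))
for j in range(theta0.size):
    dtheta = np.zeros_like(theta0)
    dtheta[j] = eps
    S[:, j] = (f(theta0 + dtheta, t) - f(theta0 - dtheta, t)) / (2 * eps)

# For unit Gaussian noise, the Fisher information is F = S.T @ S.
F = S.T @ S
eigvals = np.linalg.eigvalsh(F)

# A near-zero eigenvalue signals local non-identifiability; the
# corresponding eigenvector gives the flat parameter direction.
print(eigvals)  # smallest eigenvalue ~ 0
```

The eigenvector attached to the small eigenvalue points along the unidentifiable combination, which is the same information a profile likelihood would reveal as a flat profile.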

Thanks for the reference. It was an interesting practical take on identifiability issues, which I wasn’t aware of.

I am not sure what is meant by “noisy/random” data. Some model misspecification / departures from the likelihood function?

One thing that makes identifiability analysis less straightforward in a Bayesian setting is that the distinction between latent and observed variables is not as clear-cut as in MLE. In particular, it depends on how strong your prior information is: two parameters that are jointly unidentifiable under MLE may be identifiable in a Bayesian setting if they are given precise enough priors.
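That point can be made concrete with conjugate normal algebra for the additive model y = a + b + noise (all the numbers below are made up for illustration): with vague priors the marginal posterior of `a` stays enormous, but a tight prior on `b` pins `b` down and thereby identifies `a`.

```python
import numpy as np

n, sigma = 100, 1.0
# Design matrix: every observation sees a + b, so the columns are identical
X = np.ones((n, 2))

def posterior_cov(prior_sd_a, prior_sd_b):
    # Normal prior + normal likelihood => posterior precision is the
    # sum of the prior precision and X.T @ X / sigma**2.
    prior_prec = np.diag([1 / prior_sd_a**2, 1 / prior_sd_b**2])
    post_prec = prior_prec + X.T @ X / sigma**2
    return np.linalg.inv(post_prec)

vague = posterior_cov(100.0, 100.0)   # nearly flat priors on both
tight = posterior_cov(100.0, 0.1)     # precise prior on b only

# With vague priors only the sum a + b is constrained, so the marginal
# posterior sd of a is huge; the tight prior on b resolves it.
print(np.sqrt(vague[0, 0]), np.sqrt(tight[0, 0]))
```

The same mechanism is why reparameterizing or adding informative priors is the usual practical fix for this kind of degeneracy.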

I found a very vague discussion here in relation to Bayesian models:

https://statmodeling.stat.columbia.edu/2014/02/12/think-identifiability-bayesian-inference/

Thanks for answering. I referred to “noisy/random” data as a justification for using Bayesian inversion. If the observed data are deterministic (error-free) and the model to fit is also deterministic, then you can use least squares and, at first sight, that’s the end of the problem.

When the observed data have some level of noise, least squares is replaced by Bayesian inference: instead of obtaining a unique parameter tuple that minimizes the residual, you obtain a probability distribution over the parameters.
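To make that contrast explicit with a one-parameter linear fit (the numbers here are invented): under a flat prior and known Gaussian noise, the Bayesian posterior is a normal distribution centred on the least-squares estimate, so the two approaches agree on the point estimate and the Bayesian one adds a spread.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
true_slope = 2.0
sigma = 0.3
y = true_slope * x + rng.normal(scale=sigma, size=x.size)

# Least squares: a single point estimate minimising the residual sum.
slope_ls = (x @ y) / (x @ x)

# Bayesian inversion with a flat prior and known noise sd: the posterior
# for the slope is Normal, centred on the least-squares estimate, with
# an explicit spread quantifying the uncertainty.
post_mean = slope_ls
post_sd = sigma / np.sqrt(x @ x)

print(post_mean, post_sd)
```

With informative priors the posterior mean would be pulled away from the least-squares estimate, which is exactly where the Bayesian and deterministic identifiability analyses start to diverge.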

Anyway, there is a lot of literature on identifiability for deterministic models and fits, but I couldn’t find the equivalent for Bayesian inversion.

Will keep updating the thread.