Hello there, this is my first question here.
Motivated by the capabilities of PyMC3, I tried using it to fit a function to some data and also to estimate the uncertainties in the fit. Before this I used to work with lmfit (for fitting) and emcee (as the MCMC sampler). I noticed that the method
map_estimate = pm.find_MAP(model=basic_model, method='Powell') returns the maximum a posteriori estimate (i.e. the parameter tuple that best fits the data).
As you can see in the next figure, I compared the lmfit best fit with the pm.find_MAP() fit. I was wondering why the MAP result is so far from the lmfit result, given that I used the same parameter ranges for both methods, and in PyMC3 I used uniform priors so that the solutions would be comparable.
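For context on why I expected the two to agree: with flat (uniform) priors the MAP point should coincide with the maximum-likelihood solution, so the two optimizers ought to land on the same parameters up to convergence issues. Here is a minimal sketch of that equivalence with made-up straight-line data (the data and variable names are mine, not from my actual problem), comparing a least-squares solution against a Powell-minimized negative Gaussian log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic straight-line data with known Gaussian noise (sigma = 0.5)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.5 * x + 1.0 + rng.normal(0, 0.5, x.size)

def neg_log_like(params):
    a, b = params
    resid = y - (a * x + b)
    # Negative Gaussian log-likelihood, constant terms dropped
    return 0.5 * np.sum((resid / 0.5) ** 2)

# Least-squares solution (what lmfit converges to for a linear model)
A = np.vstack([x, np.ones_like(x)]).T
ls_params, *_ = np.linalg.lstsq(A, y, rcond=None)

# Powell-optimized MLE (what find_MAP with flat priors should recover)
mle = minimize(neg_log_like, x0=[1.0, 0.0], method='Powell')

print(ls_params, mle.x)  # the two estimates should agree closely
```

In this toy case the two approaches match, which is why the discrepancy I see in my real problem surprises me.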
Eventually I thought of an alternative to bypass this issue: when defining the log-likelihood, I can use the lmfit result as the observed data:
Y_obs = pm.Normal('Y_obs', mu=mu_val, sigma=sigma_val, observed=lmfit_result). My question is: how accurate/robust can this be for estimating the uncertainties in the best-fit values?