I got a result like this figure (with a Uniform[-100,100] prior). Does it mean that the posterior of this parameter is approaching zero?
If the parameter approaches zero, the constructed model makes no sense. So when the parameter approaches zero and a half-normal-like posterior (as shown in the figure) arises, does that mean the sampling has gone wrong, or that the likelihood, which is the hypothetical model in my study, is not reliable?
If the problem lies in the likelihood: my actual target model is complex, with 14-20 parameters. Should I choose another Bayesian method for the parameter estimation? And is there a Bayesian method that can handle that many parameters? Thanks!
And this is the source code, which is a preliminary simple model. I want to add more parameters, but as the number of parameters increases, sampling struggles to converge. That is why I am wondering whether to look for a different Bayesian method.
There is a lot packed into that question. I see two options:
1. The model/inference goal is theoretically fine, but your data is too noisy and your prior allows invalid parameter values. In that case you might want to make your prior smarter, i.e., in line with your prior knowledge (for example, regardless of whether it makes sense to model your data with a Normal distribution, there is no scenario in which it makes sense to allow negative standard deviations), or get more data.
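To make option 1 concrete, here is a minimal grid-approximation sketch (pure numpy, with synthetic data and a hypothetical HalfNormal-scale choice of 5) showing why restricting the prior of a standard-deviation parameter to positive values is harmless and sensible: a Uniform[-100, 100] prior would waste half its mass on sigma < 0, where the likelihood is undefined anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=2.0, size=50)  # synthetic data, true sigma = 2

# Grid over candidate sigma values: positive support only.
# A Uniform[-100, 100] prior would put half its mass on sigma < 0,
# where a Normal likelihood is undefined.
sigma_grid = np.linspace(0.01, 10.0, 1000)

# Log-likelihood of the data under Normal(0, sigma) at each grid point.
loglik = np.array([
    np.sum(-0.5 * np.log(2 * np.pi * s**2) - data**2 / (2 * s**2))
    for s in sigma_grid
])

# Weakly informative HalfNormal(scale=5) prior (up to a constant):
# encodes only "sigma is positive and probably not huge".
logprior = -0.5 * (sigma_grid / 5.0) ** 2

# Unnormalized log-posterior, then normalize on the grid.
logpost = loglik + logprior
post = np.exp(logpost - logpost.max())
post /= post.sum() * (sigma_grid[1] - sigma_grid[0])

map_sigma = sigma_grid[np.argmax(post)]
```

The posterior mode lands near the true value of 2 without the sampler ever visiting invalid negative regions; in a PyMC-style model the analogous move is simply giving the scale parameter a positive-support prior instead of a wide Uniform.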
2. The zero-valued parameter is relevant to your model/inference goal, and it's telling you that things are not working as expected. In this case you should not change the model just so that it now works and gives nice answers, since you might be biasing it.
I think more context is needed to disambiguate between options 1 and 2.
What I am doing is trying to find a suitable model to explain the reaction balance. Because the system is sensitive, I have to try different models with different reaction mechanisms.
That is why I think the model may be unreliable.
But you're right, maybe the data is also not sufficient.