This is a broad question, since I am trying to understand the usefulness of Bayesian inference for models with multiple parameters.

In general, I am interested in the MLE. However, the reason I am using pymc3 is that it lets me inspect the full distribution of the parameter of interest, not just an error interval.

Now I am moving to models with multiple parameters and I am a bit confused. The MLE (found through optimization) yields the combination of parameter values that maximizes the likelihood.
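To make concrete what I mean by "MLE through optimization", here is a minimal sketch (not my actual model; it assumes normally distributed synthetic data and uses scipy directly rather than pymc3):

```python
# Hypothetical minimal example: joint MLE for a Normal(mu, sigma) model,
# obtained by minimizing the negative log-likelihood over both parameters.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=200)  # synthetic data

def neg_log_likelihood(params):
    mu, log_sigma = params          # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)  # should be close to the sample mean and std
```

The optimizer searches the joint parameter space, so the answer is a single point (mu_hat, sigma_hat) rather than anything distribution-like.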

Bayesian inference, as far as I understand, produces individual posteriors for each parameter (marginal distributions), not a joint distribution over the multidimensional parameter space.

What should I do to find the set of values that maximizes the likelihood? Could it be simply the mean of each individual parameter's posterior?
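To illustrate the question, here is a toy grid-based check (an assumption-laden sketch, not pymc3: flat priors on a Normal(mu, sigma) model, posterior evaluated on a grid, so the joint maximum coincides with the MLE). With a small sample the sigma marginal is skewed, so its mean does not sit at the joint maximum:

```python
# Toy check: is the mean of each marginal posterior the same point as the
# joint maximum? Under flat priors the posterior is proportional to the
# likelihood, so the joint maximum here IS the MLE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=30)  # small sample -> visible skew

mu_grid = np.linspace(-1.0, 1.0, 201)
sigma_grid = np.linspace(0.4, 2.5, 201)
M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# log-likelihood on the grid; exponentiate and normalize to get the posterior
loglik = np.sum(stats.norm.logpdf(data[:, None, None], loc=M, scale=S), axis=0)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# joint maximum (the MLE under flat priors)
i, j = np.unravel_index(post.argmax(), post.shape)
mu_mle, sigma_mle = mu_grid[i], sigma_grid[j]

# means of the two marginal posteriors
mu_mean = np.sum(post.sum(axis=1) * mu_grid)
sigma_mean = np.sum(post.sum(axis=0) * sigma_grid)

print(mu_mle, mu_mean)        # close: the mu marginal is symmetric
print(sigma_mle, sigma_mean)  # the skewed sigma marginal pulls its mean above the MLE
```

In this toy case the marginal means only match the joint maximum for the symmetric parameter, which is exactly what confuses me about using per-parameter summaries.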

Any pointers on this would be very welcome.

Thanks