How does sampling affect the logp?

This is an interesting question, and exactly the kind of question that I like: what exactly is model fitting/calibration, and how does it relate to sampling?

The first thing to remember is that the model logp is a function that takes inputs and spits out an output. Once you have your model defined, the logp is fixed: it takes the free parameters as input and outputs a scalar. In this case you have two inputs, mu and std.
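To make that concrete, here is a minimal sketch (assuming a simple Normal observation model with free parameters mu and std, and made-up data; not your actual model) showing that the logp is just a fixed function you can evaluate at any point in parameter space:

```python
import numpy as np
import pymc3 as pm

data = np.random.randn(100)  # made-up observations, just for illustration

with pm.Model() as model:
    mu = pm.Normal('mu', 0., 10.)
    std = pm.HalfNormal('std', 10.)
    pm.Normal('obs', mu=mu, sd=std, observed=data)

# model.logp is fixed once the model is defined: give it a point
# (values for the free parameters) and it returns a single scalar.
# Note that HalfNormal is sampled on the log scale, so its free
# variable is named 'std_log__'.
point = model.test_point          # e.g. {'mu': 0.0, 'std_log__': ...}
print(model.logp(point))

point['mu'] = 1.0                 # a different point in the same space
print(model.logp(point))          # a different scalar, same function
```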
Now, think of model fitting in the traditional sense, which gives you a single “best” value for each free parameter. Even when you do model fitting, you don't change the model logp. In that sense, I like to think of modeling as constructing a space, and our goal is to get information from this space. Sometimes taking one point from this multi-dimensional space is enough for your application, so we do MLE to get a vector of best values. But most of the time we need more, and that is where sampling comes in: it maps out the geometry (approximately) of that space. See the sketch below.
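Continuing the same hypothetical model, the contrast looks roughly like this: `find_MAP` picks out a single point from the space (a MAP estimate, standing in here for an MLE-style point estimate), while `sample` returns many draws that approximately map out the geometry:

```python
with model:
    best_point = pm.find_MAP()            # one "best" point in parameter space
    trace = pm.sample(1000, tune=1000)    # many points that (approximately)
                                          # map out the geometry of the space

print(best_point['mu'])                       # a single point estimate
print(trace['mu'].mean(), trace['mu'].std())  # a summary of the mapped space
```

In both cases the logp itself is untouched; you are just querying the same fixed function in different ways.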

What helps is to have a more intuitive understanding of the (log-)likelihood function; you might find my recent talk @pydataberlin useful: https://github.com/junpenglao/All-that-likelihood-with-PyMC3
