I am stuck at http://docs.pymc.io/notebooks/gaussian_mixture_model.html, specifically the part where pm.Potential is used in setting up the model. I am unable to understand its necessity in the example, and also how p_min_potential or order_means_potential affects the distribution of p or means. Any help would be much appreciated!

`pm.Potential()` adds an extra term to the final model logp; usually we use it to put an additional constraint on the log-likelihood. I think the key is to first understand how a potential works.

For example, say you are trying to model a parameter x. Instead of the usual approach of putting a prior on the parameter (x ~ dist), you can put a flat prior on x and then add the prior information via a potential.

Notice that, from the perspective of the model logp, the two are equivalent, since `logp(model) = logp(param | prior) + logp(data | param)`, and it does not matter whether the prior term enters through the prior distribution or through a potential.
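To make that equivalence concrete, here is a small numerical sketch in plain NumPy (not PyMC itself): it compares an explicit Normal prior on x against a flat prior plus a potential carrying the same prior term. The helper `normal_logpdf` and the specific numbers are only for illustration.

```python
import numpy as np

def normal_logpdf(v, mu, sd):
    """Log-density of Normal(mu, sd) evaluated at v."""
    return -0.5 * np.log(2 * np.pi * sd**2) - (v - mu) ** 2 / (2 * sd**2)

data = np.array([0.2, -0.1, 0.4])  # toy observations
x = 0.3                            # a candidate parameter value

# Way A: explicit prior x ~ Normal(0, 1)
logp_a = normal_logpdf(x, 0.0, 1.0) + normal_logpdf(data, x, 1.0).sum()

# Way B: flat prior (contributes only a constant, taken as 0 here)
# plus a potential term carrying the same prior information
potential = normal_logpdf(x, 0.0, 1.0)
logp_b = 0.0 + potential + normal_logpdf(data, x, 1.0).sum()

print(np.isclose(logp_a, logp_b))  # the two model logps agree
```

In PyMC this second formulation would be a `pm.Flat` prior plus a `pm.Potential` holding the prior's log-density.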

Now, say you use Metropolis–Hastings to sample: at each step t you propose a value and accept or reject it by comparing its logp with that of the previous sample. This is the foundation of how we can use a potential to add constraints to the model, which is what you see in the example above. For example, say you want to model x1 and x2 with x1 < x2 (i.e., they are ordered). You can add a potential to the model so that whenever x1 > x2, the model logp goes to -inf. In other words, whenever a proposal (x1_p, x2_p) has x1_p > x2_p, the logp becomes -inf and the proposal is rejected, resulting in the desired constraint on x1 and x2.
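As a sketch of that rejection mechanism, here is a minimal random-walk Metropolis sampler in plain NumPy (not PyMC; the `order_potential` function below plays the role that `pm.Potential` plays in the notebook, and all names here are illustrative). The target is two independent standard normals plus an ordering potential; any proposal with x1 >= x2 gets logp -inf and is always rejected.

```python
import numpy as np

def order_potential(x1, x2):
    # Mimics a pm.Potential term: 0 when the constraint holds,
    # -inf when x1 >= x2, so violating proposals get rejected.
    return 0.0 if x1 < x2 else -np.inf

def model_logp(x1, x2):
    # Base model: two independent standard normals, plus the potential.
    base = -0.5 * (x1**2 + x2**2)
    return base + order_potential(x1, x2)

rng = np.random.default_rng(0)
x = np.array([-1.0, 1.0])          # start inside the constraint
logp = model_logp(*x)
samples = []
for _ in range(2000):
    prop = x + rng.normal(scale=0.5, size=2)   # random-walk proposal
    logp_prop = model_logp(*prop)
    # log(u) < -inf is always False, so constraint violations never pass
    if np.log(rng.uniform()) < logp_prop - logp:
        x, logp = prop, logp_prop
    samples.append(x.copy())

samples = np.array(samples)
print((samples[:, 0] < samples[:, 1]).all())  # every kept state is ordered
```

Every state in the chain satisfies x1 < x2, because any proposal outside that region has logp -inf and can never be accepted; this is exactly the effect of `order_means_potential` in the notebook.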

Thank you! This clears it up