In PyMC v3, there was a discussion about using the component distributions of a mixture model to determine the likelihood that a given data point belongs to one of the components.
In v5, is there an easy way to access the `logp` of the mixture components so we can do something similar with the new-and-improved `Mixture` class?
The last example in this blog post uses `sample_posterior_predictive` to do exactly that: Out of model predictions with PyMC - PyMC Labs
If you want the maths behind it, section 4 here explains it simply.
Note that, compared to the link above, there is an additional normalization factor (basically `sum(w * prob)`) in the equation, which guarantees the probabilities sum to 1.
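To make the normalization concrete, here is a minimal NumPy/SciPy sketch of that calculation for a two-component Gaussian mixture, done in log space (since the question is about `logp`). All the weights, means, and data values below are made up for illustration; in practice you would plug in the posterior draws of your mixture parameters instead.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

# Hypothetical two-component Gaussian mixture (illustrative values only)
w = np.array([0.3, 0.7])        # mixture weights
mu = np.array([-2.0, 3.0])      # component means
sigma = np.array([1.0, 1.5])    # component standard deviations

x = np.array([-1.5, 0.5, 2.8])  # data points to classify

# log(w_k) + logp of each point under each component,
# shape (n_points, n_components)
log_comp = np.log(w) + stats.norm.logpdf(x[:, None], loc=mu, scale=sigma)

# Subtracting logsumexp over components is the log-space version of
# dividing by sum(w * prob): it normalizes each row to a probability vector
log_resp = log_comp - logsumexp(log_comp, axis=1, keepdims=True)
resp = np.exp(log_resp)

print(resp)
print(resp.sum(axis=1))  # each row sums to 1
```

The same normalization is what `sample_posterior_predictive` does implicitly in the blog post's example; doing it in log space just avoids underflow when the component likelihoods are tiny.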