# Understanding mixture distributions

Is it possible to create a mixture of named distributions to estimate parameters of mixture components?

Say I have two banks of coins (A and B), each containing a mix of two coin types. Each type has its own fairness (p1, p2), and I know the composition of each bank (wA is the proportion of type-1 coins in bank A, wB the same for bank B). I draw NA and NB coins from the banks and toss each one once, recording the numbers of heads (KA and KB).

I would set it up like this (in pseudo-pymc):

``````
p1 = pm.Beta('p1', 2, 2)
p2 = pm.Beta('p2', 2, 2)

# intended: per-toss head probability is a mixture over the coin types
pA = pm.Mixture.dist([p1, p2], w=[wA, 1 - wA])
pB = pm.Mixture.dist([p1, p2], w=[wB, 1 - wB])

likeA = pm.Binomial('likeA', n=NA, p=pA, observed=KA)
likeB = pm.Binomial('likeB', n=NB, p=pB, observed=KB)
``````

This, however, does not compile, because `pm.Mixture.dist` cannot accept named distributions as components.

Or am I thinking in the wrong direction?

Did you take a look at this notebook? It may be a bit overkill for your purposes, but it should give you some ideas. True mixtures tend to be difficult to sample from (e.g., see here).

You may want each of your likelihoods to be a Mixture of two binomials, one taking `p1`, the other `p2` as the probability parameter.

I think that's a slightly different likelihood. A mixture of binomials describes picking a single random coin from a bank and tossing it N times, not drawing N coins and tossing each of them once.
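To make the distinction concrete, here is a stdlib-only sketch contrasting the two processes (the values `w`, `p1`, `p2`, `N` are made-up toy numbers): the means agree, but the variances differ sharply once N > 1.

```python
# "Mixture of binomials": pick ONE coin type, then toss that coin N times.
# "Draw N coins, toss each once": every toss independently uses a
# randomly-typed coin, so K ~ Binomial(N, w*p1 + (1-w)*p2).
w, p1, p2, N = 0.5, 0.2, 0.8, 100
p_bar = w * p1 + (1 - w) * p2  # marginal per-toss head probability

# Variance of the "N coins, one toss each" process: plain Binomial(N, p_bar)
var_per_coin = N * p_bar * (1 - p_bar)

# Variance of the binomial mixture, by the law of total variance:
# Var[K] = E[Var[K | p]] + Var[E[K | p]]
var_mixture = (w * N * p1 * (1 - p1) + (1 - w) * N * p2 * (1 - p2)
               + w * (N * p1 - N * p_bar) ** 2
               + (1 - w) * (N * p2 - N * p_bar) ** 2)

print(var_per_coin, var_mixture)  # roughly 25 vs 916 with these numbers
```

The huge variance gap is why using the wrong likelihood here would badly misfit the data even though both processes have the same expected number of heads.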

If I’m not overthinking, this might do the job:

``````
# marginal per-toss head probability in each bank
p_A = wA * p1 + (1 - wA) * p2
p_B = wB * p1 + (1 - wB) * p2
``````
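For what it's worth, this is exact rather than an approximation: if each drawn coin is independently of type 1 with probability w, the total head count is exactly Binomial(N, w*p1 + (1-w)*p2). A small stdlib check with toy numbers, enumerating the latent number of type-1 coins:

```python
from math import comb

w, p1, p2, N = 0.3, 0.2, 0.8, 6
p_A = w * p1 + (1 - w) * p2

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(N + 1):
    # Enumerate m = number of type-1 coins among the N drawn,
    # then convolve the head counts of the two sub-groups.
    pmf_k = 0.0
    for m in range(N + 1):
        for h1 in range(min(m, k) + 1):
            h2 = k - h1
            if h2 <= N - m:
                pmf_k += (binom_pmf(m, N, w)
                          * binom_pmf(h1, m, p1)
                          * binom_pmf(h2, N - m, p2))
    # matches the single collapsed Binomial(N, p_A) pmf
    assert abs(pmf_k - binom_pmf(k, N, p_A)) < 1e-12
```

So collapsing the per-toss Bernoulli mixture into a single success probability loses nothing here.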

Thanks, I’ve seen the second one, but not the first one.

Yes, that sounds correct. If you observe each coin only once, your per-coin likelihood should then be Bernoulli (or a mixture of Bernoullis).
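Putting it together: given the parameters, the per-coin tosses are i.i.d. Bernoulli(pA), so the aggregated head counts stay Binomial and the original model can be sketched like this (still pseudo-pymc, assuming wA, wB, NA, NB, KA, KB are known constants as in the question; the `pm.Deterministic` wrappers are optional and just record pA/pB in the trace):

``````
with pm.Model():
    p1 = pm.Beta('p1', 2, 2)
    p2 = pm.Beta('p2', 2, 2)

    # marginal per-toss head probability in each bank
    pA = pm.Deterministic('pA', wA * p1 + (1 - wA) * p2)
    pB = pm.Deterministic('pB', wB * p1 + (1 - wB) * p2)

    likeA = pm.Binomial('likeA', n=NA, p=pA, observed=KA)
    likeB = pm.Binomial('likeB', n=NB, p=pB, observed=KB)
``````

Note that p1 and p2 are only identifiable up to relabeling if wA and wB are too similar; having two banks with different compositions is what separates them.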