Bernoulli correlations

Hi all,
For example, I have two biased coins whose Bernoulli parameters are unknown to me.
But I know that both of them depend on a single latent variable.
In general, I have the distributions P(θ1 | _latent) and P(θ2 | _latent), but, for simplification,
let the dependency be linear, so θ1 = link(_latent · c1) and θ2 = link(_latent · c2), where the constants
c1 and c2 are known.
Now, I have 100 trials for the first coin and 50 for the second. How can I combine that information to infer the _latent parameter?

That’s fine: you will have two likelihoods and two expressions for the thetas, but one distribution/model for the hierarchical latent variable. Since the latent variable sits at the top of the graph for both likelihoods, it will be informed by both.
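Concretely, writing z for your _latent and using your link function and the known c1, c2, the posterior you end up sampling from is, roughly,

$$
p(z \mid y^{(1)}, y^{(2)}) \;\propto\; p(z)\prod_{i=1}^{100}\mathrm{Bernoulli}\big(y^{(1)}_i \mid \theta_1\big)\prod_{j=1}^{50}\mathrm{Bernoulli}\big(y^{(2)}_j \mid \theta_2\big),
\qquad \theta_k = \mathrm{link}(z \cdot c_k),
$$

so both sets of trials pull on z through their own θ.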

Thank you! But can it somehow be described as a pymc3 snippet?

I think something like this is what you want:

import pymc3 as pm
import numpy as np
from scipy.special import expit

## Generate data: two coins whose success probabilities share a single latent variable
N0, N1 = 1000, 500
true_latent = 0.2
true_c0, true_c1 = 1., 5.
true_p0 = expit(true_c0 * true_latent)
true_p1 = expit(true_c1 * true_latent)

data = [(np.random.rand(N) < p).astype(int) for p, N in zip([true_p0, true_p1], [N0, N1])]

## Define model
with pm.Model() as model:
    latent = pm.Normal('latent', 0., 1.)                        # shared latent variable
    c0 = pm.Exponential('c0', 1.)                               # scale for coin 0, given a prior here rather than fixed
    c1 = pm.Exponential('c1', 1.)                               # scale for coin 1
    p0 = pm.Deterministic('p0', pm.math.sigmoid(c0 * latent))   # θ for coin 0
    p1 = pm.Deterministic('p1', pm.math.sigmoid(c1 * latent))   # θ for coin 1
    pm.Bernoulli('x0', p0, observed=data[0])                    # likelihood for coin 0
    pm.Bernoulli('x1', p1, observed=data[1])                    # likelihood for coin 1

## Inference
with model:
    trace = pm.sample(1000, target_accept=0.99)
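
If it helps, you can then check how well the shared latent and the implied probabilities are recovered, e.g. by comparing the posterior of p0/p1 with the true_p0/true_p1 used to generate the data:

## Inspect the posterior (compare p0, p1 against true_p0, true_p1 from the generation step)
print(pm.summary(trace))
pm.traceplot(trace)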

Note that this particular code doesn’t yield a good fit, because the model allows ambiguity between the latent variable and the constants c0, c1. But in principle, I believe this does what you’re asking for.
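
Since you said c1 and c2 are known, one way to remove that ambiguity is to plug the constants in as fixed values instead of giving them priors. A minimal sketch, reusing data, true_c0 and true_c1 from the generation step above (model_fixed and trace_fixed are just names I picked for this variant):

## Variant with known constants: fixing the scales removes the latent/scale ambiguity
with pm.Model() as model_fixed:
    latent = pm.Normal('latent', 0., 1.)
    p0 = pm.Deterministic('p0', pm.math.sigmoid(true_c0 * latent))  # true_c0 treated as known
    p1 = pm.Deterministic('p1', pm.math.sigmoid(true_c1 * latent))  # true_c1 treated as known
    pm.Bernoulli('x0', p0, observed=data[0])
    pm.Bernoulli('x1', p1, observed=data[1])
    trace_fixed = pm.sample(1000)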