Probably a stupid question; this isn't quite my situation, but it's as close as I can get while keeping things simple.
Let's say I have a model where I think a common set of X features correlates with two observed variables, and I create two likelihoods, e.g.:
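To make this self-contained, assume the imports and a toy dfx like the following (the column names, sizes, and coefficients are made up; the targets are exponentiated so they're strictly positive for the LogNormal likelihoods below):

import numpy as np
import pandas as pd
import pymc as pm
import pytensor.tensor as pt

# Made-up stand-in for my real data: 100 observations, 3 shared
# features, and two positive targets y1 and y2.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
dfx = pd.DataFrame(X, columns=['x1', 'x2', 'x3'])
dfx['y1'] = np.exp(X @ [0.5, -0.3, 0.2] + rng.normal(0.0, 0.2, size=100))
dfx['y2'] = np.exp(X @ [0.1, 0.4, -0.2] + rng.normal(0.0, 0.2, size=100))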
coords = dict(x_jv=dfx.columns.drop(['y1', 'y2']).values)
coords_mutable = dict(oid=dfx.index.values)
with pm.Model(coords=coords, coords_mutable=coords_mutable) as mdl:
y1 = pm.MutableData('y1', dfx['y1'].values, dims='oid')
y2 = pm.MutableData('y2', dfx['y2'].values, dims='oid')
x = pm.MutableData('x', dfx.drop(['y1', 'y2'], axis=1).values, dims=('oid', 'x_jv'))
# 1. Create linear models
b1 = pm.Normal('b1', mu=0.0, sigma=1.0, dims='x_jv')
mu1 = pt.dot(b1, x.T)
sigma1 = pm.InverseGamma('sigma1', alpha=5.0, beta=4.0)
b2 = pm.Normal('b2', mu=0.0, sigma=1.0, dims='x_jv')
mu2 = pt.dot(b2, x.T)
sigma2 = pm.InverseGamma('sigma2', alpha=5.0, beta=4.0)
# 2. Condition using observed
_ = pm.LogNormal('yhat1', mu=mu1, sigma=sigma1, observed=y1, dims='oid')
_ = pm.LogNormal('yhat2', mu=mu2, sigma=sigma2, observed=y2, dims='oid')
How are these two independent likelihoods treated during sampling? Are their log-likelihoods summed together?
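In case it helps, this is how I've been poking at the model's logp (assuming PyMC 5.x, where point_logps and compile_logp are Model methods):

# Per-variable log-probabilities at the initial point: one entry
# each for b1, b2, sigma1, sigma2, yhat1, and yhat2.
print(mdl.point_logps())

# The compiled model logp returns a single scalar for a given point.
logp_fn = mdl.compile_logp()
print(logp_fn(mdl.initial_point()))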