I currently have a model that samples a real-valued latent variable R. The latent variable should explain two observed features: one a Bernoulli variable and one a real-valued variable. For example's sake:
```python
import pymc as pm

with pm.Model() as model:
    R = pm.Normal('latent', 0, 1, shape=2)
    p = pm.math.sigmoid(R[0] - R[1])          # pm.math, not pm.Math
    diff = some_real_func(R[0] - R[1])
    bernoulli_like = pm.Bernoulli('b_likelihood', p=p, observed=observed_binary_values)
    real_like = pm.Normal('r_likelihood', diff, sigma=0.5, observed=observed_real_values)
```
How does PyMC sample the posterior when there are two likelihood terms? I've noticed that one likelihood (e.g. `real_like`) can be much larger in magnitude than the other: with ADVI, for example, the ELBO converges around 1e+5 while the Bernoulli likelihood converges around 2000. Would the sampler in that case effectively focus only on accommodating the real-valued likelihood? Does PyMC sum both logp's under the hood?
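For reference, here is a minimal sketch (pure Python, not the PyMC API) of the summation I'm asking about: the joint log-probability is the prior logp plus the logp of *each* observed variable, with no per-likelihood weighting. The latent values, observations, and the use of `r0 - r1` as a stand-in for `some_real_func` are all made up for illustration.

```python
import math

# Hypothetical latent draws and one observation of each type.
r0, r1 = 0.3, -0.2
observed_binary = 1
observed_real = 0.6

def normal_logpdf(x, mu, sigma):
    """Log-density of a Normal(mu, sigma) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def bernoulli_logpmf(x, p):
    """Log-mass of a Bernoulli(p) at x in {0, 1}."""
    return x * math.log(p) + (1 - x) * math.log(1 - p)

# Prior logp for both components of the standard-normal latent.
prior_logp = normal_logpdf(r0, 0, 1) + normal_logpdf(r1, 0, 1)

p = 1 / (1 + math.exp(-(r0 - r1)))   # sigmoid(r0 - r1)
diff = r0 - r1                        # stand-in for some_real_func(R[0] - R[1])

bern_logp = bernoulli_logpmf(observed_binary, p)
real_logp = normal_logpdf(observed_real, diff, 0.5)

# The model's joint logp is just the unweighted sum of all terms,
# so a likelihood with many more (or larger-scale) data points
# contributes proportionally more to the total.
total_logp = prior_logp + bern_logp + real_logp
print(total_logp)
```

If this is indeed what happens, a likelihood summed over many observations will dominate the gradient, which is what I suspect I'm seeing with the ELBO magnitudes above.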