I am modelling two sets of parameters, c (4 x 4) and theta (1 x 4), each with a related likelihood, as in the model below. However, when I sample I run into the following problem.
```python
import numpy as np
import pymc3 as pm

# N_data, observed_successes, N_data2, successes2 are my data arrays
model = pm.Model()
with model:
    # Priors for unknown model parameters
    hyper_mu = pm.Normal('hyper_mu', mu=0, sd=10)
    hyper_sd = pm.Gamma('hyper_sd', alpha=0.01, beta=0.001)
    c = pm.Normal('c', mu=hyper_mu, sd=hyper_sd, shape=(4, 4))  # We want this

    # Prior for nuisance parameter
    theta0 = np.ones(4)
    theta = pm.Dirichlet('theta', a=theta0, shape=(1, 4))

    # Likelihoods (sampling distributions) of observations
    p = pm.math.sigmoid(c)  # logistic link
    L1 = pm.Binomial('L1', n=N_data, p=p, observed=observed_successes)
    p2 = pm.math.sum(L1 * theta, axis=1)  # SHAPE: (4,)
    L2 = pm.Binomial('L2', n=N_data2, p=p2, observed=successes2)
```
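As a sanity check on shapes, the Theano test values that PyMC3 attaches to tensors built inside a model context can be printed directly. A quick sketch (if test values are disabled in your setup, set `theano.config.compute_test_value = 'warn'` before building the model):

```python
# Sketch: inspect the shapes Theano assigned at the test point.
print(c.tag.test_value.shape)      # expect (4, 4)
print(theta.tag.test_value.shape)  # expect (1, 4)
print(p2.tag.test_value.shape)     # expect (4,) after the axis=1 sum
```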
When I run `model.check_test_point()`, I get:

```
L2                            -inf
L1                    -7855.870000
c                       -51.540000
hyper_mu                 -3.220000
hyper_sd_log__           -4.660000
theta_stickbreaking__    -3.750000
Name: Log-probability of test_point, dtype: float64
```
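One way to localize the `-inf` is to evaluate the intermediate `p2` at the test point. A minimal sketch using `model.fastfn` (a PyMC3 helper that compiles a Theano function taking a point dict), assuming the `model` defined above:

```python
# Sketch: evaluate p2 at the model's default test point.
# A Binomial logp is -inf whenever any entry of p falls outside [0, 1].
eval_p2 = model.fastfn(p2)
print(eval_p2(model.test_point))
```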
This seems to suggest that the initialisation of `L2` is bad, but I am not sure how to correct it. I am certain that the likelihood is given by `sum(L1 * theta, axis=1)`, and I am passing the correct data (`N_data2`, `successes2`) in the right shape, `(4,)`.
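As a value-range check in plain NumPy, with hypothetical stand-in numbers (the real `observed_successes` and Dirichlet draw would differ): if `L1` holds raw success counts, then `sum(L1 * theta, axis=1)` is a weighted combination of counts, which can easily exceed 1 and would then not be a valid Binomial probability.

```python
import numpy as np

# Hypothetical counts standing in for observed_successes (4 x 4).
counts = np.array([[30, 12,  8, 50],
                   [22, 40, 15,  9],
                   [ 5, 18, 33, 27],
                   [44,  6, 21, 13]])
theta = np.full((1, 4), 0.25)       # a flat Dirichlet draw
p2 = (counts * theta).sum(axis=1)   # same expression as in the model
print(p2)                           # [25.   21.5  20.75 21.  ] -- all outside [0, 1]
```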