I am attempting to estimate the conversion rate over a number of groups (in the 1000s).
My data is stored as an array of arrays, where each inner array is of variable length, as each group has a different number of observations.
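For concreteness, the ragged structure looks something like this (the group sizes and values here are invented for illustration):

```python
import numpy as np

# Hypothetical ragged data: one array of 0/1 outcomes per group,
# with a different number of observations in each group.
obs_data = [
    np.array([1, 0, 0, 1, 1]),        # group 0: 5 observations
    np.array([0, 0, 1]),              # group 1: 3 observations
    np.array([1, 1, 0, 0, 1, 0, 1]),  # group 2: 7 observations
]

# The inner arrays cannot be stacked into a rectangular matrix,
# which is why a simple vectorized observed array does not apply directly.
lengths = [len(a) for a in obs_data]
print(lengths)  # [5, 3, 7]
```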
In one related issue, the observations were all of the same length and could therefore be stacked; in another, a new variable was created for each observation.
My current solution has been to do this:
    import pymc3 as pm

    traces = []
    for i in range(len(obs_data)):
        with pm.Model() as model:
            p = pm.Uniform('p', 0, 1)
            obs = pm.Bernoulli('obs', p, observed=obs_data[i])
            trace = pm.sample()
        traces.append(trace['p'])
which does work, but it is inefficient, since it rebuilds and recompiles a separate model for every group. Am I using the wrong tool for this job?