For loop vs shape parameter

I am working on an API-accessible sampling procedure, where a user can input priors and observed data and obtain posterior distributions.

Lets say we have 100 different observed arrays and priors, all to be sampled at the same time. Is there a difference between using

prior_array = pm.Normal("n", mu=priors, shape=len(priors))
pm.Normal("likelihood", mu=prior_array, observed=observed_arrays)  # len(observed_arrays) == len(priors) == 100

and doing a for loop where we specify one prior and one likelihood for each array:

for i in range(100):
    prior = pm.Normal("n" + str(i), mu=priors[i])
    pm.Normal("likelihood" + str(i), mu=prior, observed=observed_arrays[i])

Yes, the loop approach is usually much slower: PyMC has to keep track of many more variables in the background (200 named variables instead of 2), and the model loses the benefits of vectorization.


I have a follow-up question. I am doing a sample_posterior_predictive with shape parameters on the priors, and separating the observed values belonging to each prior by indexing the priors to match a flattened observed array (originally an array of arrays of observed values).
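To make the setup concrete, here is a minimal sketch of the flattening and indexing pattern described above, using small made-up arrays (the data and group sizes are hypothetical). The resulting `idx` array is what you would pass as `mu=prior_array[idx]` in the likelihood:

```python
import numpy as np

# Hypothetical example: 3 priors, each with its own array of observed values.
observed_arrays = [np.array([1.0, 1.2]), np.array([2.1]), np.array([0.5, 0.4, 0.6])]

# Flatten the observations into one array, and build an index array that
# maps each flattened observation back to the prior it belongs to.
flat_observed = np.concatenate(observed_arrays)
idx = np.concatenate([np.full(len(a), i) for i, a in enumerate(observed_arrays)])

print(flat_observed)  # the 6 observations in one flat array
print(idx)            # [0 0 1 2 2 2]
```

In the model, `pm.Normal("likelihood", mu=prior_array[idx], observed=flat_observed)` then pairs each flattened observation with its own prior.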

The return value from sample_posterior_predictive does not contain a separate posterior for each of the inferences being performed; it seems to have one trace per observed value. Is there a way to obtain a separate trace for each posterior?
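One way to recover per-prior traces from such an output is to split the sampled array by the same index array used to build the flattened likelihood. This is a sketch under the assumption that the posterior predictive samples come back as a 2-D array of shape (n_draws, n_observations); the `ppc` array below is a stand-in, not actual sampler output:

```python
import numpy as np

# Same index array that mapped flattened observations to priors.
idx = np.array([0, 0, 1, 2, 2, 2])

# Stand-in for ppc["likelihood"]: 4 posterior predictive draws over
# 6 flattened observations, shape (n_draws, n_observations).
ppc = np.arange(4 * 6, dtype=float).reshape(4, 6)

# Group the columns by prior: one array of draws per inference.
per_prior = [ppc[:, idx == i] for i in range(idx.max() + 1)]

print([a.shape for a in per_prior])  # [(4, 2), (4, 1), (4, 3)]
```

Each entry of `per_prior` then holds all draws for the observations belonging to one prior, which can be summarized separately.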