Loglikelihood of posterior predictive samples

Calculating logp is generally going to be slower than taking random draws, which is what sample_posterior_predictive does. Iterating over the trace is not so bad, just make sure you cache the compiled logp function: every time you call testmodel.datalogpt.eval() it compiles the same function all over again.

You might want to try something like this (assuming you aren’t doing it already):

# Compile the observed-data logp function once, then reuse it for every draw.
datalogp_fn = testmodel.fastfn(testmodel.datalogpt)
flattened_trace_numpy = ...
# "a" stands in for your model's actual variable name(s); the point dict
# passed to the compiled function needs an entry for each free variable.
logp = [datalogp_fn({"a": trace_value}) for trace_value in flattened_trace_numpy]

Apologies for the rough pseudo-code; you will need to substitute your model's actual variable names and trace handling.
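To make the caching idea concrete without depending on PyMC, here is a minimal NumPy sketch of the same pattern: build the data log-likelihood function once, then map it over posterior draws. The Normal model, the parameter names "mu" and "sigma", and the fake trace are all assumptions for illustration, not anything from your model.

```python
import numpy as np

def make_datalogp_fn(observed):
    """Build (once) a function that returns the total log-likelihood
    of `observed` under a Normal(mu, sigma) model. Stands in for the
    compiled testmodel.fastfn(testmodel.datalogpt) above."""
    obs = np.asarray(observed, dtype=float)

    def datalogp(point):
        # `point` mirrors the {"a": trace_value} dict in the snippet
        # above; the parameter names "mu"/"sigma" are assumed here.
        mu, sigma = point["mu"], point["sigma"]
        return np.sum(
            -0.5 * np.log(2 * np.pi)
            - np.log(sigma)
            - 0.5 * ((obs - mu) / sigma) ** 2
        )

    return datalogp

observed = [0.1, -0.4, 0.3]
datalogp_fn = make_datalogp_fn(observed)  # compile/cache once

# Fake "trace" of posterior draws; with PyMC this would come from sampling.
trace = [{"mu": 0.0, "sigma": 1.0}, {"mu": 0.1, "sigma": 0.9}]
logp = [datalogp_fn(point) for point in trace]
```

The point of the closure is the same as caching the compiled Theano function: the expensive setup happens once, and the per-draw loop only does cheap evaluations.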