I am currently using a custom model implemented with DensityDist. However, I am running into an issue at the end of sampling, when PyMC converts the trace to an ArviZ InferenceData object: it re-evaluates the log-likelihood for every posterior sample as it saves them, which takes a significant amount of time. My custom likelihood function takes on the order of 1 ms per evaluation, so recalculating the log-likelihood of my data across the whole trace is prohibitively slow.
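For context, here is a minimal sketch of the kind of model I mean (the names and the toy logp are hypothetical stand-ins for my actual, much more expensive model):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=100)

# Toy stand-in for my expensive (~1 ms per call) custom log-likelihood.
# Returns elementwise values; PyMC sums them for the model logp.
def my_loglike(value, mu):
    return -0.5 * (value - mu) ** 2

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    # DensityDist wires the custom logp into the model
    pm.DensityDist("y", mu, logp=my_loglike, observed=data)
    # The slow per-sample log-likelihood recomputation happens at the
    # end of this call, during conversion to InferenceData
    idata = pm.sample()
```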
Does anyone know a workaround that will make PyMC save the log-likelihood to the InferenceData as it samples, rather than recalculating everything at the end?
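In case it helps frame the question: as far as I can tell, the recomputation can be skipped entirely via `idata_kwargs` (sketch below, assuming PyMC v4+), but that just omits the log-likelihood group rather than saving it during sampling:

```python
with model:
    # Skip the per-sample log-likelihood computation at the end of sampling
    idata = pm.sample(idata_kwargs={"log_likelihood": False})

# In recent PyMC versions it can still be computed explicitly later,
# e.g. if needed for LOO/WAIC:
# pm.compute_log_likelihood(idata, model=model)
```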