Hi,
I’ve defined a model that fits my data fairly well:
import pymc as pm

def model(data):
    with pm.Model():
        # blah blah blah...  (priors and likelihood built from `data`)
        trace = pm.sample()
    return trace
The data is an array of ~(300, 2) floats. I would like to run this model over and over with different data, but there seems to be a lot of overhead before the actual sampling starts. Is there any way to cache that setup and just rerun the sampling?
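For context, here is roughly how I'm calling it at the moment (just a minimal sketch; `datasets` and the random arrays are placeholders for my real data):

import numpy as np

# Stand-in for my real batch of ~(300, 2) arrays
datasets = [np.random.rand(300, 2) for _ in range(10)]

traces = []
for data in datasets:
    # Each call rebuilds the model from scratch, so the pre-sampling setup is repeated every time
    traces.append(model(data))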