Memory spike at the end of MCMC sampling

I was wondering if there is an updated approach to handling this memory spike when calculating the log_likelihood after sampling has completed. I've tried adding the following line to enable Dask, but I still run out of memory while computing the log_likelihood: `az.Dask.enable_dask(dask_kwargs={"dask": "parallelized"})`
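
For context, a minimal sketch of how I'm using it (assuming PyMC >= 5, where `pm.compute_log_likelihood` is the step that runs after sampling; `model` is a placeholder for my actual model, and the Dask `Client` setup is my best guess at what's needed for the parallelized backend):

```python
import arviz as az
import pymc as pm
from dask.distributed import Client

# Start a local Dask scheduler so the "parallelized" computation has workers.
client = Client()

# Ask ArviZ to route its xarray computations through Dask.
az.Dask.enable_dask(dask_kwargs={"dask": "parallelized"})

with model:  # placeholder for my actual model
    idata = pm.sample()
    # This is where memory blows up: the pointwise log_likelihood is
    # evaluated for every posterior draw after sampling has finished.
    pm.compute_log_likelihood(idata)
```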