Memory spike at the end of the MCMC sampling

I am also curious about updates that address this problem. Also, could this be the same cause of the out-of-memory errors that occur during the “transforming variables” step after sampling with the NumPyro JAX backend?

Lastly, just to confirm: when @OriolAbril says “The default is to store such data because it is required for loo/waic calculation and further model comparison,” does that mean we would no longer be able to compare these models against others using LOO or WAIC if we set idata_kwargs={"log_likelihood": False}?