Understanding the root cause of high memory utilization

Thanks Ricardo - I’d like to understand a bit better which factors affect the size of the computational graph. I take it the number of parameters is one, but do the following affect it as well? (I’ve sketched where each knob shows up in code after the list.)

  • Number of tune steps
  • Number of draws
  • Number of observations
  • Number of levels, for the same number of parameters (i.e. is dimensionality relevant in this way?)
  • Is logp included only if we’re returning the log likelihood, or is it needed anyway?
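
To make this concrete, here’s a minimal sketch of where each of those knobs appears (assuming this is about a PyMC model; the model structure, variable names, and sizes are all made up for illustration, not taken from my actual code):

```python
import numpy as np
import pymc as pm

# Hypothetical sizes - purely illustrative
n_obs = 1_000    # number of observations
n_levels = 20    # number of levels in the grouping factor

rng = np.random.default_rng(0)
group_idx = rng.integers(0, n_levels, size=n_obs)
y = rng.normal(size=n_obs)

with pm.Model() as model:
    # One vector parameter whose length equals the number of levels
    group_mu = pm.Normal("group_mu", mu=0, sigma=1, shape=n_levels)
    sigma = pm.HalfNormal("sigma", sigma=1)
    pm.Normal("y", mu=group_mu[group_idx], sigma=sigma, observed=y)

    idata = pm.sample(
        draws=1_000,  # number of draws kept per chain
        tune=1_000,   # number of tuning steps
        # whether the pointwise log likelihood is stored in the result
        idata_kwargs={"log_likelihood": False},
    )
```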

Just trying to understand what degrees of freedom I have :slight_smile:

Thanks again!