How to save only some of the variables while sampling?

I’m going to bump this, since it seems like a great feature to have, or at least a trick worth finding. Is there any way to define the distribution I want in PyTensor so that the variable is sampled and its logpdf evaluated, but its value is never stored in the trace?
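
For concreteness, the closest workaround I can see is to drop the variable from the posterior after sampling. This is just a rough sketch with a made-up toy model (the `latent` variable stands in for whatever we don't want to keep), and it doesn't solve the memory problem during sampling itself; I've also heard newer PyMC releases may accept a `var_names` argument to `pm.sample` that skips storage entirely, but check your version:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
data = rng.normal(size=100)

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    # Large latent vector that contributes to the logp
    # but that we don't want to keep in the trace.
    latent = pm.Normal("latent", mu, 1.0, shape=10_000)
    pm.Normal("obs", latent.mean(), 1.0, observed=data)

    idata = pm.sample(1000, tune=1000)

# drop_vars returns a new xarray Dataset without the big variable;
# the full trace still had to fit in memory while sampling, though.
slim = idata.posterior.drop_vars("latent")
slim.to_netcdf("posterior_slim.nc")
```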

One idea we had was to take the full set of sampled posterior points, discard the ones we don't need, and then resume sampling from the previous final points. For this we would need to carry forward some sampler state (the current tuning), although I suppose we could retune if it came to that.
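
Here's roughly what I mean, assuming the `initvals` argument of `pm.sample` and reusing the toy `model` from above (variable names like `thinned` are mine). As far as I know the tuned step size and mass matrix can't be passed back in, so this just retunes from the previous endpoints:

```python
import pymc as pm

with model:
    first = pm.sample(1000, tune=1000)

# Discard draws we don't need, e.g. thin by a factor of 10.
thinned = first.posterior.isel(draw=slice(None, None, 10))

# Build one start dict per chain from the final accepted points.
last = first.posterior.isel(draw=-1)
initvals = [
    {name: last[name].sel(chain=c).values for name in last.data_vars}
    for c in last.chain.values
]

with model:
    # Tuning state is not carried over, so this retunes
    # starting from the previous chains' endpoints.
    second = pm.sample(
        1000, tune=1000, chains=len(initvals), initvals=initvals
    )
```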

Would this work?
Opher