Wow, thanks again for the quick reply!
I want to understand how this really works, because I was puzzled about how to handle this, and the lack of an “observed” variable had me confused.
If I understand the “Ordered2D” transform intuitively, it basically takes the first dimension of “latent” as-is, and then transforms the [1:] dimensions so that each one represents the (positive) increase in magnitude from one dimension to the next. So even though my mu_hat samples might be out of order, the “latent” samples will always come out ordered. I’ve put a small sketch of my understanding below.
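To check my intuition, here is a minimal sketch of what I think the backward (unconstrained-to-constrained) direction does, assuming Ordered2D works like the standard ordered transform applied along the last axis. The function name `ordered_backward` and the NumPy implementation are just my illustration, not the actual PyMC code:

```python
import numpy as np

def ordered_backward(y):
    """My understanding of the backward ordered transform.

    y: unconstrained values, shape (..., K)
    Returns x with the same shape, sorted along the last axis.
    """
    x = np.empty_like(y, dtype=float)
    # the first dimension passes through unchanged
    x[..., 0] = y[..., 0]
    # each later dimension adds a strictly positive increment exp(y_k),
    # so the output is strictly increasing by construction
    x[..., 1:] = y[..., :1] + np.cumsum(np.exp(y[..., 1:]), axis=-1)
    return x

# Even if the unconstrained draws are "out of order",
# each output row comes back strictly increasing:
y = np.array([[0.3, -1.2, 0.5],
              [2.0, 0.1, -0.7]])
print(ordered_backward(y))
```

If that sketch is right, the sorting happens deterministically inside the transform, regardless of what the unconstrained values look like.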
So my question is: why does the model care about the possible mis-ordering of mu_hat, if the Ordered2D transform corrects this by construction?
Sorry if my question doesn’t make any sense! I’ve greatly appreciated your help so far.