Systematic introduction to coords, dims and shapes

Hi dear Bayesians,

I worked my way through the Intuitive Bayes Introductory Course and the Practical MCMC Course and am now going through the Advanced Regression Course. I am really happy to have found such courses to get kickstarted in the Bayesian world; without them, the path would have been much more difficult and time-consuming.

I am, however, pretty confused about all the intricacies of coords, dims and shapes. It reminds me of the really difficult shape handling in DNNs, where you are sometimes more busy getting your structures into the right shape than with the actual modelling. This is kind of sad, because you would like to concentrate on the modelling (semantically) and not on making shapes fit each other.

In the Categorical Regression example presented by Alex Andorra, the model is conceptually pretty simple - just one categorical and one numerical predictor - but the number of coords, dims and shapes that have to be matched correctly to each other is mind-blowing, at least for me right now.
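
To make it concrete, here is roughly the kind of model I mean. All names and data are made up, and I am not even sure this is the idiomatic way to write it; I only want to show how many coords and dims are involved:

```python
import pandas as pd
import pymc as pm

# Made-up data: one categorical predictor, one numerical predictor,
# and a categorical outcome with three classes.
df = pd.DataFrame({
    "group": pd.Categorical(["a", "b", "a", "c", "b", "c"]),
    "x": [0.1, -0.5, 1.2, 0.3, -1.1, 0.8],
    "outcome": pd.Categorical(["low", "mid", "high", "low", "mid", "high"]),
})

coords = {
    "group": df["group"].cat.categories,      # levels of the categorical predictor
    "outcome": df["outcome"].cat.categories,  # classes of the response
    "obs": df.index,                          # one entry per observation
}

with pm.Model(coords=coords) as model:
    # one intercept per (group, outcome-class) pair, one slope per class
    intercept = pm.Normal("intercept", 0.0, 1.0, dims=("group", "outcome"))
    slope = pm.Normal("slope", 0.0, 1.0, dims="outcome")

    # linear predictor: shape (obs, outcome) after indexing and broadcasting
    eta = intercept[df["group"].cat.codes.to_numpy()] + slope * df["x"].to_numpy()[:, None]

    # softmax over the outcome axis turns eta into class probabilities
    p = pm.math.softmax(eta, axis=-1)

    y = pm.Categorical("y", p=p, observed=df["outcome"].cat.codes.to_numpy(), dims="obs")
```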

I would therefore like to ask whether there is a systematic introduction to the whole topic of coords, dims and shapes that explains it step by step for dummies like me - something like a series of notebooks or tutorials that covers all the concepts and details.

As this is my first question here, I hope it makes sense and I didn’t overlook anything basic :-).

Best regards
Matthias

There are two examples that might be nice to look at:

The first one is a skosh on the technical side in my opinion, so it might feel like you’re reading it underwater. The second one is actually about pm.Data, but it includes a lot of discussion on what dims are and why you might want to use them.

Shapes and broadcasting are really at the heart of all vectorized computing, so resources like the numpy docs on broadcasting are also good to look at.
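
To illustrate what those docs describe, here is a tiny NumPy example (the arrays are made up): shapes are aligned from the right, and size-1 axes are stretched to match.

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # shape (2, 3)
b = np.array([10, 20, 30])      # shape (3,)  -> aligned as (1, 3)
c = np.array([[100], [200]])    # shape (2, 1)

# Shapes are compared from the right; a size-1 axis is stretched to match.
print((a + b).shape)  # (2, 3): b is repeated along the rows
print((a + c).shape)  # (2, 3): c is repeated along the columns
```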

I will say that the Categorical distribution is sort of a unique case in PyMC (along with LKJCholeskyCov), because the dimension of the input parameters - that is, the vector of class probabilities p - does not match the dimension of the outputs, which is just a single scalar (an integer representing the class assignment). This is weird, and makes it harder to reason about what dims to pass in.
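
As a minimal sketch of that mismatch (the coordinate names here are made up): the class dimension belongs to the input p, while the dims you give the Categorical variable describe only the scalar draws, so the class dimension never appears in them.

```python
import numpy as np
import pymc as pm

coords = {
    "class": ["a", "b", "c"],  # dimension of the input p
    "obs": list(range(5)),     # dimension of the output draws
}

with pm.Model(coords=coords):
    # p lives on the "class" dimension...
    p = pm.Dirichlet("p", a=np.ones(3), dims="class")
    # ...but each draw of y is one integer, so its dims mention only
    # "obs"; the "class" dimension is consumed by the distribution.
    y = pm.Categorical("y", p=p, dims="obs")
```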


Categorical is a vector-to-scalar function; its signature is (p)->(). Once you understand the core case, extra dimensions always behave regularly.
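
For example (a sketch with made-up shapes), a leading batch dimension on p passes straight through to the draws, and only the core (p) dimension is consumed:

```python
import numpy as np
import pymc as pm

with pm.Model():
    # core case: p has shape (4,), each draw is a scalar: (p) -> ()
    y_scalar = pm.Categorical("y_scalar", p=np.full(4, 0.25))
    # batched case: p has shape (10, 4); the batch dimension passes
    # through unchanged: (10, p) -> (10,)
    y_batch = pm.Categorical("y_batch", p=np.full((10, 4), 0.25))

print(pm.draw(y_scalar).shape)  # ()
print(pm.draw(y_batch).shape)   # (10,)
```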

I would also suggest @OriolAbril’s blog post as well as my own follow-up.
