Those lines do the same thing as the ones highlighted here:

It is not so easy to pick apart the lines you highlighted unless you already know GPs well. In any case, here is my attempt at doing it:
1. The first PyMC line you highlighted creates the covariance-matrix-generating function, using the ExpQuad kernel with the inverse of rhosq and scaling the result by etasq. You could write your own method to replace ExpQuad: it simply builds an n x n matrix in which each entry is a function of the squared distance between X_i and X_j, scaled by the inverse of rhosq and passed through an exponential. The helper just aids in the construction of that matrix (a hand-rolled version appears in the second sketch after this list).
2. The second line tells PyMC that you are working with a latent GP, as opposed to a model where the likelihood itself is multivariate Gaussian. The distinction exists because different optimizations can be applied depending on whether the GP is latent or is the likelihood (but this is an API-level detail). This line creates a `gp` object that combines the covariance-matrix-generating function defined in 1. with a mean-generating function (the default sets it to zero, which is why you don't see it explicitly called in this line).
3. In the third line you simply pass input_data to your `gp` object, which uses it to generate samples from the multivariate Gaussian created by the GP.
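To make the three steps concrete, here is a minimal PyMC3 sketch of what lines like the highlighted ones typically look like. The priors and the input_data values are placeholders I made up, not the book's, and the exact mapping between rhosq and the length scale depends on the model's parametrization:

```python
import numpy as np
import pymc3 as pm

# Hypothetical stand-in for input_data; gp.prior expects a column vector.
input_data = np.linspace(0, 1, 10)[:, None]

with pm.Model() as model:
    # Placeholder priors for the kernel hyperparameters.
    etasq = pm.HalfCauchy("etasq", beta=1.0)
    rhosq = pm.HalfCauchy("rhosq", beta=1.0)

    # 1. Covariance-matrix-generating function: ExpQuad with the inverse
    #    of rhosq as the length scale, scaled by etasq.
    cov = etasq * pm.gp.cov.ExpQuad(input_dim=1, ls=1.0 / rhosq)

    # 2. A latent GP: only cov_func is passed, so the mean function
    #    defaults to pm.gp.mean.Zero().
    gp = pm.gp.Latent(cov_func=cov)

    # 3. Evaluating the GP prior at input_data draws from the
    #    multivariate Gaussian whose covariance the kernel builds.
    f = gp.prior("f", X=input_data)
```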
It helps me to think of GPs as a specific (yet very useful) way of parametrizing a multivariate Gaussian. The most challenging part is how to parametrize the covariance matrix, and the different kernels and scaling factors do exactly that. It may be helpful to compare this with other models that use multivariate Gaussians but are not GPs (they appear earlier in that same chapter).
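As a rough illustration of that view, and of the hand-written kernel mentioned in 1., here is a numpy-only sketch with made-up hyperparameter values. I use the etasq * exp(-rhosq * d^2) form here; whether rhosq multiplies the squared distance directly or enters through a length scale depends on the convention:

```python
import numpy as np

def my_exp_quad(X, etasq, rhosq):
    # Hand-written replacement for the ExpQuad helper: an n x n matrix
    # whose entries decay exponentially with the squared distance between
    # inputs, with rhosq controlling the decay and etasq the variance.
    d2 = (X[:, None] - X[None, :]) ** 2
    return etasq * np.exp(-rhosq * d2)

X = np.linspace(0, 1, 10)                 # fixed 1-D inputs
K = my_exp_quad(X, etasq=1.0, rhosq=5.0)  # made-up hyperparameters

# At fixed inputs, a draw from the GP prior is just a draw from a
# multivariate Gaussian with zero mean and the kernel-built covariance
# (a small jitter keeps the matrix numerically positive definite).
f = np.random.multivariate_normal(np.zeros(len(X)), K + 1e-6 * np.eye(len(X)))
```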
The reference docs will also become more understandable with time (at least for me they did!): https://docs.pymc.io/Gaussian_Processes.html