It depends – do you want to make predictions for all of the covariates? If yes, then yes, you will need to grow the B matrix. For example, if we have a 3rd equation we care to make predictions for, the system becomes:

x_i = a_x + \beta_{x,y} y_i + \beta_{x,z} z_i + \varepsilon_{x,i}
y_i = a_y + \beta_{y,x} x_i + \beta_{y,z} z_i + \varepsilon_{y,i}
z_i = a_z + \beta_{z,x} x_i + \beta_{z,y} y_i + \varepsilon_{z,i}
And then B = \begin{bmatrix} 0 & \beta_{x,y} & \beta_{x,z} \\ \beta_{y,x} & 0 & \beta_{y,z} \\ \beta_{z,x} & \beta_{z,y} & 0 \end{bmatrix}
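As a quick numerical sanity check (coefficient values here are made up purely for illustration), the implied means of the three-equation system, \mu = (I - B)^{-1} a, can be computed in a couple of lines of numpy:

```python
import numpy as np

# Purely illustrative coefficient values (not from any real model)
a = np.array([1.0, 2.0, 0.5])      # intercepts (a_x, a_y, a_z)
B = np.array([
    [0.0, 0.3, 0.1],               # x equation: 0, beta_{x,y}, beta_{x,z}
    [0.2, 0.0, 0.4],               # y equation: beta_{y,x}, 0, beta_{y,z}
    [0.1, 0.2, 0.0],               # z equation: beta_{z,x}, beta_{z,y}, 0
])

# mu = (I - B)^{-1} a; solve() is preferred over explicitly forming the inverse
mu = np.linalg.solve(np.eye(3) - B, a)

# The solution satisfies the fixed-point condition mu = a + B mu
assert np.allclose(mu, a + B @ mu)
```

Note the zero diagonal of B matches the matrix above: no variable appears on the right-hand side of its own equation.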
On the other hand, suppose you have some variables you don’t want conditional distributions for. Then you don’t need to grow the number of equations in the system, just add more covariates to the existing equations:

x_i = a_x + \beta_{x,y} y_i + \gamma_{x,1} z_{1,i} + \gamma_{x,2} z_{2,i} + \varepsilon_{x,i}
y_i = a_y + \beta_{y,x} x_i + \gamma_{y,1} z_{1,i} + \gamma_{y,2} z_{2,i} + \varepsilon_{y,i}
Now with some abuse of notation (sorry), we can define new vectors z = \begin{bmatrix} z_{1,i} \\ z_{2,i} \end{bmatrix} and x = \begin{bmatrix} x_i \\ y_i \end{bmatrix}, so we now have:

x = a + Bx + \Gamma z + \varepsilon
Where B is defined as before, and \Gamma = \begin{bmatrix} \gamma_{x,1} & \gamma_{x,2} \\ \gamma_{y, 1} & \gamma_{y,2} \end{bmatrix}. As before, take expectations and solve for \mathbb E[x] = \mu; you should get \mu = (I - B)^{-1}(a + \Gamma z). z will contain all of your extra features you don’t care about (except insofar as they explain x), and \Gamma is a matrix of linear coefficients linking these factors to the things you care about. You will also put your identifying restrictions into the \Gamma matrix (instead of the silly stuff I did) to make the model more interpretable (or maybe not!)
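To make the reduced form concrete, here is a minimal numpy sketch (all coefficient and covariate values invented for illustration) computing \mu = (I - B)^{-1}(a + \Gamma z):

```python
import numpy as np

# Invented coefficients, for illustration only
a = np.array([1.0, 2.0])                  # intercepts for the x and y equations
B = np.array([[0.0, 0.3],
              [0.2, 0.0]])                # endogenous coefficients (zero diagonal)
Gamma = np.array([[0.5, -0.1],
                  [0.0,  0.4]])           # gamma_{x,1} gamma_{x,2}; gamma_{y,1} gamma_{y,2}
z = np.array([1.5, 2.0])                  # exogenous covariates z_1, z_2

# mu = (I - B)^{-1}(a + Gamma z)
mu = np.linalg.solve(np.eye(2) - B, a + Gamma @ z)

# mu satisfies the structural equations in expectation: mu = a + B mu + Gamma z
assert np.allclose(mu, a + B @ mu + Gamma @ z)
```

Growing z only widens \Gamma; the inverse you take stays the size of the endogenous block, which is exactly why adding nuisance covariates is cheap.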
Of course you can also grow in both directions at the same time; that’s the beauty of the setup. But remember: for each new equation you need one more identifying restriction.