Avoiding looping when using GP prior on latent variables

@ckrapu I used your `pm.gp.LatentKron` idea and wrote out the following. It works, compiles, and samples fast; very clever trick! Leaving the code below so people can track the solution. `FixedMatrixCovariance` just returns a fixed matrix (here the identity).

I was also able to couple this with the trick for the case where the covariance matrix is a Kronecker product (it extends naturally to that case).
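For anyone wondering why the identity-Kronecker trick is cheap: if the stacked latent vector has covariance `kron(I_n, K)`, the blocks are independent GPs sharing the kernel `K`, and the Cholesky factor of the big matrix is just `kron(I_n, chol(K))`. A minimal NumPy sketch (not from the original model; sizes and lengthscale are made up for illustration):

```python
import numpy as np

n = 3                                    # number of independent series
x = np.linspace(0, 1, 5)[:, None]        # 5 inputs per series
# ExpQuad kernel plus a small jitter for numerical stability
K = np.exp(-0.5 * (x - x.T) ** 2 / 0.3**2) + 1e-6 * np.eye(5)

# Covariance of the stacked latent vector under the Kronecker structure
big = np.kron(np.eye(n), K)

# Block-diagonal structure: cross-series blocks are exactly zero
assert np.allclose(big[:5, 5:10], 0.0)

# Cholesky factorizes block-wise: chol(kron(I, K)) == kron(I, chol(K))
L_big = np.linalg.cholesky(big)
L_K = np.linalg.cholesky(K)
assert np.allclose(L_big, np.kron(np.eye(n), L_K))
```

So the cost of sampling scales with a single `chol(K)` rather than a Cholesky of the full stacked covariance, which is what `LatentKron` exploits.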

```python
import aesara.tensor as at
import pymc as pm


class FixedMatrixCovariance(pm.gp.cov.Covariance):
    r"""
    Fixed matrix covariance function.

    .. math::

       k(x_i, x_j) = \Omega_{ij}
    """

    def __init__(self, omega):
        super().__init__(input_dim=1)
        self.omega = omega

    def diag(self, X):
        # Diagonal of the fixed matrix; the inputs X are ignored
        return at.diag(self.omega)

    def full(self, X, Xs=None):
        # The matrix is fixed, so it is returned regardless of the inputs
        return self.omega
```

```python
with pm.Model() as model:

    # == gamma(i, w) -> size = (I, 52)

    gamma_sigma = pm.Gamma("gamma_sigma", 1, 0.01)
    gamma_rho = 3.0
    cov_gamma = gamma_sigma * pm.gp.cov.ExpQuad(input_dim=1, ls=gamma_rho)
    cov_gamma += pm.gp.cov.WhiteNoise(1e-6)

    # pm.gp.LatentKron idea: kron(identity, cov_gamma) gives one
    # independent GP over the weeks per unit in I
    identity_matrix = at.eye(len(I))
    identity_cov = FixedMatrixCovariance(identity_matrix)
    gamma_gp = pm.gp.LatentKron(cov_funcs=[identity_cov, cov_gamma])
    gamma = gamma_gp.prior("gamma", Xs=I_W)
    gamma = gamma.reshape((len(W), len(I))).T

    trace = pm.sample(cores=4)
```
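As a quick shape sanity check outside PyMC, a draw from the Kronecker prior can be generated and reshaped in plain NumPy (sizes here are assumptions for illustration; the exact flattening order used by `pm.gp.LatentKron` should be checked against the model's own reshape):

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_w = 4, 52                         # assumed number of units and weeks
w = np.arange(n_w, dtype=float)[:, None]
# ExpQuad with lengthscale 3 plus jitter, matching the kernel in the model
K = np.exp(-0.5 * (w - w.T) ** 2 / 3.0**2) + 1e-6 * np.eye(n_w)

# Draw f ~ N(0, kron(I, K)) via the Kronecker Cholesky, then reshape:
# with this i-major ordering, each row is one independent weekly GP
L = np.kron(np.eye(n_i), np.linalg.cholesky(K))
f = L @ rng.standard_normal(n_i * n_w)
gamma = f.reshape((n_i, n_w))
assert gamma.shape == (n_i, n_w)
```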