Thanks, @bwengals, for the response, and apologies for the delay in answering.

`grid` is a thin wrapper object that keeps track of the GP solutions `(cov[i], alpha[i])` for each observable `i`. `grid.range()` returns the min and max of the coordinate array for a given dimension of the GP:
```python
def range(self, paramname):
    gridvals = np.array(self.grid_yaml_cfg['gridprops'][paramname]['vals'])
    return (gridvals.min(), gridvals.max())
```
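For context, here is a minimal stand-in for the config that `range()` reads. The `gridprops` structure is inferred from the snippet above, and the parameter names are hypothetical:

```python
import numpy as np

# Hypothetical stand-in for grid.grid_yaml_cfg, matching only the keys
# that range() touches (parameter names invented for illustration)
grid_yaml_cfg = {
    'gridprops': {
        'param_a': {'vals': [-1.0, -0.5, 0.0, 0.2]},
        'param_b': {'vals': [0.1, 1.0, 10.0]},
    }
}

def grid_range(paramname):
    # Same logic as grid.range(): min/max of the coordinate array
    gridvals = np.array(grid_yaml_cfg['gridprops'][paramname]['vals'])
    return (gridvals.min(), gridvals.max())

print(grid_range('param_a'))  # (-1.0, 0.2)
```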
`grid.predictt()` basically stacks the outputs from each observable's GP:
```python
def predictt(self, X1):
    results_cols = tuple(gp_grid.gp_predictt(cov, alpha, self.X0, X1)
                         for cov, alpha in zip(self.covs, self.alphas))
    return theano.tensor.stack(results_cols, axis=1)
```
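In NumPy terms, the shapes involved look like the sketch below. The internals of `gp_grid.gp_predictt` are assumed here (a mean-only conditional using the precomputed `alpha`, with a shared cross-covariance for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_new, n_obs = 100, 20, 5

# Stand-ins for the per-observable GP solutions stored on the grid object:
# alpha[i] = K(X0, X0)^{-1} y_i for observable i (covs omitted in this sketch)
K_s = rng.normal(size=(n_train, n_new))           # K(X0, X1), shared here
alphas = [rng.normal(size=n_train) for _ in range(n_obs)]

# gp_predictt analogue: conditional mean K(X1, X0) @ alpha per observable
results_cols = tuple(K_s.T @ alpha for alpha in alphas)

# the theano.tensor.stack(..., axis=1) step: one column per observable
stacked = np.stack(results_cols, axis=1)
print(stacked.shape)  # (20, 5)
```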
However… I ran some other experiments late last week in which I decreased the number of observables to 2 or 3. If most of the time were being spent in the GP evaluation, I'd expect the model to sample noticeably faster. In fact, the speed was essentially identical. That could be a red herring, but it makes me wonder whether the model (rather than the GP) is the bottleneck.
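One way to check that hypothesis is to profile a single evaluation and see where the time actually goes (with theano, compiling the function with `profile=True` gives a per-op breakdown). The stdlib sketch below illustrates the same idea on toy stand-ins; all names here are hypothetical:

```python
import cProfile
import pstats
import numpy as np

rng = np.random.default_rng(0)

def gp_part(n_obs, K_s, alphas):
    # cost linear in the number of observables, like predictt() above
    return np.stack([K_s.T @ a for a in alphas[:n_obs]], axis=1)

def model_part(preds):
    # stand-in for the rest of the model's logp, independent of n_obs
    return float(np.sum(preds ** 2))

K_s = rng.normal(size=(200, 50))
alphas = [rng.normal(size=200) for _ in range(8)]

prof = cProfile.Profile()
prof.enable()
for _ in range(200):
    model_part(gp_part(8, K_s, alphas))
prof.disable()

# If the GP were the bottleneck, gp_part would dominate cumulative time here
pstats.Stats(prof).sort_stats('cumulative').print_stats('gp_part|model_part')
```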