The main reason I am using the GP is to generate the 10000 data points from a small set of evenly spaced simulation data, as opposed to getting all 10000 points directly from simulations, which is very computationally expensive. Each property that is calculated has its own kernel, optimized by grid search; a rough sketch of that emulation step is below. I looked into the GP module, but I wasn't sure whether using it would mean running another GP on the data output by the GP model that was already built, and it seemed I would have to specify a single kernel for the method even though multiple kernels were used to create the posterior data set. I would like to calculate the probability distribution of each parameter as if the data set from the GP were simulation data. To reduce the amount of external bias on the Bayesian analysis of the parameter set, I am using a uniform prior. I'm not sure if this clarifies my situation, but I hope it does.
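To make the setup concrete, here is a minimal sketch (not my actual code) of what I mean by the emulation step, assuming a scikit-learn-style GP whose kernel is chosen by grid search; the kernel list, toy inputs, and the "property" being emulated are all placeholders:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
from sklearn.model_selection import GridSearchCV

# Small, evenly spaced set of simulation inputs and a stand-in for the simulated property
X_sim = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y_sim = np.sin(4.0 * np.pi * X_sim).ravel()

# Grid search over candidate kernels (one such search per property)
search = GridSearchCV(
    GaussianProcessRegressor(normalize_y=True),
    param_grid={"kernel": [RBF(), Matern(nu=1.5), RationalQuadratic()]},
    cv=5,
)
search.fit(X_sim, y_sim)

# Use the fitted GP to produce the dense (10000-point) emulated data set
X_dense = np.linspace(0.0, 1.0, 10000).reshape(-1, 1)
y_dense, y_std = search.best_estimator_.predict(X_dense, return_std=True)
```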
Also, when I got outputs from the code I posted earlier, it seemed that the parameters were only being drawn from the prior. I've been having a hard time figuring out exactly how to incorporate the parameter priors into the actual code, where the priors are the $p(\theta)$ term in Bayes' theorem: $p(\theta \mid D) \propto p(D \mid \theta)\,p(\theta)$.
I saw that in the black box example, the uniform priors were passed into the likelihood function using theano.tensor.as_tensor_variable, with observed=(parameters) on the likelihood. The likelihood there was custom, so the example uses DensityDist. Is there a way I can incorporate the parameters like in that example, but through MvNormal?
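For reference, this is roughly the structure I have in mind (only a sketch, with placeholder data and a placeholder mean function standing in for the emulator prediction): the uniform priors are the thetas, they enter the likelihood through the predicted mean, and observed= is the emulated data rather than the parameters. Please correct me if this is not how MvNormal should be used here.

```python
import numpy as np
import pymc3 as pm

# Placeholder stand-ins for the GP-emulated data set and its covariance
y_emulated = np.random.normal(size=50)
cov = 0.1 * np.eye(50)
x = np.linspace(0.0, 1.0, 50)

with pm.Model() as model:
    # Uniform priors on the parameters -- the p(theta) term
    theta1 = pm.Uniform("theta1", lower=0.0, upper=1.0)
    theta2 = pm.Uniform("theta2", lower=0.0, upper=1.0)

    # Placeholder mean: in practice this would be the model/emulator
    # prediction evaluated at (theta1, theta2)
    mu = theta1 + theta2 * x

    # The GP-emulated data enter as the observations, not the parameters
    pm.MvNormal("obs", mu=mu, cov=cov, observed=y_emulated)

    trace = pm.sample(1000, tune=1000)
```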
Thanks for the help so far; I didn't realize that putting bounds on the priors would make them informative, even though that seems pretty obvious now.