Latin Hypercube sampling

Hi

I have a situation where I am fitting parameters for a large model. A single model run can take days, so I can only afford a few samples.

For predictions I can use Latin Hypercube (LHC) sampling and improve the statistical results, especially for quantiles like P10/P90. The mean converges OK either way.
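To make the idea concrete, here is a minimal sketch of what I mean, using scipy's QMC module; the dimension, parameter ranges, and `run_model` are all made-up placeholders standing in for the actual expensive simulator:

```python
import numpy as np
from scipy.stats import qmc

# Draw a Latin Hypercube design over 3 uncertain parameters
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=50)  # 50 points in the unit cube

# Scale the unit cube to the (made-up) physical parameter ranges
lower = [0.1, 10.0, 0.5]
upper = [0.3, 50.0, 2.0]
params = qmc.scale(unit_samples, lower, upper)

# run_model is a hypothetical stand-in for the days-long simulation
# results = np.array([run_model(p) for p in params])
# p10, p90 = np.percentile(results, [10, 90])
```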

I guess LHC sampling breaks some basic assumptions of the Bayesian updating scheme. Still, has anyone attempted alternative sampling methods like LHC for Bayesian updating?


Have you considered Sequential Monte Carlo, which has an implementation in PyMC? I assume you are using LHC to speed up sampling by handling each dimension of your parameter space independently; I think SMC does something similar.
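For reference, here is a minimal SMC sketch in PyMC; the toy model and data are purely illustrative and just stand in for your expensive simulator:

```python
import pymc as pm

with pm.Model() as model:
    # Toy priors and likelihood in place of the real model
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=[1.2, 0.8, 1.5, 0.9])

    # SMC evolves a population of particles through a sequence of
    # tempered distributions from the prior to the posterior
    idata = pm.sample_smc(draws=1000)
```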

Separately, I am curious to learn how you are fitting LHC into the Bayesian framework if you are willing to share more about your project.

The thing is, I do not use LHC for parameter estimation, since its validity within Bayesian updating is dubious. The idea is rather that LHC would let me cover the parameter space more efficiently.

The question is whether this is a futile approach in the Bayesian context. This group has some really strong people, so someone must have tried this before.

I dug up this old thread where the OP includes code that appears to use LHC sampling. Maybe that provides some direction?