Hi all,
I have a custom likelihood and define four priors:
reaction_9 = pm.Uniform('reaction_parameters9', lower=-1, upper=1, initval=0)
Hm = pm.Uniform('H(m)', lower=-1, upper=1, initval=0)
O__x = pm.Uniform("O''(ox)", lower=-1, upper=1, initval=0)
Ox = pm.Uniform('Ox', lower=-1, upper=1, initval=0)
Then I run:
trace = pm.sample(1000, step=pm.Metropolis())
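Putting it all together, the structure of my model is roughly as follows (a sketch; my_custom_loglike is a stand-in for my actual black-box log-likelihood, wrapped as an Op so PyMC can use it):

import pymc as pm

with pm.Model() as model:
    reaction_9 = pm.Uniform('reaction_parameters9', lower=-1, upper=1, initval=0)
    Hm = pm.Uniform('H(m)', lower=-1, upper=1, initval=0)
    O__x = pm.Uniform("O''(ox)", lower=-1, upper=1, initval=0)
    Ox = pm.Uniform('Ox', lower=-1, upper=1, initval=0)

    # The custom log-likelihood enters the model through pm.Potential
    pm.Potential('loglike', my_custom_loglike(reaction_9, Hm, O__x, Ox))

    trace = pm.sample(1000, step=pm.Metropolis())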
This is the case in both PyMC3 and PyMC 5, and with both pm.Metropolis() and pm.NUTS(); the results are roughly the same.
My questions are:
1. Why do my posterior distributions fluctuate so much? Is there any way to smooth them?
2. When I sample with pm.NUTS(), I define the gradient of my log-likelihood function like this:
import numpy as np
import scipy.optimize

def der_log_likelihood(theta, data):
    # Close over the data so the log-likelihood is a function of theta only
    def lnlike(values):
        return current(values, data)

    # Finite-difference gradient with step size sqrt(machine epsilon)
    eps = np.sqrt(np.finfo(float).eps)
    grads = scipy.optimize.approx_fprime(theta[0], lnlike, eps * np.ones(len(theta)))
    return grads
Is this OK?
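As a minimal sanity check of the finite-difference scheme itself (not of my actual likelihood), approx_fprime can be compared against a known analytic gradient on a toy function:

import numpy as np
import scipy.optimize

# Toy function with a known gradient: f(x) = sum(x**2), so grad f = 2*x
def f(x):
    return np.sum(x ** 2)

x0 = np.array([0.3, -0.7, 1.2])
eps = np.sqrt(np.finfo(float).eps)

# Same call pattern as in der_log_likelihood above
num_grad = scipy.optimize.approx_fprime(x0, f, eps * np.ones(len(x0)))
print(num_grad)  # approximately [0.6, -1.4, 2.4]
print(2 * x0)    # analytic gradient for comparison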
3. My custom likelihood function contains for loops:
for i in range(len(test_potential)):
    test_potential[i] = test_potential[i] - 0.5507
logp = logp - len(data) * np.log(np.sqrt(2.0 * np.pi) * sigma)
for k in range(len(test_potential)):
    Energy = test_potential[k]
    ...
When I sample with pm.NUTS(), it is far too slow (8000 draws take 15+ hours). Should I rewrite the for loops with theano.scan, wrap the variables differently, or do something else to make sampling faster?
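For example, I think the first loop and the normalization term could be written without a Python loop, as in this sketch (with toy stand-in values for test_potential, data, and sigma; the work inside the second loop would need the same treatment):

import numpy as np

test_potential = np.linspace(0.0, 1.0, 5)  # toy stand-in values
data = np.zeros(5)                         # toy stand-in data
sigma = 1.0
logp = 0.0

# Shift all entries at once instead of looping element by element
test_potential = test_potential - 0.5507

# Gaussian normalization term (unchanged, already loop-free)
logp = logp - len(data) * np.log(np.sqrt(2.0 * np.pi) * sigma)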
4. I define the likelihood function like this:
logp += -np.sum((data[k] * 1000 - curr * 0.1) ** 2.0) / (2.0 * sigma ** 2.0)
Suppose I have k observed data points (array obs) under different conditions, and I get k simulated data points (array curr) from my model.
How can I define the likelihood directly instead?
I tried:
like = pm.Normal('like', mu=curr, sigma=1, observed_data=obs)
It does not seem to work.
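Reading the pm.Normal signature, I suspect the keyword should be observed rather than observed_data, so is the intended pattern something like this (a sketch; it assumes curr can be written as a symbolic expression of the priors, since a plain NumPy array from an external simulator would make the likelihood independent of the parameters)?

with pm.Model() as model:
    # curr must be a tensor expression built from the model's random
    # variables for the likelihood to depend on the parameters
    like = pm.Normal('like', mu=curr, sigma=1, observed=obs)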
Thanks for any answers; they will be really helpful.