@ricardoV94 Thanks for catching that. Since I am exponentiating the noising process now, I think it would now be safe to switch over to a normally-distributed γ? With the exponentiated version, the Gamma prior would need a negative coefficient (as in the commented-out gamma lines below), so the Normal seems more intuitive to me.
@cluhmann I have tried increasing the variance of δ, with a range of values up to sigma=1. Nothing works; the bad initial energy error is still thrown. Because of the exponentiation, I have to keep δ pretty small.
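To make that last point concrete, here is a quick standalone numpy check (using the same simulated-data constants as the model below; the larger δ values are just for illustration) of how the noise scale exp(γ + δ·x) behaves across the x range. With x running to 3500, even a modestly larger |δ| drives the scale to numerically degenerate values:

```python
import numpy as np

x = np.arange(1, 3501, 20)
gamma = np.log(0.2)  # initial spread of the noise, ~0.2 near x = 0

# sweep delta from the working value to jitter-scale values
for delta in (-0.0004, -0.01, -1.0):
    sigma = np.exp(gamma + delta * x)
    print(f"delta={delta}: sigma ranges {sigma.min():.3g} to {sigma.max():.3g}")
```

At δ = −0.0004 the scale stays in a sane range (~0.05 to ~0.2), at δ = −0.01 it collapses to ~1e-16, and at δ = −1 it underflows to exactly zero, which makes the Normal likelihood degenerate.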
Latest version, still throwing errors:
```python
import numpy as np
import pymc3 as pm

# simulate data with exponentially tightening noise
xx = np.arange(1, 3501, 20)
yy_real = xx / (100 + xx)
γ_mean = np.log(0.2)
δ_mean = -0.0004
a = lambda x: np.exp(δ_mean * x + γ_mean)
ee = np.random.normal(0, a(xx))
yy = yy_real + ee

with pm.Model() as model_vv:
    γ = pm.Normal('γ', mu=-1.5, sigma=0.5)      ## analogous to initial spread of variance (take natural log)
    #γ = pm.Gamma('γ', mu=1.5, sigma=0.5)       ## gamma version
    δ = pm.Normal('δ', mu=-0.0004, sigma=0.05)  ## slope, rate of tightening of variance
    κ = pm.Normal('κ', mu=100, sigma=10)        ## same old κ
    μ = pm.Deterministic('μ', xx / (xx + κ))
    ε = pm.Deterministic('ε', pm.math.exp(γ + δ * xx))
    #ε = pm.Deterministic('ε', pm.math.exp(-γ + δ * xx))  ## use with gamma, neg coefficient
    y_pred = pm.Normal('y_pred', mu=μ, sd=ε, observed=yy)
    trace_g = pm.sample(2000, tune=1000)
```
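One way to probe where the energy goes bad, without running the sampler, is to evaluate the observed-variable log-density by hand, the way `model_vv.check_test_point()` reports it. Below is a standalone numpy sketch of that check (same simulated data as above; the δ = 1.0 value is only meant to be jitter-scale, not a real start point). The likelihood is finite at the prior means, but a jitter-sized perturbation of δ overflows exp(γ + δ·x) and sends the log-density to -inf, which would produce exactly this kind of initial-energy failure:

```python
import numpy as np

# same simulated data as in the model above
xx = np.arange(1, 3501, 20)
yy_real = xx / (100 + xx)
yy = yy_real + np.random.normal(0, np.exp(np.log(0.2) - 0.0004 * xx))

def obs_logp(γ, δ, κ):
    """Total Normal log-density of yy given the model's μ and ε."""
    μ = xx / (xx + κ)
    ε = np.exp(γ + δ * xx)
    return np.sum(-0.5 * ((yy - μ) / ε) ** 2 - np.log(ε) - 0.5 * np.log(2 * np.pi))

print(obs_logp(-1.5, -0.0004, 100.0))  # finite at the prior means
print(obs_logp(-1.5, 1.0, 100.0))      # jitter-scale δ: exp overflows, logp is -inf
```

Note this covers only the observed variable; `check_test_point()` also reports the prior terms, but those are all well-behaved Normals here.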