Hello, I am using PyMC v5.11.0. I am interested in performing inference on the variables a and b together with an ensemble of {c} and {d} variables, i.e. 2N + 2 variables in total. I have a uniform prior on all variables except the {c} ones, which get a log-normal prior.
```python
import pymc as pm

dimension_names = ['Dim ' + str(i) for i in range(N)]

with pm.Model(coords={"vec1": dimension_names,
                      "vec2": dimension_names}) as model:
    a = pm.Uniform('a', lower=0.1, upper=1)
    b = pm.Uniform('b', lower=-2, upper=1)
    c = pm.LogNormal('c', mu=-3.7, sigma=0.6, dims="vec1")
    d = pm.Uniform('d', lower=-0.005, upper=0.005, dims="vec2")

    mean1 = f(a, b, c, d)
    mean2 = g(c, d)
    mean3 = h(a, b, d)

    obs1 = pm.MvNormal('obs1', mu=mean1, cov=cov1, observed=d1)
    obs2 = pm.MvNormal('obs2', mu=mean2, cov=cov2, observed=d2)
    obs3 = pm.MvNormal('obs3', mu=mean3, cov=cov3, observed=d3)

    # pm.fit returns an Approximation (mean-field ADVI by default),
    # from which posterior samples are then drawn with .sample()
    approx = pm.fit(300)
    trace_samples = approx.sample(3000)
```
Here f, g, and h represent some functions of the model variables.
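To keep the snippet self-contained, here are toy stand-ins for them (not my real functions, just hypothetical placeholders with the right shapes, so that each mean is a length-N vector matching cov1, cov2, cov3):

```python
# Hypothetical placeholders -- my actual f, g, h are more involved.
def f(a, b, c, d):
    return a * c + b * d   # length-N vector

def g(c, d):
    return c + d           # length-N vector

def h(a, b, d):
    return a + b * d       # length-N vector
```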
However, the {c} and {d} posteriors I recover from trace_samples look like different samples from the same distribution for all N variables, whereas I would expect each of the {c} and {d} posteriors to be centered on a different value. At the same time, the posteriors on a and b are Gaussians centered on the midpoint of the uniform prior interval. I attach a screenshot of the results. I am using fit instead of sample to speed up the inference, according to this.
I think the former problem points to an implementation issue on my side, so I was wondering what I may be doing wrong. I implemented the above according to this.