But when I run this script, it tells me: “ElemwiseCategorical is deprecated, switch to CategoricalGibbsMetropolis”.
When I do that, it tells me that I cannot use DiscreteUniform but have to use either a binary or a categorical distribution, so I switched to Bernoulli. The changes I made are the following:
from: ch12_model_index = pm.DiscreteUniform('model_index', lower=0, upper=1)
to: ch12_model_index = pm.Bernoulli('model_index', p=0.5)
from: ch12_step = pm.ElemwiseCategorical(vars=[ch12_model_index], values=[0,1])
to: ch12_step = pm.CategoricalGibbsMetropolis(vars=[ch12_model_index]) #, order=[0,1]
I am running the sampling process with 4 chains:
ch12_trace = pm.sample(4000, step=ch12_step, random_seed=42, chains=4, tune=1000) #, nuts_kwargs=dict(target_accept=.85)
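To make the setup easier to follow, here is a stripped-down, self-contained version of how the whole block looks now. The data and the two competing likelihoods are just placeholders standing in for my actual model:

import numpy as np
import pymc3 as pm

data = np.array([0, 1, 1, 0, 1, 1, 1, 0])      # placeholder data

with pm.Model() as ch12_model:
    # discrete model index: 0 -> first model, 1 -> second model
    ch12_model_index = pm.Bernoulli('model_index', p=0.5)

    # placeholder parameters/likelihood -- in my real model the two competing
    # likelihoods sit here, selected via the model index
    theta_0 = pm.Beta('theta_0', 1., 1.)
    theta_1 = pm.Beta('theta_1', 2., 2.)
    theta = pm.math.switch(pm.math.eq(ch12_model_index, 1), theta_1, theta_0)
    y = pm.Bernoulli('y', p=theta, observed=data)

    # the step method assignment I am asking about
    ch12_step = pm.CategoricalGibbsMetropolis(vars=[ch12_model_index])
    ch12_trace = pm.sample(4000, step=ch12_step, random_seed=42,
                           chains=4, tune=1000)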
Now I run into the problem that model_index is stuck at 1 and I don’t get a “model comparison” solution (like the mean of 0.26 I got before). Even if I keep ElemwiseCategorical and only switch from DiscreteUniform to Bernoulli, I get several indicators that the model is not converging, e.g. Rhat values away from 1.0 and n_eff below 200.
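In case it matters, this is how I am looking at those diagnostics (continuing from the ch12_trace above):

# convergence diagnostics: Rhat and n_eff are part of the summary table
print(pm.summary(ch12_trace))
pm.traceplot(ch12_trace)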
How would I correctly replace the deprecated ElemwiseCategorical with CategoricalGibbsMetropolis here?
And in general: are there more examples somewhere of how to correctly do model comparison with pymc3?
Thanks a lot! Any hints in any direction are very welcome!
Christian
Are there some examples of how to rewrite the model into a mixture model? (A rough guess at what I mean is sketched below.) And in general: are there more examples somewhere of how to do model comparison (which needs some latent discrete node) with pymc3?
Actually, this was always one of the reasons why I prefer pymc3 over stan: pymc3 can deal with discrete nodes.
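To make the first question more concrete, the direction I was guessing at is something like the following. The data and the two Bernoulli likelihoods are placeholders for my real model, and I am not at all sure this is the intended way:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

data = np.array([0, 1, 1, 0, 1, 1, 1, 0])       # placeholder data

with pm.Model() as marginalized_model:
    # parameters of the two candidate models (placeholders for my real ones)
    theta_0 = pm.Beta('theta_0', 1., 1.)
    theta_1 = pm.Beta('theta_1', 2., 2.)

    # log-likelihood of the data under each candidate model
    logp_0 = pm.Bernoulli.dist(p=theta_0).logp(data).sum()
    logp_1 = pm.Bernoulli.dist(p=theta_1).logp(data).sum()

    # marginalize the Bernoulli(0.5) model index out analytically:
    # log p(y) = logsumexp(log 0.5 + logp_0, log 0.5 + logp_1)
    pm.Potential('marginal_loglike',
                 pm.math.logsumexp(tt.stack([np.log(0.5) + logp_0,
                                             np.log(0.5) + logp_1])))

    trace = pm.sample(2000, tune=1000, chains=4, random_seed=42)

If I understand the marginalization correctly, the posterior model probabilities would then be recovered per draw from the two log-likelihood terms, instead of being read off a sampled model_index.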
LOO is Leave One Out Cross Validation and WAIC is Watanabe–Akaike Information Criterion, correct?
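Just to check that I understand how they would be applied here: would the comparison then look roughly like the following? The two models are placeholders, and the exact pm.compare signature seems to depend on the PyMC3 version:

import numpy as np
import pymc3 as pm

data = np.random.binomial(1, 0.7, size=50)   # placeholder data

with pm.Model() as model_a:                  # placeholder for my first candidate model
    theta = pm.Beta('theta', 1., 1.)
    pm.Bernoulli('y', p=theta, observed=data)
    trace_a = pm.sample(2000, tune=1000, random_seed=42)

with pm.Model() as model_b:                  # placeholder for my second candidate model
    theta = pm.Beta('theta', 5., 5.)
    pm.Bernoulli('y', p=theta, observed=data)
    trace_b = pm.sample(2000, tune=1000, random_seed=42)

print(pm.waic(trace_a, model_a))             # WAIC for a single model
print(pm.loo(trace_a, model_a))              # PSIS-LOO for a single model

# side-by-side comparison table; the expected argument format of pm.compare
# (here a {model: trace} dict) seems to differ between PyMC3 versions
print(pm.compare({model_a: trace_a, model_b: trace_b}))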
Then you’re basically saying that Bayesian model comparison via a hierarchical model is “out”? Is this because of technical difficulties (like the ones I ran into) or for some fundamental theoretical reason?
Thanks!
Christian
This basically answers my question even more directly: how to get to the marginal model probabilities and then to the Bayes factors. Great blog posts!!
By the way: at the bottom of that post you say: “I guess it is time to double check the marginal likelihood estimation in SMC.” Did you find the root cause of why the different methods deliver differing results? Just out of curiosity. My core question is answered now.
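In case someone else lands on this thread: the way I understand the post, the marginal likelihoods (and from them the Bayes factor) would come out of SMC roughly like this. The models are placeholders, and the attribute name and shape of the reported log marginal likelihood seem to differ between PyMC3 versions:

import numpy as np
import pymc3 as pm

data = np.random.binomial(1, 0.7, size=50)    # placeholder data

with pm.Model() as model_a:                   # placeholder candidate model A
    theta = pm.Beta('theta', 1., 1.)
    pm.Bernoulli('y', p=theta, observed=data)
    trace_a = pm.sample_smc(2000)

with pm.Model() as model_b:                   # placeholder candidate model B
    theta = pm.Beta('theta', 5., 5.)
    pm.Bernoulli('y', p=theta, observed=data)
    trace_b = pm.sample_smc(2000)

# SMC estimates the log marginal likelihood as a by-product of sampling;
# averaging handles the case where it is reported per chain
lml_a = np.mean(trace_a.report.log_marginal_likelihood)
lml_b = np.mean(trace_b.report.log_marginal_likelihood)

bayes_factor_ab = np.exp(lml_a - lml_b)       # Bayes factor of model A over model B
print(bayes_factor_ab)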
Also, be careful what you wish for when using Bayes factors. I remain skeptical about how accurately they can be computed, and about their usefulness.