I'm trying to evaluate A/B test results following this example: What is A/B testing? — PyMC3 3.11.4 documentation
I have been struggling with the simulation of value conversions and kept getting SamplingError exceptions. At first I thought it came from an ill-suited choice of Gamma priors, but I found that even the simple Bernoulli conversion evaluation fails. Whatever priors I choose for the Beta passed into the Binomial, I get the same error. I also tried playing with the number of chains and cores, but still got the same error.
I finally noticed that it works perfectly fine on toy data with a small number of trials; as soon as I pass a high N, it fails. I even used the small example from the sample method in the PyMC docs (pymc.sampling — PyMC dev documentation) and still observe the same behavior. I tried pymc3 3.11.4 and 3.11.2 with the same results.
Any ideas what's going wrong?
Thanks all!
# example from the pymc3 documentation
import pymc3 as pm

# a high ratio makes the sampling fail
ratio = 1000
# with a small ratio it works fine
# ratio = 1
n = 100 * ratio
h = 61 * ratio
alpha = 2
beta = 2

with pm.Model() as model:  # context management
    p = pm.Beta("p", alpha=alpha, beta=beta)
    y = pm.Binomial("y", n=n, p=p, observed=h)
    trace = pm.sample()
---------------------------------------------------------------------------
SamplingError Traceback (most recent call last)
/tmp/ipykernel_30286/3176206272.py in <module>
      9     p = pm.Beta("p", alpha=alpha, beta=beta)
     10     y = pm.Binomial("y", n=n, p=p, observed=h)
---> 11     trace = pm.sample()

/data/anaconda/envs/py37_new/lib/python3.7/site-packages/pymc3/sampling.py in sample(draws, step, init, n_init, start, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, pickle_backend, **kwargs)
    426         start = deepcopy(start)
    427         if start is None:
--> 428             check_start_vals(model.test_point, model)
    429         else:
    430             if isinstance(start, dict):

/data/anaconda/envs/py37_new/lib/python3.7/site-packages/pymc3/util.py in check_start_vals(start, model)
    238                     "Initial evaluation of model at starting point failed!\n"
    239                     "Starting values:\n{}\n\n"
--> 240                     "Initial evaluation results:\n{}".format(elem, str(initial_eval))
    241                 )
    242
SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'p_logodds__': array(0., dtype=float32)}
Initial evaluation results:
p_logodds__ -0.98
y -inf
Name: Log-probability of test_point, dtype: float64
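One thing I notice: the starting values in the error are float32, and the -inf clearly comes from the observed Binomial term, not from the prior. Here is a small diagnostic I ran; I'm assuming model.check_test_point() is the right way to evaluate each term's logp at the starting point, and the scipy comparison at p = 0.5 is just my own sanity check, not from the tutorial:

# diagnostic: evaluate each term's logp at the starting point
import pymc3 as pm
import theano
from scipy import stats

print(theano.config.floatX)  # 'float32' on my setup, matching the traceback

ratio = 1000
n = 100 * ratio
h = 61 * ratio

with pm.Model() as model:
    p = pm.Beta("p", alpha=2, beta=2)
    y = pm.Binomial("y", n=n, p=p, observed=h)
    # reports the logp of every variable at the test point;
    # y comes back -inf, exactly what pm.sample() trips over
    print(model.check_test_point())

# same likelihood in double precision with scipy: finite, not -inf
print(stats.binom.logpmf(h, n, 0.5))

The scipy value is large and negative but finite (around -2.4e3), so the model doesn't look impossible at the test point. Could this be a float32 precision problem in the Binomial logp for large n? Would setting theano.config.floatX = "float64" before building the model be the right fix, or is there a better way?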