Negative binomial returning -inf logp

Thanks for the help. I have taken a closer look at the logp values returned, using the following code:

import numpy as np
import pymc3 as pm

test_values = np.arange(32752, 32772)
for v in test_values:
    with pm.Model() as model:
        mu = pm.Normal("mu", mu=0, sd=10)
        Y = pm.NegativeBinomial("Y", mu=2**mu, alpha=10, observed=v)

        # evaluate Y's logp at the model's test point
        for RV in model.basic_RVs:
            if RV.name == "Y":
                print(v, RV.logp(model.test_point))
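
As an independent cross-check (just a sketch, assuming the test point leaves mu at 0, so the distribution mean is 2**0 = 1), the same logp values can be computed directly with scipy, which uses the n/p parameterisation instead of mu/alpha:

import numpy as np
from scipy import stats

mu, alpha = 1.0, 10.0       # mean at the test point: 2**0 = 1
n = alpha                   # scipy's number-of-successes parameter
p = alpha / (mu + alpha)    # scipy's success probability

for v in np.arange(32752, 32772):
    print(v, stats.nbinom.logpmf(v, n, p))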
  1. Comparing the RVs “Y” and “mu”, it is always “Y” that has the problematic values; in this particular example, the logp of “mu” is -3.2 for all of the test values.
  2. Changing the hyperparameters of “mu” does change the initial logp value of “Y”, but only for observed values that return a valid logp for “Y” (-inf stays -inf). Changing alpha does not solve the problem either.
  3. I checked the logp over a large range of observed values, and which values return -inf seems quite random. Below are the logp values as I increase the observed value by one per iteration.
  • Observed values 32752 to 32757: logp about -78456
  • Observed values 32758 to 32766: logp is positive infinity
  • Observed value 32767: logp is nan
  • Observed values 32768 and larger: logp is negative infinity

For values much larger than 32768 the logp becomes valid again, and for even larger values it becomes -inf again. This pattern of valid and invalid logp values repeats over quite a large range of numbers.
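
To map out where the pattern breaks, one option (a sketch, assuming PyMC3's .dist()/.logp() API and the same mu=1, alpha=10 values as above) is to evaluate the distribution's logp over a whole range at once and collect the observed values with non-finite results:

import numpy as np
import pymc3 as pm

dist = pm.NegativeBinomial.dist(mu=1.0, alpha=10.0)
vals = np.arange(30000, 40000)
logps = dist.logp(vals).eval()   # compile and evaluate the Theano graph

bad = vals[~np.isfinite(logps)]  # observed values with inf/nan logp
print(len(bad), bad[:20])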

Could a rounding error be the source of this problem, given how random the pattern seems?
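
To test the rounding idea, the NB logp can be written out by hand in plain float64 with scipy's gammaln (a sketch of the standard mu/alpha formula, not PyMC3's actual code path); if this stays finite where the model returns -inf, the problem would sit in the compiled graph rather than in the math itself:

import numpy as np
from scipy.special import gammaln

def nb_logp(v, mu=1.0, alpha=10.0):
    # log pmf of the negative binomial in the mu/alpha parameterisation
    return (gammaln(v + alpha) - gammaln(alpha) - gammaln(v + 1)
            + alpha * np.log(alpha / (alpha + mu))
            + v * np.log(mu / (alpha + mu)))

for v in np.arange(32752, 32772):
    print(v, nb_logp(v))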