Hi @ricardoV94,
the original `logp` was:
```python
import numpy as np
import aesara.tensor as at

def logp(value, mu, r, k):
    # Truncated support of the latent abundance: n = 1, ..., 9999
    support = at.arange(1, 10000)
    # Unnormalized log-pmf over the whole support, used for normalization
    unnorm = (mu - 1 - r * support) * at.log(support) + r * (1 + k) * support
    log_norm = at.log(at.exp(unnorm).sum())
    return (mu - 1 - r * value) * at.log(value) + r * (1 + k) * value - log_norm
```
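Written out, with the support hard-coded to $n \in \{1, \dots, 9999\}$, this is

$$
\log p(n \mid \mu, r, k) = (\mu - 1 - r n)\log n + r(1+k)\,n - \log Z,
\qquad
Z = \sum_{m=1}^{9999} e^{(\mu - 1 - r m)\log m + r(1+k)\,m}.
$$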
To convolve it with the other distribution, I wrote this:
```python
import pymc

def convolution(values, mu, r, k, a, b):
    # Collapse repeated observations: evaluate the expensive logp once per
    # unique value and weight it by its multiplicity.
    unique, counts = at.extra_ops.unique(values, return_counts=True)
    support = at.arange(1, 10000)
    unnormlogp = (mu - 1 - r * support) * at.log(support) + r * (1 + k) * support
    logp = unnormlogp - at.log(at.exp(unnormlogp).sum())
    # Beta-binomial log-likelihood of each unique observation against every
    # candidate abundance n in the support (PyMC >= 4 API)
    logpbb = pymc.logp(
        pymc.BetaBinomial.dist(n=support, alpha=1, beta=1), unique[:, None]
    )
    logptot = logpbb + logp
    # Marginalize the abundance (axis=1) and weight by the counts
    return (at.log(at.exp(logptot).sum(axis=1)) * counts).sum()
```
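If I have set up the broadcasting correctly, the `sum(axis=1)` marginalizes the latent abundance $n$ out of the joint, so each unique observation $y$ contributes

$$
\log p(y \mid \mu, r, k) = \log \sum_{n=1}^{9999} \mathrm{BetaBinomial}(y \mid n, \alpha=1, \beta=1)\, p(n \mid \mu, r, k),
$$

weighted by its multiplicity from `unique`.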
I had to use `at.extra_ops.unique` to cut the `pymc.BetaBinomial` logp input size by an order of magnitude and speed up the calculation (it is still quite slow). For reasons I don't understand, this approach produces chains that always output the same value.
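The unique/counts shortcut only relies on repeated values contributing identical logp terms; a minimal NumPy check of the identity (with a hypothetical `log_f`):

```python
import numpy as np

values = np.array([3, 3, 5, 3, 5, 7])
log_f = lambda v: -0.5 * v**2  # hypothetical per-value log-density

unique, counts = np.unique(values, return_counts=True)
# Evaluating once per unique value and weighting by multiplicity matches
# evaluating every element directly.
assert np.isclose((log_f(unique) * counts).sum(), log_f(values).sum())
```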
The alternative I considered is the following model:
```python
with pymc.Model() as model:
    mu = pymc.Normal("mu", mu=-1, sigma=1)
    r = pymc.Exponential("r", lam=1)
    k = pymc.Gamma("k", alpha=np.log(samples.max()), beta=1)
    # Latent integer abundance, one per observation
    abundance = pymc.DensityDist(
        "ab", mu, r, k,
        logp=logp, initval=2 * samples, dtype="int64", shape=samples.size,
    )
    alpha = pymc.Gamma("alpha", 1000, 1)
    beta = pymc.Gamma("beta", 1000, 1)
    pymc.BetaBinomial("obs", alpha=alpha, beta=beta, n=abundance, observed=samples)
    trace = pymc.sample(...)
```
but absolutely nothing converges; most of the chains drift.
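One check that might narrow this down (a minimal sketch, assuming PyMC >= 4, where `Model.point_logps` evaluates each variable's log-probability at the initial point):

```python
# Run inside the model context above (hence the `as model`); a -inf or
# nan entry would pin down which term fails before sampling even starts.
print(model.point_logps())
```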
Thanks for the help.