(Total PyMC3 newbie here, so my question may be completely wrongheaded, in which case I’m happy to be redirected!)
I have a model that I previously used with `curve_fit`, which I'm converting to a probabilistic model. Most of it is straightforward enough: just convert the parameters to RVs.
But one of the data inputs is a time series, which I'll call `d(t)`. Previously, I used not simply the time series itself as input, but the time series with a distribution of lags/time shifts. Specifically, instead of just `d(t)` or the time-shifted `d(t-b)`, the input variable was `E[d(t-c)]` with `c ~ Poisson(b)`, i.e. a weighted average of lagged values of the series with Poisson-distributed lags, where the mean lag `b` is a parameter in the optimization. I approximated this efficiently by taking as input not simply `d(t)` but a vector of lagged values `d(t), d(t-1), d(t-2), ..., d(t-k)` with `k >> max(b)`, and then taking the dot product of this vector with the Poisson PMF for a given `b`, evaluated as an array:

```python
dn = exogs
c = scipy.stats.poisson(b)
dist = c.pmf(np.arange(len(dn), dtype=float))
e_dn = np.dot(dist, dn)
```
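For concreteness, here is a self-contained sketch of that weighting scheme on synthetic data (the series, the time index `t`, and the values of `b` and `k` are all made up for illustration):

```python
import numpy as np
from scipy import stats

b = 3.0                            # mean lag (illustrative value)
k = 30                             # truncation length, k >> b
d = np.sin(np.arange(100) / 5.0)   # synthetic time series

# Lag vector at time t: [d(t), d(t-1), ..., d(t-k)]
t = 60
dn = d[t::-1][:k + 1]

# Poisson PMF weights over lags 0..k
weights = stats.poisson(b).pmf(np.arange(k + 1))

# Expected lagged value E[d(t-c)], c ~ Poisson(b)
e_dn = np.dot(weights, dn)

# With k >> b the truncated PMF sums to ~1, so this is a
# proper weighted average of the lagged values.
assert np.isclose(weights.sum(), 1.0, atol=1e-6)
```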
I am not sure, both conceptually and practically, how to achieve something similar in PyMC3. I have specified a uniform prior for `b`; can I access the Poisson PMF for `Poisson(b)` so I can take a dot product as before? Is there a better way to do that than building the PMF manually, e.g.

```python
dist = [(b ** k) * pm.math.exp(-b) / tt.gamma(k + 1) for k in range(0, len(dn))]
```
Or am I barking up the wrong tree completely, and there’s a much better, properly probabilistic way to achieve a similar Poisson-distributed lag effect?
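As a sanity check on the manual-PMF route, computing the PMF in log space via `gammaln` is numerically safer than the `b**k * exp(-b) / gamma(k+1)` form, and the same vectorised expression should (I assume) translate to Theano ops (`tt.exp`, `tt.gammaln`) applied to the symbolic `b`, avoiding the per-`k` Python loop. A NumPy sketch of the formula, checked against scipy:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

b = 3.0                         # illustrative mean lag
k = 30                          # truncation length
ks = np.arange(k + 1, dtype=float)

# Poisson log-PMF: k*log(b) - b - log(k!), with log(k!) = gammaln(k+1)
log_pmf = ks * np.log(b) - b - gammaln(ks + 1)
dist = np.exp(log_pmf)

# Matches scipy's PMF evaluated at the same lags
assert np.allclose(dist, stats.poisson(b).pmf(ks))
```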