I think the question is about Poisson jump processes, which are a class of stochastic models used in finance; in particular, the “compensated Poisson martingale” shown in equation (20.7).
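For intuition, the latent process can be simulated directly before worrying about inference. This is a minimal NumPy sketch with made-up values for the jump probability and jump range (not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed): jump with probability p,
# and a new level drawn from Uniform(lower, upper) when a jump occurs
n, p, lower, upper = 100, 0.05, 0.0, 500.0

mu = np.empty(n)
mu[0] = 100.0  # initial intensity level
for t in range(1, n):
    if rng.random() < p:                   # a jump occurs
        mu[t] = rng.uniform(lower, upper)  # resample the intensity level
    else:
        mu[t] = mu[t - 1]                  # otherwise the level persists

counts = rng.poisson(mu)  # observed counts given the latent intensity
```

The result is a piecewise-constant intensity path, with Poisson counts scattered around each level.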
You can write your data generating process in PyMC like this:
import numpy as np
import pymc as pm
import pytensor
import pytensor.tensor as pt
from pymc.pytensorf import collect_default_updates

n = 100       # length of the observed series; set this to len(counts)
counts = ...  # your observed count data, shape (n,)

def jump_process_dist(x0, p, lower, upper, size=None):
    def step(mu_tm1, p, lower, upper):
        # With probability p, jump to a new uniform level;
        # otherwise carry the previous level forward
        switch = pm.Bernoulli.dist(p=p)
        jump = pm.Uniform.dist(lower=lower, upper=upper)
        mu_t = switch * jump + (1 - switch) * mu_tm1
        return mu_t, collect_default_updates(mu_t)

    outputs, updates = pytensor.scan(
        step,
        outputs_info=[x0],
        non_sequences=[p, lower, upper],
        n_steps=n,
    )
    return outputs

with pm.Model() as m:
    # x0 = pm.Uniform('x0', lower=0, upper=500)
    x0 = pm.DiracDelta('x0', 100.0)
    p = 0.01
    lower = 0
    upper = 500

    mu = pm.CustomDist('mu', x0, p, lower, upper, dist=jump_process_dist)
    obs = pm.Poisson('obs', mu=mu, observed=counts)
Which produces samples like this:
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2, 5, figsize=(14, 6), dpi=144, sharex=True, sharey=True)
for i, axis in enumerate(fig.axes):
    axis.scatter(np.arange(n), pm.draw(obs))
    [spine.set_visible(False) for spine in axis.spines.values()]
    axis.grid(ls='--', lw=0.5)
    axis.set(title=f'Sample {i}')
fig.tight_layout()
plt.show()
But unfortunately you can’t sample from this, because the machinery that transforms a generative graph into a log probability of the data can’t handle the Bernoulli mixture inside the step function (I think – @ricardoV94, correct me if I’m wrong, or if it’s actually going wrong somewhere else).
If you can write down a closed-form expression for the logp yourself, you can provide it to pm.CustomDist via the logp argument and solve this problem. I believe such an expression exists, but I don’t know it (helpful, I know).
