Dynamic Bayesian Network Inference

I’m trying to use a template model representation for a discrete-time dynamic Bayesian network. The real model is much more complicated, but here’s a simplified example:

Essentially, each time step has an identical structure, but then there’s some dependence at each time step on the previous one (but only on the previous one!).
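To spell out the dependence structure: what I have in mind is a first-order Markov factorization, p(x_1, ..., x_T) = p(x_1) * prod_{t=2..T} p(x_t | x_{t-1}), with the same conditional p(x_t | x_{t-1}) at every step.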

My initial thought is to use a custom model, but I’m not sure how to repeat it across time steps, and in particular how to make that repetition efficient. Any thoughts on implementing such a thing?

I’ve implemented the first step using a custom model, but I’m unsure how to proceed.

import pymc3 as pm

class TemplateModel(pm.Model):
    # one time slice: v2 depends on v1 within the slice
    def __init__(self, mean=0, sd=1, name='', model=None):
        super(TemplateModel, self).__init__(name, model)
        v1 = pm.Normal('v1', mu=mean, sd=sd)  # slice-level prior on v1
        v2 = pm.Normal('v2', mu=v1, sd=sd)

with pm.Model() as m:
    m1 = TemplateModel(1, 2, name='m1')
    m2 = TemplateModel(2, 3, name='m2')
    # toy combination of the two slices' v2 variables
    m3 = pm.Deterministic('m3', 0.9 * m1.v2 + m2.v2)
    trace = pm.sample(2000, njobs=4)
pm.traceplot(trace)
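The most naive way I can think of to repeat it is a plain Python loop, something along these lines (untested sketch; the `prev` argument feeding each slice's v1 from the previous slice's v2 is just how I imagine wiring the time dependence), but I suspect this won't be efficient for many time steps:

import pymc3 as pm

class ChainedTemplate(pm.Model):
    # hypothetical variant of TemplateModel: v1's mean comes from the previous slice's v2
    def __init__(self, prev=None, sd=1., name='', model=None):
        super(ChainedTemplate, self).__init__(name, model)
        v1 = pm.Normal('v1', mu=0. if prev is None else prev, sd=sd)
        v2 = pm.Normal('v2', mu=v1, sd=sd)

with pm.Model() as m:
    prev = None
    for t in range(5):  # 5 time steps, just for illustration
        step = ChainedTemplate(prev=prev, sd=1., name='t{}'.format(t))
        prev = step.v2  # the next slice depends only on this slice's v2
    trace = pm.sample(1000, tune=1000)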

Did you succeed in implementing DBN inference using PyMC3?
Do you have an example available?

I don’t believe I ever got it working, no.

I’ve read some Google Groups threads where it was stated that this could be done, but I’ve seen no code.
Have you tried any other library for this?

The example, at least, looks really similar to an AR(1) time series. You could maybe try to write it down as one?
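As a rough sketch of what that could look like with pm.AR1 (the priors on k and tau_e and the series length are placeholders, not anything from your actual model):

import pymc3 as pm

with pm.Model() as ar1_model:
    k = pm.Normal('k', mu=0., sd=1.)              # lag-1 coefficient
    tau_e = pm.Gamma('tau_e', alpha=1., beta=1.)  # innovation precision
    x = pm.AR1('x', k=k, tau_e=tau_e, shape=100)  # latent AR(1) series of length 100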

You could try something along these lines:

import pymc3 as pm
import theano.tensor as tt

with pm.Model():
    n = len(observed)  # `observed` is the observed time series
    v1 = pm.Normal('v1', 0, 1, shape=(n,))
    v2 = pm.Normal('v2', v1, 1, shape=(n,))
    p = pm.Normal('p', 0, 1)
    # pm.AR expects `init` to be a distribution, so wrap v2[0] in Normal.dist
    v3 = pm.AR('v3', [v2[1:], p * tt.ones(n - 1)], 1,
               init=pm.Normal.dist(mu=v2[0], sd=1), observed=observed)

You would need to parametrize the model better to avoid divergences, but as a toy example to build on, it could maybe work.
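For example, a non-centred version of v2 is usually the first thing to try against divergences; roughly (the scale of 1 and the series length are placeholders):

import pymc3 as pm

with pm.Model():
    n = 100  # placeholder series length
    v1 = pm.Normal('v1', 0., 1., shape=n)
    v2_offset = pm.Normal('v2_offset', 0., 1., shape=n)
    # non-centred parameterization: v2 = mu + sd * standard-normal offset
    v2 = pm.Deterministic('v2', v1 + 1. * v2_offset)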