# Simple markov chain with aesara scan

Hello, I’m very new to PyMC so apologies that this is most likely a silly question. I’m trying to create a simple markov chain using aesara scan. I am getting an error I don’t really understand about missing inputs. Here is a minimal example:

```python
import aesara
import pymc as pm

k = 10

with pm.Model() as markov_chain:

    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)

    initial_state = pm.Bernoulli('initial_state', p=0.5)

    def transition(previous_state):
        p = transition_probs[previous_state]
        return pm.Bernoulli('chain', p=p)

    output, updates = aesara.scan(transition,
                                  outputs_info=dict(initial=initial_state),
                                  n_steps=k)

with markov_chain:
    trace = pm.sample_prior_predictive(100)
```

Thanks for any suggestions!

You should pass `transition_probs` as an explicit input to scan via `non_sequences`, because Aesara's scan basically cannot handle closing over a graph tensor defined outside the step function:

```python
with pm.Model() as markov_chain:
    ...
    def transition(previous_state, transition_probs):
        p = transition_probs[previous_state]
        return pm.Bernoulli.dist(p=p)

    output, updates = aesara.scan(transition,
                                  outputs_info=[initial_state],
                                  non_sequences=[transition_probs],
                                  n_steps=k)
    markov_chain.register_rv(output, name="mc_chain")
```

You cannot create PyMC variables inside the scan step function. You can, however, register the whole scan sequence as an RV itself, manually:

```python
import numpy as np
import aesara
import pymc as pm

k = 10

with pm.Model() as markov_chain:

    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)
    initial_state = pm.Bernoulli('initial_state', p=0.5)

    def transition(previous_state, transition_probs, old_rng):
        p = transition_probs[previous_state]
        next_rng, next_state = pm.Bernoulli.dist(p=p, rng=old_rng).owner.outputs
        return next_state, {old_rng: next_rng}

    rng = aesara.shared(np.random.default_rng())
    output, updates = aesara.scan(transition,
                                  outputs_info=dict(initial=initial_state),
                                  non_sequences=[transition_probs, rng],
                                  n_steps=k)
    markov_chain.register_rv(output, name="p_chain")

with markov_chain:
    trace = pm.sample_prior_predictive(100)
```

Don’t forget to specify updates and pass them to the sampling function, or the scan won’t be seeded properly across draws.
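To make the role of those updates concrete, here is a plain NumPy sketch (not PyMC code; all names are illustrative) of what the scan step does and why the RNG state has to be handed forward at every step, just like the `{old_rng: next_rng}` update dict does above:

```python
import numpy as np

k = 10  # number of steps, matching the model above

def transition(previous_state, transition_probs, rng):
    """One step of the chain: look up the transition probability for the
    current state and draw the next state.

    The RNG is returned alongside the state, mirroring the scan update
    {old_rng: next_rng}: if the RNG state were not carried forward,
    every step would draw from the same unadvanced state. (NumPy's
    Generator mutates in place, so this is automatic here; in Aesara
    the state is symbolic and must be threaded through explicitly.)
    """
    p = transition_probs[previous_state]
    next_state = rng.binomial(1, p)
    return next_state, rng

rng = np.random.default_rng(0)
transition_probs = np.array([0.2, 0.9])
state = rng.binomial(1, 0.5)  # initial_state ~ Bernoulli(0.5)

chain = []
for _ in range(k):
    state, rng = transition(state, transition_probs, rng)
    chain.append(state)
```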

Unfortunately, RandomVariables are a bit messy with scan.

Haha, @junpenglao beat me to it! Don’t forget the updates!

Oh, so Aesara scan can indeed handle closures? I wonder why I remember getting an error when I tried that before.

I think the problem appeared later with the gradient of the logp

Haha ok, so it is still better to write all the inputs explicitly (which makes sense).

Thanks so much for your replies, both! On your point about RandomVariables being messy with scan: is there another approach to a Markov chain that you think would be less messy? (I have tried using normal Python loops, but I have read in other posts that this will be much slower.)

Also, when I replace the sampling with `pm.sample()` to get the posterior (which I assume would be the same in this case, but not in the more complicated cases I am hoping to build up to), I get the warning `UserWarning: Moment not defined for variable` and an error too.

The code needed is what is messy/ugly (a subjective opinion), but the approach itself should be perfectly fine.

The moment issue is not an error, just a warning. You can set an `initval` when you call `model.register_rv` if you want to fix the initial point to a specific value. If you are okay with a draw from the prior, you can set `initval="prior"` to silence the warning. If your scan variable is observed, you won’t need to do either; in that case you pass the observed data as `data` when calling `model.register_rv`.

For the error, I need to try it out.

I think the error you see may be coming from the sampler proposing out-of-bounds values for the Bernoulli variable, leading to something like `p = transition_probs[2]`. Again, this is only an issue if you are not conditioning on observed data but actually sampling the scan sequence.
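A minimal NumPy illustration of that failure mode (values are made up): the probability vector has one entry per state, so any proposed state outside `{0, 1}` makes the lookup fail immediately.

```python
import numpy as np

transition_probs = np.array([0.2, 0.9])  # one transition probability per binary state

# valid states index fine
p0 = transition_probs[0]
p1 = transition_probs[1]

# a sampler unaware that the state is binary can propose e.g. 2,
# and the lookup transition_probs[2] then raises an IndexError
try:
    transition_probs[2]
    out_of_bounds_succeeded = True
except IndexError:
    out_of_bounds_succeeded = False
```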

If you manually specify a more appropriate step sampler, it seems to work:

```python
import numpy as np
import aesara
import pymc as pm

k = 10

with pm.Model() as markov_chain:

    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)
    initial_state = pm.Bernoulli('initial_state', p=0.5)

    def transition(previous_state, transition_probs, old_rng):
        p = transition_probs[previous_state]
        next_rng, next_state = pm.Bernoulli.dist(p=p, rng=old_rng).owner.outputs
        return next_state, {old_rng: next_rng}

    rng = aesara.shared(np.random.default_rng())