Hello, I’m very new to PyMC, so apologies if this is a silly question. I’m trying to create a simple Markov chain using aesara scan, but I am getting an error about missing inputs that I don’t really understand. Here is a minimal example:
import aesara
import pymc as pm

k = 10

with pm.Model() as markov_chain:
    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)
    initial_state = pm.Bernoulli('initial_state', p=0.5)

    def transition(previous_state):
        p = transition_probs[previous_state]
        return pm.Bernoulli('chain', p=p)

    output, updates = aesara.scan(fn=transition,
                                  outputs_info=dict(initial=initial_state),
                                  n_steps=k)

with markov_chain:
    trace = pm.sample_prior_predictive(100)
Thanks so much for your replies, both! On your point about RandomVariables being messy with scan: is there another approach to a Markov chain that you think would be less messy? (I have tried using normal Python loops, roughly as sketched below, but I have read in other posts that this will be much slower.)
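For reference, the loop version I tried looked roughly like this (a simplified sketch; the model and variable names are just illustrative):

import pymc as pm

k = 10

with pm.Model() as loop_chain:
    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)
    state = pm.Bernoulli('state_0', p=0.5)
    # Build the chain with one named Bernoulli per step instead of scan
    for t in range(1, k):
        state = pm.Bernoulli(f'state_{t}', p=transition_probs[state])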
Also, when I replace the prior predictive sampling with pm.sample() to get the posterior (which I assume would be the same in this case, but not in the more complicated cases I am hoping to build up to), I get the warning UserWarning: Moment not defined for variable, and an error too.
It’s the code that’s needed that is messy/ugly (a subjective opinion), but the approach itself should be perfectly fine.
The moment message is not an error, just a warning. You can set an initval when you call model.register_rv if you want to fix the initial point to a specific value. If you are okay with a draw from the prior, you can set initval="prior" to silence the warning. If your scan variable is observed, you won’t need an initval; in that case you pass the observed data as data when calling model.register_rv.
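Concretely, something like this (a minimal sketch using the names from the full example below; obs stands for hypothetical observed 0/1 data of length k):

# Fix the initial point via a prior draw to silence the moment warning
markov_chain.register_rv(mc_chain, name="mc_chain", initval="prior")

# Or, if the chain is observed, pass the data instead (obs is hypothetical)
# markov_chain.register_rv(mc_chain, name="mc_chain", data=obs)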
I think the error you see may be coming from the sampler proposing values that are out of bounds for the Bernoulli variable, leading to indexing like p = transition_probs[2]. Again, this is only an issue if you are not conditioning on observed data but actually sampling the scan sequence.
If you specify a better sampler manually, it seems to work:
import numpy as np
import aesara
import pymc as pm

k = 10

with pm.Model() as markov_chain:
    transition_probs = pm.Uniform('transition_probs', lower=0, upper=1, shape=2)
    initial_state = pm.Bernoulli('initial_state', p=0.5)

    def transition(previous_state, transition_probs, old_rng):
        p = transition_probs[previous_state]
        # Draw the next state with Bernoulli.dist and collect the RNG update
        next_rng, next_state = pm.Bernoulli.dist(p=p, rng=old_rng).owner.outputs
        return next_state, {old_rng: next_rng}

    rng = aesara.shared(np.random.default_rng())
    mc_chain, updates = aesara.scan(fn=transition,
                                    outputs_info=dict(initial=initial_state),
                                    non_sequences=[transition_probs, rng],
                                    n_steps=k)
    assert updates

# Register the scan output as a model variable, using a prior draw as initval
markov_chain.register_rv(mc_chain, name="mc_chain", initval="prior")

with markov_chain:
    pm.sample(chains=1, step=pm.BinaryMetropolis([mc_chain]))
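The main differences from your first attempt: transition_probs and an explicit shared RNG are passed through non_sequences (which is likely what caused the missing inputs error), the step function uses pm.Bernoulli.dist instead of a named model variable, and it returns the RNG update so each step produces a fresh draw. The scan output is then registered as a model variable with register_rv, and BinaryMetropolis is used since the chain states are 0/1.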