I’m using a Markov Chain (observed state) model as defined here:
```python
import pymc3 as pm
import theano.tensor as tt


class Markov1stOrder(pm.Categorical):
    """Hidden Markov Model states.

    Adapted from https://github.com/hstrey/Hidden-Markov-Models-pymc3/blob/master/Multi-State%20HMM.ipynb

    Parameters
    ----------
    p_trans : tensor
        transition probability
        shape = (N_states, N_states)
    p_t0 : tensor
        starting probability
        shape = (N_states,)
    """

    def __init__(self, p_t0=None, p_trans=None, *args, **kwargs):
        super(pm.Categorical, self).__init__(*args, **kwargs)
        self.p_trans = p_trans  # Transition probabilities (1st-order)
        self.p_t0 = p_t0        # Initial probability for t=0 (0th-order)
        self.mean = 0.
        self.mode = tt.cast(0, dtype='int64')

    def logp(self, x):
        """Log-probability of the observed state sequence x."""
        p_trans = self.p_trans
        p_t0 = self.p_t0
        # We need the probability of the next state given the current state, P(x_t | x_{t-1}).
        # Indexing into the transition matrix with x[:-1] yields, for every point but the last,
        # the categorical distribution over its successor.
        p_t = p_trans[x[:-1]]
        # x_i is what we are trying to predict
        x_i = x[1:]
        # The first point is not transitioned to, so it is not scored by the transition distribution
        likelihood_t = pm.Categorical.dist(p_t).logp(x_i)  # likelihood of transitioned-to points (dynamic process)
        return pm.Categorical.dist(p_t0).logp(x[0]) + tt.sum(likelihood_t)  # static first point + dynamic next points
```
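To sanity-check the indexing trick in `logp`, here is a small NumPy sketch of the same computation (the probabilities and the observed sequence are made-up toy values):

```python
import numpy as np

# Toy 3-state chain and a short observed sequence (hypothetical values).
p_t0 = np.array([0.5, 0.3, 0.2])            # initial-state probabilities
p_trans = np.array([[0.9,  0.05, 0.05],     # row i = P(next state | current state i)
                    [0.1,  0.8,  0.1],
                    [0.25, 0.25, 0.5]])
x = np.array([0, 0, 1, 2])                  # observed state sequence

# p_trans[x[:-1]] picks, for each time step, the categorical
# distribution over the next state given the current state.
p_t = p_trans[x[:-1]]                       # shape (len(x) - 1, 3)
x_i = x[1:]

# Log-likelihood: static first point + vectorized transition terms.
logp = np.log(p_t0[x[0]]) + np.log(p_t[np.arange(len(x_i)), x_i]).sum()
```

For the sequence `[0, 0, 1, 2]` this multiplies the initial probability of state 0 by the transition probabilities 0→0, 0→1, and 1→2, which is exactly what the theano version computes symbolically.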
I’d like to test whether my model can simulate data and then recover the generating parameters, i.e. I’d like to set the parameters, call `pm.sample_prior_predictive()`, then reinitialize the model and recover the parameters.
To sample from the prior, I need to implement the `random()` method, right?
I was following an example here and came up with this:
```python
def random(self, point=None, size=None):
    p_trans = self.p_trans  # Transition probabilities (1st-order)
    p_t0 = self.p_t0        # Initial probabilities for t=0
    p_t0_, p_trans_ = pm.distributions.multivariate.draw_values([p_t0, p_trans], point=point)

    def single_dim_gen_sample(p_t0, p_trans, size):
        x = [pm.distributions.dist_math.random_choice(p=p_t0, size=size)]
        for _ in range(size - 1):
            x_i = pm.distributions.dist_math.random_choice(p=p_trans[x[-1], :], size=size)
            x.append(x_i)
        return x

    return pm.distributions.multivariate.generate_samples(single_dim_gen_sample,
                                                          p_t0=p_t0_,
                                                          p_trans=p_trans_,
                                                          size=size,
                                                          broadcast_shape=(size,))
```
I’m sure this is not the best implementation, but I don’t see how to vectorize it, since each point in a Markov chain depends on the previous one…
My specific issue is:
I’m not too sure how the `size` argument works… it is the number of samples you want to draw, right?
One of my parameters (`p_t0`, where `p_t0.shape == (5,)`) gets sampled by `draw_values` as a (5, 500) matrix, while the other parameter seems to keep the size it had at initialization, (5, 5). Why? Why is it not broadcast to (5, 5, 500)?
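For reference, NumPy’s own vectorized sampling prepends the draw dimension rather than appending it, so per-draw sampling of a (5,)-shaped probability vector gives a (500, 5) array, while a quantity that is never resampled keeps its original shape. A quick shape check (using `dirichlet` purely as a stand-in for whatever priors the probabilities actually have):

```python
import numpy as np

rng = np.random.default_rng(0)

# A (5,)-shaped probability vector resampled 500 times: the draw
# dimension is prepended, giving (500, 5).
p_t0_draws = rng.dirichlet(np.ones(5), size=500)

# Resampling each of 5 rows 500 times would instead give (500, 5, 5);
# a fixed (5, 5) matrix passed through untouched keeps its (5, 5) shape.
p_trans_draws = rng.dirichlet(np.ones(5), size=(500, 5))
```

That makes me suspect `draw_values` only resamples parameters that are free random variables in the model, and passes constants through unchanged, but I’d appreciate confirmation.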
Additions and comments to this approach are appreciated!