Hello, I'm trying to reproduce a model from a research paper. Long story short, their model has a 2D matrix denoted H, where each entry of H has a Beta prior whose mean u and standard deviation sigma are determined by domain knowledge. The important point is that each entry follows its own Beta distribution, i.e. H_{ij} ~ Beta(u_{ij}, sigma_{ij}). I believe this can be easily implemented like below:
# assuming H is 100*100; mu and sigma would be filled in from domain knowledge
mu = np.empty((100, 100))
sigma = np.empty((100, 100))
with pm.Model():
    H = pm.Beta("H", mu=mu, sigma=sigma, shape=(100, 100))
But following that, they say each column of H must sum to one, and because of that each column of H has a Dirichlet(alpha) prior, where alpha is a vector of length 100. Now it seems that I have to write something like below:
H = pm.Dirichlet("H", a=np.empty((100, 100)), shape=(100, 100))
My question is: is it valid in PyMC to define a stochastic variable that is generated from two different distributions at the same time? I haven't seen such a thing in the tutorials. I apologize if my question is too naive, as I am just learning Bayesian modeling, and I'm happy to clarify further.
Many thanks in advance,
Frank