Are you talking about something like this:
```python
import pymc as pm

with pm.Model():
    # Dirichlet priors keep p1, p2, p3 on the probability simplex
    p1 = pm.Dirichlet("p1", a1)
    p2 = pm.Dirichlet("p2", a2)
    p3 = pm.Dirichlet("p3", a3)
    sigma = pm.HalfNormal("sigma", 1)
    # The observed B should match the linear combination up to Gaussian noise
    pm.Normal("likelihood", A[0] * p1 + A[1] * p2 + A[2] * p3, sigma, observed=B)
```
In this case `a1`, `a2`, `a3` are the concentration parameters of distr1, distr2, distr3, which set the priors for your vectors `p1`, `p2`, `p3`. `p1`, `p2`, `p3` are then inferred within this constraint so that the linear condition is satisfied up to some noise. I don’t quite understand what you mean by “shift the sampling”, but the Bayesian method is in some sense doing something like that: finding the best `p1`, `p2`, `p3` given the “Dirichlet constraints”. Or am I misunderstanding the question?
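To make that concrete, here is a minimal runnable sketch. The specific values of `a1`, `a2`, `a3`, `A`, and `B` below are placeholders I made up (three 4-dimensional concentration vectors, scalar coefficients `A`, and an observed 4-vector `B`), so substitute your own data:

```python
import numpy as np
import pymc as pm

# Placeholder data -- these values are assumptions, replace with your own
a1 = np.array([1.0, 2.0, 3.0, 4.0])     # concentration parameters of distr1
a2 = np.array([4.0, 3.0, 2.0, 1.0])     # concentration parameters of distr2
a3 = np.array([1.0, 1.0, 1.0, 1.0])     # concentration parameters of distr3
A = np.array([0.5, 0.3, 0.2])           # known coefficients of the linear combination
B = np.array([0.30, 0.28, 0.22, 0.20])  # observed vector the combination should reproduce

with pm.Model():
    p1 = pm.Dirichlet("p1", a1)
    p2 = pm.Dirichlet("p2", a2)
    p3 = pm.Dirichlet("p3", a3)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("likelihood", A[0] * p1 + A[1] * p2 + A[2] * p3, sigma, observed=B)
    idata = pm.sample()

# Posterior means of the simplex vectors
print(idata.posterior["p1"].mean(dim=("chain", "draw")).values)
print(idata.posterior["p2"].mean(dim=("chain", "draw")).values)
print(idata.posterior["p3"].mean(dim=("chain", "draw")).values)
```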