I have a Bayesian network with discrete random variables. The initial probabilities and conditional probabilities for the variables are set by experts. There are external requirements that tell us, given specific sets of evidence (not for all variables), what the inferred probabilities for a set of nodes should be. I'd like to optimize the variable parameters so that as many requirements as possible are satisfied.

What functions would you use for this purpose? Maybe `find_MAP`?

Example: take a network with nodes A, B, C, D, where A is the parent of B and C, and B and C are the parents of D. Let's say there is a requirement that P(D=1 | A=1) should be about 65%. I'd like to calculate the parameters of B and C (and sometimes D) so that the inferred P(D=1 | A=1) is around 65%.
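To make the setup concrete, here is how I check a requirement like that by brute-force enumeration for this small network (all probability values are made-up placeholders for the expert-set parameters, not from a real model):

```python
import numpy as np

# Hypothetical expert-set parameters - every number here is illustrative.
p_a = np.array([0.75, 0.25])              # P(A); drops out once we condition on A=1
p_b_given_a = np.array([[0.7, 0.3],       # P(B | A=a), indexed [a, b]
                        [0.2, 0.8]])
p_c_given_a = np.array([[0.6, 0.4],       # P(C | A=a), indexed [a, c]
                        [0.1, 0.9]])
p_d_given_bc = np.array([[[0.9, 0.1],     # P(D | B=b, C=c), indexed [b, c, d]
                          [0.5, 0.5]],
                         [[0.4, 0.6],
                          [0.2, 0.8]]])

def p_d1_given_a1():
    """P(D=1 | A=1) by summing over the hidden states of B and C."""
    total = 0.0
    for b in (0, 1):
        for c in (0, 1):
            total += p_b_given_a[1, b] * p_c_given_a[1, c] * p_d_given_bc[b, c, 1]
    return total

print(p_d1_given_a1())  # with these numbers: 0.716, so the requirement (0.65) is off by ~0.066
```

The requirement is then "adjust the free entries of `p_b_given_a` / `p_c_given_a` (and sometimes `p_d_given_bc`) until this inferred value lands near 0.65".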

My initial thoughts look something like this (not the same network as above, just a similar experiment):

```
def general_cpd(*args, probabilities):
    # pick the CPT entry matching the parents' states
    lookup_table = pytensor.shared(probabilities)
    return lookup_table[args]

def pymc_test():
    with pm.Model() as model:
        a_probs = np.array([0.75, 0.25])
        b_probs = np.array([0.35, 0.65])
        a = pm.Categorical('a', p=a_probs)
        b = pm.Categorical('b', p=b_probs)
        # c is conditional on a and b; probabilities_outside holds its CPT
        c = pm.Categorical('c', p=general_cpd(a, b, probabilities=probabilities_outside))

        def error_function(probs_a, probs_b):
            # run inference and return the distance from the required
            # probability - this would happen using another library
            ...

        error = pm.Deterministic('error', error_function(a_probs, b_probs))
        observed_error = pm.Normal('observed_error', mu=error, sigma=0.01, observed=0)
        map_estimate = pm.find_MAP()
```

Could this approach work, or am I way off? Should `a_probs` and `b_probs` also be `pytensor.shared` variables?

Another question: how can I use `find_MAP` so that it excludes some variables? In this example I only want to optimize variables 'a' and 'b', but not the conditional 'c'. Do I just set `observed=something`?

Thank you for any thoughts!