So I am trying to get my head around how discrete Bayes nets (sometimes called belief networks) relate to the kind of Bayesian models used all the time in PyMC3/Stan/etc. Here's a concrete example: smoking and COVID each independently increase your chance of being in hospital, so hospitalisation is a collider.

This can be implemented in pomegranate (just one of the relevant Python packages) as:

```
import pomegranate as pg

# Prior distributions over the two parent nodes
smokeD = pg.DiscreteDistribution({'yes': 0.25, 'no': 0.75})
covidD = pg.DiscreteDistribution({'yes': 0.1, 'no': 0.9})

# P(hospital | smoke, covid), one row per combination:
# [smoke, covid, hospital, probability]
hospitalD = pg.ConditionalProbabilityTable(
    [['yes', 'yes', 'yes', 0.9], ['yes', 'yes', 'no', 0.1],
     ['yes', 'no', 'yes', 0.1], ['yes', 'no', 'no', 0.9],
     ['no', 'yes', 'yes', 0.9], ['no', 'yes', 'no', 0.1],
     ['no', 'no', 'yes', 0.01], ['no', 'no', 'no', 0.99]],
    [smokeD, covidD])

smoke = pg.Node(smokeD, name="smokeD")
covid = pg.Node(covidD, name="covidD")
hospital = pg.Node(hospitalD, name="hospitalD")

model = pg.BayesianNetwork("Covid Collider")
model.add_states(smoke, covid, hospital)
model.add_edge(smoke, hospital)
model.add_edge(covid, hospital)
model.bake()
```

You could then calculate `P(covid|smoking, hospital) = 0.5`

with

```
model.predict_proba({'smokeD': 'yes', 'hospitalD': 'yes'})
```

and `P(covid|¬smoking, hospital)=0.91`

with

```
model.predict_proba({'smokeD': 'no', 'hospitalD': 'yes'})
```
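As a sanity check (and to make the collider arithmetic explicit), the same two posteriors can be reproduced by brute-force enumeration in plain Python, reading the probabilities straight off the CPT above:

```python
# Brute-force check of the two queries, no pomegranate needed:
# enumerate the joint P(smoke, covid, hospital) and condition by hand.
p_smoke = {'yes': 0.25, 'no': 0.75}
p_covid = {'yes': 0.1, 'no': 0.9}
# P(hospital='yes' | smoke, covid), read off the CPT above
p_hosp_yes = {('yes', 'yes'): 0.9, ('yes', 'no'): 0.1,
              ('no', 'yes'): 0.9, ('no', 'no'): 0.01}

def p_covid_given(smoke):
    """P(covid='yes' | smoke, hospital='yes') via Bayes' rule."""
    joint = {c: p_smoke[smoke] * p_covid[c] * p_hosp_yes[(smoke, c)]
             for c in ('yes', 'no')}
    return joint['yes'] / (joint['yes'] + joint['no'])

print(round(p_covid_given('yes'), 2))  # 0.5
print(round(p_covid_given('no'), 2))   # 0.91
```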

This is interesting as it demonstrates collider bias: it looks as if smoking has a protective effect against COVID if you condition on being in hospital. See *Why most studies into COVID19 risk factors may be producing flawed conclusions* for a really interesting preprint on this.

## Q1: Is there anything fundamentally different from these kinds of networks compared to the typical models used in PyMC3?

These discrete Bayes nets are often discussed in relation to causal inference. Is there anything fundamentally different about networks of discrete variables that allows causal claims to be made which can't be made in continuous or hybrid discrete + continuous networks? I assume the answer is no, but would love to know if that's not the case, or if it's still debated.

These Bayes net / belief network packages let you specify the joint distribution once and then make multiple conditional-distribution queries against it. But a PyMC model is (as far as I understand) always a conditional probability distribution, conditioned on the observed data. So if you want to make multiple queries, do you need to compose multiple models?

These Bayes net / belief network packages also offer structure learning: you provide data and the code spits out the most likely network. As far as I understand, that is not a feature PyMC aspires to?

## Q2: Can PyMC evaluate these kinds of models?

It seems that there is nothing fundamental stopping PyMC from handling these Bayes nets, at least in terms of calculating conditional probabilities? Although it might not be as direct as the `pomegranate` example above, because:

- You have to specify the appropriate discrete node type rather than just a list of probabilities. But that doesn't seem like a big deal.
- PyMC models don't have a Conditional Probability Table concept (or do they?). For binary variables I guess you can use the 'indicator variable' approach. I'm not sure if you can just write a function with some `if` and `elif` logic to handle nodes with multiple discrete category levels?
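For what it's worth, one way to sidestep `if`/`elif` chains is to store the CPT as an array and index into it with the parent states. A minimal sketch in plain NumPy is below; my (unverified) assumption is that the same indexing pattern carries over to a PyMC model with discrete parents, since the underlying tensor library supports integer indexing:

```python
import numpy as np

# CPT for P(hospital='yes' | smoke, covid) as a 2x2 array,
# rows = smoke (0='no', 1='yes'), cols = covid (0='no', 1='yes').
# Values taken from the pomegranate example above.
cpt = np.array([[0.01, 0.9],   # smoke='no'
                [0.1,  0.9]])  # smoke='yes'

smoke, covid = 1, 0             # smoke='yes', covid='no'
p_hospital = cpt[smoke, covid]  # plain indexing replaces if/elif chains
print(p_hospital)               # 0.1
```

The same idea generalises to nodes with more than two levels: a parent with k categories just becomes an axis of length k in the table.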

I’d be very grateful to anyone who can confirm I’m on the right track or to correct where I might be going wrong in my understanding. It could be very useful in general for a bit of a discussion to highlight the boundaries of PyMC and where it’s best to hand over to other related packages.