Intercausal Reasoning in Bayesian Networks


#1

Are there any examples of intercausal reasoning using pymc3?
For example, given this graph (or any graph for that matter, this is just the first one that comes to mind).

We can model it, set some priors, observe e.g. WetGrass, and then calculate (sample) the posteriors (let's assume this counts as learning / model training).
But how would you compute P(Cloudy=True | Sprinkler=True), and expose some interface for the user?
Any continuous example, not only discrete, or even better a combined one, would be great.


#2

Following the definition of conditional probability, you have:
P(A|B) = P(A and B)/P(B)
which means:
P(Cloudy=True | Sprinkler=True) = P(Cloudy=True and Sprinkler=True) / P(Sprinkler=True) = (0.5 · 0.1) / (0.5 · 0.1 + 0.5 · 0.5) = 0.1 / (0.1 + 0.5) = 1/6
(using the CPT values from the graph: P(Cloudy=True) = 0.5, P(Sprinkler=True | Cloudy=True) = 0.1, P(Sprinkler=True | Cloudy=False) = 0.5; the prior P(Cloudy) = 0.5 cancels from numerator and denominator, which is why the shorter form works.)
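For a two-node discrete case like this you don't even need sampling; exact inference by enumerating the joint table gives the same 1/6. A minimal sketch (the CPT numbers are the 0.5/0.1/0.5 values assumed above):

```python
# Exact inference by enumeration on the Cloudy -> Sprinkler fragment.
# CPT values assumed from the graph in the original post:
#   P(Cloudy=T) = 0.5
#   P(Sprinkler=T | Cloudy=T) = 0.1,  P(Sprinkler=T | Cloudy=F) = 0.5
p_cloudy = {True: 0.5, False: 0.5}
p_sprinkler = {True:  {True: 0.1, False: 0.9},   # given Cloudy=True
               False: {True: 0.5, False: 0.5}}   # given Cloudy=False

def joint(cloudy, sprinkler):
    """P(Cloudy=cloudy, Sprinkler=sprinkler) via the chain rule."""
    return p_cloudy[cloudy] * p_sprinkler[cloudy][sprinkler]

# P(Cloudy=T | Sprinkler=T) = P(C=T, S=T) / P(S=T)
numerator = joint(True, True)
denominator = sum(joint(c, True) for c in (True, False))
print(numerator / denominator)  # 1/6 ≈ 0.1667
```

This is exactly the conditional-probability formula above, just spelled out as code; for larger networks the denominator sum grows, which is where sampling comes in.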


#3

Would you mind giving a code example for this problem?
I don’t think it’s that trivial to model this in pymc3… I mean, what if you have a conditional probability density function instead of a table, and also a few more variables? Integrating (or summing) the denominator for marginalising is not an option, I’d say…


#4

Hmm, it depends on what your question is and how much information you have at each node. The above example could be rewritten as a mixture model, which is capable of handling continuous densities. But that might not be the ideal solution for other kinds of Bayesian DAGs.


#5

Here’s a great example. Now, given that structure and the combination of discrete and continuous probabilities, I’d like to compute e.g. P(C3_a < C3 < C3_b | D1=0, D2=2). How could I do that?


#6

I would set the observed values of D1 and D2 to 0 and 2, run the sampler, take the trace of C3, and count the fraction of samples that fall within (C3_a, C3_b).
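The recipe above can be sketched without PyMC3 using plain forward sampling plus rejection on the discrete evidence (keeping only samples where D1 and D2 match the observations stands in for setting `observed=` and sampling a trace). The toy model below is invented for illustration; the real structure comes from the linked example:

```python
import numpy as np

# Hypothetical model (all distributions/parameters made up):
#   D1 ~ Bernoulli(0.3)
#   D2 ~ Categorical over {0, 1, 2} with p = [0.2, 0.3, 0.5]
#   C3 | D1, D2 ~ Normal(D1 + 2*D2, 1)
rng = np.random.default_rng(0)
n = 200_000

d1 = rng.binomial(1, 0.3, size=n)
d2 = rng.choice([0, 1, 2], p=[0.2, 0.3, 0.5], size=n)
c3 = rng.normal(d1 + 2 * d2, 1.0)

# Condition on the discrete evidence D1=0, D2=2 by rejection:
# keep only the joint samples consistent with the observations.
keep = (d1 == 0) & (d2 == 2)
c3_post = c3[keep]

# Estimate P(C3_a < C3 < C3_b | D1=0, D2=2) as the fraction of the
# conditional trace inside the interval, here (3, 5).
c3_a, c3_b = 3.0, 5.0
p_est = np.mean((c3_post > c3_a) & (c3_post < c3_b))
print(p_est)  # ≈ 0.68 for this toy model (C3 | evidence is Normal(4, 1))
```

In PyMC3 the same counting step applies to the posterior trace of C3; rejection on discrete evidence only works when the evidence has non-negligible probability, which is why MCMC is the practical route for larger models.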