Bayesian networks and simultaneity issues (endogeneity, causal inference)

When performing causal inference, endogeneity can take the form of omitted variables, measurement errors, or simultaneity, and one of these issues can cause another.

In this case I am specifically interested in simultaneity issues: we are unsure whether X → Y or Y → X, or we may even know that both occur in our dataset.

Consider, e.g., the following graphical model:

I heard about an interesting method to deal with this that I want to know more about.
Assume that we are interested in knowing the effect of bi_1 → bi_2.
The method works as follows: create two DAGs (in the case above) where we remove the bidirectional edge between bi_1 and bi_2 and keep only one of the arrows in each DAG, e.g. dag_1 has bi_1 → bi_2 and dag_2 has bi_2 → bi_1.

We then compute the relative likelihood of the DAGs and weight the effect from the DAG we are interested in by this calculated weight.
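To make the procedure concrete, here is a minimal sketch of how I understand it, for two variables x and y. Everything here is my own assumption for illustration: the simulated data, the linear-Gaussian models, and the use of BIC as an approximation to the marginal likelihood for the model weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated with a known direction x -> y (purely for illustration).
n = 500
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.5, size=n)

def fit_dag(cause, effect):
    """Fit the DAG cause -> effect as a linear-Gaussian model and return
    the estimated slope plus the BIC of the joint model
    N(cause; mu, s2_c) * N(effect; a + b * cause, s2_e)."""
    n = len(cause)
    # Marginal of the cause.
    s2_c = cause.var()
    loglik = -0.5 * n * (np.log(2 * np.pi * s2_c) + 1)
    # Conditional of the effect given the cause (OLS = Gaussian MLE).
    X = np.column_stack([np.ones_like(cause), cause])
    beta, *_ = np.linalg.lstsq(X, effect, rcond=None)
    s2_e = (effect - X @ beta).var()
    loglik += -0.5 * n * (np.log(2 * np.pi * s2_e) + 1)
    k = 5  # cause mean + variance, intercept, slope, noise variance
    return beta[1], k * np.log(n) - 2 * loglik

slope_xy, bic_1 = fit_dag(x, y)  # dag_1: x -> y
slope_yx, bic_2 = fit_dag(y, x)  # dag_2: y -> x

# Approximate posterior model weights from the BICs (equal model priors).
bics = np.array([bic_1, bic_2])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Under dag_2 the causal effect of x on y is zero, so the averaged
# effect is just the dag_1 slope times its weight.
effect_x_on_y = w[0] * slope_xy + w[1] * 0.0
print(w, effect_x_on_y)
```

One caveat this example actually demonstrates: with linear-Gaussian models, x → y and y → x are likelihood-equivalent, so the weights come out at 0.5 each no matter which direction generated the data. As far as I understand, you need informative priors or non-Gaussian assumptions before the data can favour one direction.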

I happened to hear about this in school as a way of attacking simultaneity issues, but I can't find any resources whatsoever online. Obviously there are vast resources on Bayesian networks, but googling e.g. "simultaneity in bayesian networks" and similar queries returns nothing.

I thought this forum (for my preferred Bayesian modelling framework) might have someone with experience in these scenarios, so I'm taking a shot here:

Does anyone know what this method is called and where I can find more information about it (especially in terms of countering simultaneity issues)?

Maybe @drbenvincent knows?

I don’t know enough to give a definitive answer, sorry.

Not all causal relationships are actually estimable, so a tool like DAGitty should help tell you that.

The idea of throwing various models at the data and using model comparison methods is often used under statistical approaches, but I have no idea if that same logic works under a causal approach. I’d be very interested if anyone can provide info one way or another.

In general, if there’s uncertainty about the directionality of a relationship then the ideal thing is to intervene and see what happens. But that’s not always practical.

My only other thought would be to look at the testable implications of different DAGs. I.e. the implied independences or conditional independences.
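To illustrate that last point: different DAGs over the same variables can imply different (conditional) independences, and those are testable. A minimal sketch, using partial correlations as a crude independence check on simulated collider data (the data-generating process and all variable names are my assumptions, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Assumed collider DAG for illustration: x -> z <- y.
# It implies x and y are marginally independent, but become
# dependent once we condition on the collider z.
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + rng.normal(scale=0.5, size=n)

def corr(a, b):
    """Pearson correlation of two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    """Correlation of a and b after regressing c out of each,
    i.e. a crude test of a _||_ b given c."""
    C = np.column_stack([np.ones_like(c), c])
    ra = a - C @ np.linalg.lstsq(C, a, rcond=None)[0]
    rb = b - C @ np.linalg.lstsq(C, b, rcond=None)[0]
    return corr(ra, rb)

marginal = corr(x, y)            # near 0: consistent with the collider DAG
partial = partial_corr(x, y, z)  # strongly negative: conditioning on z opens the path
print(marginal, partial)
```

A chain x → z → y would predict the opposite pattern (marginal dependence, conditional independence given z), so this kind of check can rule some DAGs out. It cannot distinguish Markov-equivalent DAGs, though, which is exactly the bind with a two-variable x → y versus y → x.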