A causal calculus: Processing causal information
This is post 2 in a sequence exploring formalisations of causality, entitled “Reasoning with causality”.
The first post is “Causality and graphical methods”.
In the previous post, I introduced Pearl’s graphical method for discussing causality. In this post, we will attempt to answer the question: how can we process causal information that is given to us?
A definition of causality
The previous post introduced the idea of surgery – that is, reaching in and modifying the value of a node in a causal graph. Pearl defines causality in terms of this: A can be said to cause B if performing surgery on A (i.e. changing the value of A) can change the value of B. To put it another way, if we specify the value of A surgically via a new equation, A causes B if the value of B depends on this new equation.
So take the graph from the previous post. Y can be said to cause Z in this graph because, if we reached in and changed the value of Y (while leaving X the same), the value of Z would change, since Z is simply equal to 2Y.
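To make surgery concrete, here is a toy version of that chain in Python. The equation Y = X + 1 is my own assumption for illustration; the post only specifies that Z = 2Y:

```python
def evaluate(x, y_surgery=None):
    """Evaluate the chain X -> Y -> Z.

    y_surgery, if given, performs surgery on Y: Y's own structural
    equation is discarded and Y is pinned to the supplied value,
    while X is left untouched.
    """
    y = (x + 1) if y_surgery is None else y_surgery  # assumed equation for Y
    z = 2 * y  # Z is simply equal to 2Y, as in the post
    return y, z

# Natural run: Y follows its structural equation.
print(evaluate(3))                # (4, 8)

# Surgery: reach in and set Y = 10 while leaving X the same.
print(evaluate(3, y_surgery=10))  # (10, 20)
# Z changed when we changed Y, so Y causes Z in this graph.
```

The fact that Z responds to the new equation for Y, regardless of X, is exactly Pearl’s criterion for Y causing Z.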
A causality calculus
Having just established a graph-based, rather than equation-based, view of causality, Pearl points out the benefits of being able to discuss causality in a formal language. Just as we have Boolean algebra for deductive logic and algebras for discussing probability, we now need a causal calculus.
He sets out to establish such a calculus by adding a new operator, do(), to standard probability theory. do() is designed to capture causal ideas. Take the wet grass example: a naive robot might conclude from “if the grass is wet then it rained” and “if I break this bottle, the grass is wet” that “if I break this bottle, then it rained”. The do() operator should allow us to differentiate between observing that the grass is wet and undertaking (doing) an action that makes the grass wet. It should then let us determine that undertaking an action that makes the grass wet does not change the probability that it rained.
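The distinction can be sketched numerically. The probabilities below are made-up numbers for illustration; the point is only that conditioning on an observation of wet grass moves the probability of rain, while do(wet) leaves it at the prior:

```python
# Hypothetical numbers for the wet-grass example (not from Pearl).
P_RAIN = 0.3          # prior probability of rain
P_WET_IF_RAIN = 0.9   # P(wet | rain)
P_WET_IF_DRY = 0.1    # P(wet | no rain)

# Observation: P(rain | wet) via Bayes' rule.
p_wet = P_RAIN * P_WET_IF_RAIN + (1 - P_RAIN) * P_WET_IF_DRY
p_rain_given_wet = P_RAIN * P_WET_IF_RAIN / p_wet

# Action: do(wet) severs the arrow from rain into wet grass, so the
# intervention carries no information about rain: P(rain | do(wet)) = P(rain).
p_rain_given_do_wet = P_RAIN

print(round(p_rain_given_wet, 3))  # about 0.794: observing raises the prior
print(p_rain_given_do_wet)         # 0.3: acting leaves the prior unchanged
```

Breaking the bottle is an action, so it should behave like the second calculation, not the first.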
How do we specify the do() operator so that this occurs? Pearl proposes three rules that will allow do() operators to be manipulated in useful ways.
The first of these rules is called “Ignoring observations”. What it says is that the probability of y given z, w and the undertaking of the action x is the same as the probability of y given simply w and the undertaking of the action x. Obviously this won’t be true in all situations, and so there is a precondition that must be met before rule 1 can be used: Y and Z must be conditionally independent given X and W (in a suitably modified version of the causal graph). Which is to say: knowledge of Z will make no difference to the probability of Y if you already have X and W. Rule 1 therefore allows you to ignore an irrelevant observation.
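For reference, the standard statement of rule 1 from the do-calculus, where $G_{\overline{X}}$ denotes the causal graph with all arrows pointing into the nodes in X deleted, is:

```latex
% Rule 1 (ignoring observations)
P(y \mid do(x), z, w) = P(y \mid do(x), w)
\quad \text{if} \quad (Y \perp Z \mid X, W)_{G_{\overline{X}}}
```

Note that the conditional independence is checked in the mutilated graph $G_{\overline{X}}$, not the original one: deleting the arrows into X reflects the fact that x was set by action rather than observed.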
The second rule is called “Action/observation exchange”. It says that the probability of y given w and the undertaking of the actions x and z is the same as the probability of y given w, the observation z and the undertaking of the action x. Once again, there is a precondition: Y and Z must be conditionally independent given X and W (again, in a suitably modified version of the graph). Which is to say that if you know X and W, then Z doesn’t change the probability of Y. Rule 2 therefore allows you to use this conditional independence to swap an action (a do() statement) for an observation of the same variable.
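Rule 2 is standardly written as follows, where $G_{\overline{X}\,\underline{Z}}$ denotes the graph with all arrows into X deleted and all arrows out of Z deleted:

```latex
% Rule 2 (action/observation exchange)
P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)
\quad \text{if} \quad (Y \perp Z \mid X, W)_{G_{\overline{X}\,\underline{Z}}}
```

Cutting the arrows out of Z removes exactly the paths by which acting on Z could differ from observing Z.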
The third rule is called “Ignoring actions”. It is the equivalent of the first rule, but for actions: while the first rule allowed you to ignore an irrelevant observation, this rule allows you to ignore an irrelevant action. Its precondition is again a conditional independence requirement on a modified graph, and it says that the rule can be used when the action on Z provides no additional information about Y given X and W.
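Rule 3 is standardly written as follows, where $Z(W)$ is the set of Z-nodes that are not ancestors of any W-node in $G_{\overline{X}}$:

```latex
% Rule 3 (ignoring actions)
P(y \mid do(x), do(z), w) = P(y \mid do(x), w)
\quad \text{if} \quad (Y \perp Z \mid X, W)_{G_{\overline{X}\,\overline{Z(W)}}}
```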
Conclusion
The do() operator is Pearl’s way of developing a causal calculus, and the graphical method is his way of defining causality. The next post will bring these together – and hopefully explain them both more clearly – by following an example Pearl provides in Causality of how they can be used to do causal reasoning.
The next post is “Reasoning with causality: an example”
