
## Causal decision theory

This is part of the sequence, “An introduction to decision theory”. It is not designed to make sense as a standalone post.

> The Autoverse was a ‘toy’ universe, a computer model which obeyed its own simplified ‘laws of physics’ – laws far easier to deal with mathematically than the equations of real-world quantum mechanics. Atoms could exist in this stylized universe, but they were subtly different from their real-world counterparts; the Autoverse was no more a faithful simulation of the real world than the game of chess was a faithful simulation of medieval warfare. It was far more insidious than chess, though, in the eyes of many real world chemists. The false chemistry it supported was too rich, too complex, too seductive by far.
>
> — Greg Egan, *Permutation City*

Imagine you’ve created a simulation rich enough that evolution can take place within it. But the environment is not a normal one – it’s an environment designed to evolve new decision theories. The selective pressures are decision scenarios, present in different combinations in different parts of the world. On Earth, selective advantage comes from the ability to reproduce. What could fill the same role in this simulation? The traditional answer in philosophy is rationality. Decision theory is intended to be a theory not of how an agent should decide so as to thrive, but of how it should decide so as to be rational. Whether there should be a distinction between the two, and how we should think about rationality, will be the topic of a future section of this sequence. In this section I intend to discuss the standard debate of decision theory – the background.

So what determines rationality? Well, traditional philosophy may come up with criteria, but philosophers are happy to abandon them if they seem to fail to capture rationality in some circumstance. So underlying the explicit criteria is an implicit appeal to intuition: we will know rationality when we see it. (It may not come as a surprise, then, that which decision counts as rational is debated for many decision scenarios.)

These are issues for later, though. For now, imagine that our intuitions are more coherent and have been fed to the simulation. An agent that decides rationally is more likely to reproduce.

Evolution can’t proceed without variation and mutation. So what is varying and mutating in this simulation? Take the original formula we discussed for expected utility:

$Expected \ Utility (Decision) =\sum_{i}Probability(WorldState_{i})\times Utility(WorldState_{i}\ \wedge \ Decision )$

The utility term of this formula is fairly uncontroversial. Different decision theories involve different probability terms playing a role in this formula. So that is what will vary and mutate in the simulation: different probabilities will be used, capturing different relationships between the decision and the world state.
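The formula above can be sketched directly in code. This is a minimal illustration, not anything from the sequence itself: the umbrella scenario, the probabilities, and the utilities are all invented.

```python
# A minimal sketch of the expected utility formula above. The scenario and
# all numbers are invented for illustration.

def expected_utility(decision, world_states, probability, utility):
    """Sum Probability(state) * Utility(state AND decision) over world states."""
    return sum(probability[state] * utility[(state, decision)]
               for state in world_states)

# Hypothetical scenario: deciding whether to carry an umbrella,
# with a 30% chance of rain.
states = ["rain", "dry"]
prob = {"rain": 0.3, "dry": 0.7}
util = {
    ("rain", "umbrella"): 0,  ("rain", "no umbrella"): -10,
    ("dry", "umbrella"): -1,  ("dry", "no umbrella"): 1,
}

for d in ["umbrella", "no umbrella"]:
    print(d, expected_utility(d, states, prob, util))
```

Note that `prob` here depends only on the world state, not on the decision – exactly the feature of the naive formula that the mutations below will tamper with.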

The simulation will start with naive decision theory, which, having at least some coherence, will survive and spread widely enough for mutations to begin to develop. Many of these will be evolutionary dead ends. Maybe one will refer not to the probability of the world state but instead to the probability of the decision itself – the formula will now ask how probable it is that the agent will decide a certain way. But the formula is designed to determine the decision the agent makes, so each time it seems to reach a decision, a new probability must be assigned to that decision and the formula recalculated. Even if this process eventually finds an equilibrium point, there’s no reason to believe that this point will identify the rational decision. Such mutations will rapidly die out.
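A toy sketch of why this mutant fails, with invented options and numbers: because its score for a decision depends on the probability of making that decision, ‘deciding’ changes the formula’s own inputs, and the loop can settle on an equilibrium that is plainly not the rational choice.

```python
# Toy sketch of the self-referential mutant. Its score for a decision
# depends on the probability of making that decision, so reaching a
# decision changes the formula's own inputs. All numbers are invented.

utilities = {"A": 1.0, "B": 2.0}   # B is plainly the better option

def mutant_score(option, p_decide):
    # Weight utility by the probability of the decision itself.
    return p_decide[option] * utilities[option]

# The agent starts off strongly expecting itself to pick A.
p = {"A": 0.99, "B": 0.01}
for _ in range(10):
    choice = max(p, key=lambda o: mutant_score(o, p))
    # Having 'decided', the agent updates the probability of that decision,
    # which means the scores must be recomputed on the next pass.
    p = {o: (0.99 if o == choice else 0.01) for o in p}

# The loop settles on "A": an equilibrium, but not the rational decision.
print(choice)
```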

Others will be more useful. Eventually, the evidential decision theory that we discussed in the last post will come into being. Presuming it develops in an environment that includes decision scenarios where the world state depends on the decision, it will take over from naive decision theory in these areas.

Elsewhere, however, another decision theory will evolve to deal with this same problem: causal decision theory. While evidential decision theory made use of conditional probabilities, causal decision theory makes use of the probability of a particular type of conditional. In other words, its probability term looks something like this (traditionally a box with an arrow coming out of it represents the relationship, but WordPress doesn’t render that symbol, so I’m using the standard arrow):

$Probability(Decision \rightarrow WorldState_{i})$

This represents the probability of the subjunctive conditional relating the decision to the world state. A subjunctive conditional can also be thought of as a counterfactual conditional capturing the statement “If I were to make this decision then the world would be in this state”. Causal decision theory then considers the probability of this statement being true.

Another way of thinking about this is that while evidential decision theory asks how the decision and the world state are correlated, causal decision theory is interested only in causation between the decision and the world state. This also resolves the difficulties faced by naive decision theory. If an agent is deciding whether to enter an air raid shelter during a bombing raid, it will note that sheltering causes it to be more likely to survive, and so it will take shelter. Naive decision theory, on the other hand, divides the world into the state where the agent survives and the state where it dies, and fails to realise that its decision can change the probabilities of the world being each of these ways. So it won’t bother with the hassle of sheltering.
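The contrast can be made numeric. In this hypothetical sketch (all probabilities and utilities invented), the causal view lets the probability of survival depend on what the decision causes, while the naive view fixes it in advance:

```python
# A numeric sketch of the air raid example. The probabilities and
# utilities are invented for illustration.

utility = {"survive": 100, "die": 0}

# Causal view: the probability of surviving depends on what the
# decision causes.
p_survive_caused = {"shelter": 0.75, "stay outside": 0.25}

def causal_eu(decision):
    p = p_survive_caused[decision]
    return p * utility["survive"] + (1 - p) * utility["die"]

# Naive view: one fixed probability of surviving, the same whichever
# decision is made, so sheltering is pure hassle (costing, say, 1).
P_SURVIVE_FIXED = 0.5

def naive_eu(decision):
    cost = 1 if decision == "shelter" else 0
    return P_SURVIVE_FIXED * utility["survive"] - cost

print(causal_eu("shelter"), causal_eu("stay outside"))  # sheltering wins
print(naive_eu("shelter"), naive_eu("stay outside"))    # sheltering loses
```

Under the causal calculation sheltering clearly comes out ahead; under the naive one, the fixed survival probability makes the shelter look like nothing but a cost.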

So causal decision theory also begins to establish a foothold in the world. The next post will ask the question: what happens when causal and evidential decision theory meet in the same area of the simulation?