Posts Tagged ‘Causal Decision Theory’

Causal decision theory

This is part of the sequence, “An introduction to decision theory”. It is not designed to make sense as a stand-alone post.

The Autoverse was a ‘toy’ universe, a computer model which obeyed its own simplified ‘laws of physics’ – laws far easier to deal with mathematically than the equations of real-world quantum mechanics. Atoms could exist in this stylized universe, but they were subtly different from their real-world counterparts; the Autoverse was no more a faithful simulation of the real world than the game of chess was a faithful simulation of medieval warfare. It was far more insidious than chess, though, in the eyes of many real world chemists. The false chemistry it supported was too rich, too complex, too seductive by far.

Greg Egan, Permutation City

Imagine you’ve created a simulation rich enough that evolution can take place within it. But the environment is not a normal environment – it’s an environment designed to evolve new decision theories. The selective pressures are different decision scenarios which exist in different combinations in different parts of the world. On Earth, selective advantage comes about through the ability to reproduce. What could fill the same role in this simulation? The traditional answer in philosophy is rationality. Decision theory is intended to be a theory not of how an agent should decide so as to thrive but of how it should decide so as to be rational. Whether there should be a distinction between the two, and how we should think about rationality, will be the topic of a future section of this sequence. In this section I intend to discuss the standard debate of decision theory: the background.

So what determines rationality? Well, traditional philosophers may come up with criteria, but they’re happy to abandon them if they seem to fail to capture rationality in some circumstance. So that means that underlying the explicit criteria is an implicit appeal to intuition. We will know rationality when we see it (it may not come as a surprise, then, that which decision is considered to be rational is debated for many decision scenarios).

These are issues for later though. For now, imagine that our intuitions are more coherent and have been fed to the simulation. An agent that decides rationally finds that they are more likely to reproduce.

Evolution can’t happen without variation and mutation. So what is varying and mutating in this simulation? Take the original formula we discussed for expected utility:

Expected\ Utility(Decision) = \sum_{i} Probability(WorldState_{i}) \times Utility(WorldState_{i} \wedge Decision)

The utility section of this formula is fairly uncontroversial. Different decision theories involve different probabilities playing a role in this formula. So that is what will vary and mutate in the simulation: the probability term – different probabilities will be used which capture different relationships between the decision and the world state.
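To make the formula concrete, here’s a minimal Python sketch of the calculation. The umbrella scenario and all of the probabilities and utilities are invented purely for illustration:

# A minimal sketch of the expected utility formula above.
# The scenario and all probabilities/utilities are invented for illustration.

def expected_utility(world_states, probability, utility, decision):
    # Sum over world states: Probability(state) * Utility(state and decision)
    return sum(probability[s] * utility[(s, decision)] for s in world_states)

world_states = ["rain", "no rain"]
probability = {"rain": 0.3, "no rain": 0.7}
utility = {("rain", "take umbrella"): 5, ("rain", "leave umbrella"): -10,
           ("no rain", "take umbrella"): 3, ("no rain", "leave umbrella"): 6}

for decision in ["take umbrella", "leave umbrella"]:
    print(decision, expected_utility(world_states, probability, utility, decision))
# take umbrella: 0.3*5 + 0.7*3 = 3.6; leave umbrella: 0.3*(-10) + 0.7*6 = 1.2

Note that, as in the formula above, the probability of each world state here does not depend on the decision – that relationship is exactly what will vary as the simulation runs.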

The simulation will start with naive decision theory which, having at least some coherence, will survive and spread widely enough for mutations to begin to develop. Many of these will be evolutionary dead ends. Maybe one will refer not to the probability of the world state alone but instead to the probability of the decision alone – the formula will now ask what the probability is that the agent will decide a certain way. But the formula is designed to determine the decision the agent makes, so each time it seems to reach a decision a new probability will be determined for that decision and the formula will have to be recalculated. Even if it eventually finds an equilibrium point, there’s no reason to believe that this point will identify the rational decision. Such mutations will rapidly die out.
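As a toy illustration of why that mutation struggles (all numbers made up), here’s a sketch of a rule whose probability term is the probability of its own decision, so each choice it makes changes the input to its next calculation:

# Toy sketch of the self-referential mutation; all numbers are made up.
# The rule scores a decision by the probability that it will make that decision,
# so each choice shifts the very probability the next calculation uses.

p_choose_a = 0.5
for step in range(10):
    eu_a = p_choose_a * 10          # utility of A scaled by how likely A is
    eu_b = (1 - p_choose_a) * 10    # likewise for B
    choice = "A" if eu_a >= eu_b else "B"
    # observing its own choice updates the probability, forcing a recalculation
    p_choose_a = 0.9 * p_choose_a + (0.1 if choice == "A" else 0.0)
    print(step, choice, round(p_choose_a, 3))

# Even if this loop settles on a fixed point, nothing ties that point to the
# rational decision - it only tells us what the rule predicts about itself.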

Others will be more useful. Eventually, the evidential decision theory that we discussed in the last post will come into being. Presuming the environment it develops in is one which includes decision scenarios where the world state depends on the decision, it will take over from naive decision theory in these areas.

Elsewhere, however, another decision theory will evolve to deal with this same problem: causal decision theory. While evidential decision theory made use of conditional probabilities, causal decision theory makes use of the probability of a particular type of conditional. In other words, its probability term looks something like this (traditionally a box with an arrow coming out of it represents the relationship, but WordPress doesn’t render that symbol so I’m using the standard arrow):

Probability(Decision \rightarrow WorldState_{i})

This represents the probability of the subjunctive conditional relating the decision to the world state. A subjunctive conditional can also be thought of as a counterfactual conditional capturing the statement “If I were to make this decision then the world would be in this state”. Causal decision theory then considers the probability of this statement being true.

Another way of thinking of this is that while evidential decision theory asks how the decision and the world state are correlated, causal decision theory is interested only in causation between the decision and the world state. This will also resolve the difficulties faced by naive decision theory. If an agent is deciding whether to enter an air raid shelter during a bombing raid, they will note that doing so will cause them to be more likely to survive, and so they will take shelter. If naive decision theory, on the other hand, divides the world into the states where it survives and where it dies, it will fail to realise that its decision can change the probabilities of the world being each of these ways. So it won’t bother with the hassle of sheltering.
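Here’s a rough sketch of that contrast. The survival probabilities and the small cost of sheltering are invented numbers; the only point is that the naive agent’s probabilities ignore its decision while the causal agent’s respond to it:

# Rough sketch of the air raid example; all numbers are invented.
UTILITY = {"survive": 100, "die": 0}
HASSLE_OF_SHELTERING = 1

# Naive decision theory: one fixed probability per world state, whatever is decided.
P_SURVIVE_UNCONDITIONAL = 0.5

def naive_expected_utility(shelter):
    cost = HASSLE_OF_SHELTERING if shelter else 0
    return (P_SURVIVE_UNCONDITIONAL * UTILITY["survive"]
            + (1 - P_SURVIVE_UNCONDITIONAL) * UTILITY["die"]) - cost

# Causal decision theory: the decision causally changes the survival probability.
P_SURVIVE_GIVEN = {"shelter": 0.9, "stay outside": 0.3}

def causal_expected_utility(shelter):
    p = P_SURVIVE_GIVEN["shelter" if shelter else "stay outside"]
    cost = HASSLE_OF_SHELTERING if shelter else 0
    return p * UTILITY["survive"] + (1 - p) * UTILITY["die"] - cost

print("naive :", naive_expected_utility(True), naive_expected_utility(False))   # 49 vs 50: don't bother
print("causal:", causal_expected_utility(True), causal_expected_utility(False)) # 89 vs 30: take shelter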

So causal decision theory also begins to establish a foothold in the world. The next post will ask the question: what happens when causal and evidential decision theory meet in the same area of the simulation?


Newcomb’s Problem: A problem for Causal Decision Theories

August 18, 2010

This is part 2 of a sequence titled “Less Wrong and decision theory”

The previous post is “An introduction to decision theory”

In the previous post I introduced evidential and causal decision theories. The principal question that needs resolving with regard to these is whether using them leads to making rational decisions. The next two posts will show that both causal and evidential decision theories fail to do so, and will try to set the scene so that it’s clear why so much focus is given on Less Wrong to developing new decision theories.

Newcomb’s Problem

Newcomb’s Problem asks us to imagine the following situation:

Omega, an unquestionably honest, all-knowing agent with perfect powers of prediction, appears, along with two boxes. Omega tells you that it has placed a certain sum of money into each of the boxes. It has already placed the money and will not now change the amount. You are then asked whether you want to take just the money that is in the left hand box or whether you want to take the money in both boxes.

However, here’s where it becomes complicated. Using its perfect powers of prediction, Omega predicted whether you would take just the left box (called “one boxing”) or whether you would take both boxes (called “two boxing”). Either way, Omega put $1000 in the right hand box but filled the left hand box as follows:

If it predicted you would take only the left hand box, it put $1 000 000 in the left hand box.

If it predicted you would take both boxes, it put $0 in the left hand box.

Should you take just the left hand box or should you take both boxes?

An answer to Newcomb’s Problem

One argument goes as follows: By the time you are asked to choose what to do, the money is already in the boxes. Whatever decision you make, it won’t change what’s in the boxes. So the boxes can be in one of two states:

  1. Left box, $0. Right box, $1000.
  2. Left box, $1 000 000. Right box, $1000.

Whichever state the boxes are in, you get more money if you take both boxes than if you take one. In game theoretic terms, the strategy of taking both boxes strictly dominates the strategy of taking only one box. You can never lose by choosing both boxes.

The only problem is, you do lose. If you take two boxes then Omega will have predicted this, so the boxes are in state 1 and you only get $1000. If you had taken only the left box you would have got $1 000 000.
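Here’s a minimal sketch of why the dominance reasoning comes out behind, assuming Omega really is a perfect predictor so that its prediction always matches the strategy you actually follow:

# Minimal sketch of the payoffs, assuming Omega predicts perfectly,
# so its prediction simply matches the strategy you actually use.

def payoff(strategy):
    prediction = strategy                                  # perfect prediction
    left_box = 1_000_000 if prediction == "one box" else 0
    right_box = 1_000
    return left_box if strategy == "one box" else left_box + right_box

print("one box:", payoff("one box"))   # 1 000 000
print("two box:", payoff("two box"))   # 1 000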

To many people, this may be enough to make it obvious that the rational decision is to take only the left box. If so, you might want to skip the next paragraph.

Taking only the left box didn’t seem rational to me for a long time. It seemed that the reasoning described above to justify taking both boxes was so powerful that the only rational decision was to take both boxes. I therefore saw Newcomb’s Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realised that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff. From that perspective, it is rational to use a decision theory that suggests you only take the left box, because that is the decision theory that leads to the highest payoff. Taking only the left box leads to a higher payoff, and it’s also a rational decision if you ask, “What decision theory is it rational for me to use?” and then make your decision according to the theory that you have concluded it is rational to follow.

What follows will presume that a good decision theory should one box on Newcomb’s problem.

Causal Decision Theory and Newcomb’s Problem

Remember that decision theory tells us to calculate the expected utility of an action by summing the utility of each possible outcome of that action multiplied by its probability. In Causal Decision Theory, this probability is defined causally (something that we haven’t formalised and won’t formalise in this introductory sequence but which we have at least some grasp of). So Causal Decision Theory will act as if the probability that the boxes are in state 1 or state 2 above is not influenced by the decision made to one or two box (so let’s say that the probability that the boxes are in state 1 is P and the probability that they’re in state 2 is Q, regardless of your decision).

So if you undertake the action of choosing only the left box your expected utility will be equal to: (0 x P) + (1 000 000 x Q) = 1 000 000 x Q

And if you choose both boxes, the expected utility will be equal to: (1000 x P) + (1 001 000 x Q), which is always exactly 1000 higher than one boxing’s expected utility (since P + Q = 1).

So Causal Decision Theory will lead to the decision to take both boxes and hence, if you accept that you should one box on Newcomb’s Problem, Causal Decision Theory is flawed.

Evidential Decision Theory and Newcomb’s Problem

Evidential Decision Theory, on the other hand, will take your decision to one box as evidence that Omega put the boxes in state 2, to give an expected utility of (1 x 1 000 000) + (0 x 0) = 1 000 000.

It will similarly take your decision to take both boxes as evidence that Omega put the boxes into state 1, to give an expected utility of (0 x (1 000 000 + 1000)) + (1 x (0 + 1000)) = 1000

As such, Evidential Decision Theory will suggest that you one box and hence it passes the test posed by Newcomb’s Problem. We will look at a more challenging scenario for Evidential Decision Theory in the next post. For now, we’re part way along the route of realising that there’s still a need to look for a decision theory that makes the logical decision in a wide range of situations.
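To put the two calculations side by side, here’s a sketch in which the payoffs are as above, the causal probabilities P and Q are fixed regardless of the decision, and the evidential probabilities are idealised to 0 and 1 as in the text:

# Sketch of both expected utility calculations for Newcomb's Problem.
# State 1: left box empty. State 2: left box holds 1 000 000.
PAYOUT = {("one box", 1): 0,     ("one box", 2): 1_000_000,
          ("two box", 1): 1_000, ("two box", 2): 1_001_000}

# Causal Decision Theory: the state probabilities don't depend on the decision.
P, Q = 0.5, 0.5   # any fixed P and Q (with P + Q = 1) give the same ranking

def cdt_expected_utility(action):
    return P * PAYOUT[(action, 1)] + Q * PAYOUT[(action, 2)]

# Evidential Decision Theory: the decision is evidence about the state
# (idealised here to certainty, as in the text).
P_STATE2_GIVEN = {"one box": 1.0, "two box": 0.0}

def edt_expected_utility(action):
    q = P_STATE2_GIVEN[action]
    return (1 - q) * PAYOUT[(action, 1)] + q * PAYOUT[(action, 2)]

print("CDT:", cdt_expected_utility("one box"), cdt_expected_utility("two box"))  # two boxing wins by 1000
print("EDT:", edt_expected_utility("one box"), edt_expected_utility("two box"))  # one boxing wins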

Appendix 1: Important notes

While the consensus on Less Wrong is that one boxing on Newcomb’s Problem is the rational decision, my understanding is that this opinion is not necessarily held uniformly amongst philosophers (see, for example, the Stanford Encyclopedia of Philosophy’s article on Causal Decision Theory). I’d welcome corrections on this if I’m wrong but otherwise it does seem important to acknowledge where the level of consensus differs on Less Wrong compared to the broader community.

For more comments see the original Less Wrong post and this post by philosopher Richard Chappell (the post argues against some of the views expressed here, which only makes it more important to read). See also the errata to this post, in which I discuss my current views on the rational choice in Newcomb’s Problem (and hence the rational choice in problems to be discussed later, as the reasoning is similar).

For more details on this, see the results of the PhilPapers Survey where 61% of respondents who specialised in decision theory chose to two box and only 26% chose to one box (the rest were uncertain). Thanks to Unnamed for the link.

If Newcomb’s Problem doesn’t seem realistic enough to be worth considering then read the responses to this comment.

Appendix 2: Existing posts on Newcomb’s Problem

Newcomb’s Problem has been widely discussed on Less Wrong, generally by people with more knowledge on the subject than me (this post is included as part of the sequence because I want to make sure no-one is left behind and because it is framed in a slightly different way). Good previous posts include:

A post by Eliezer introducing the problem and discussing the issue of whether one boxing is irrational.

A link to Marion Ledwig’s detailed thesis on the issue.

An exploration of the links between Newcomb’s Problem and the prisoner’s dilemma.

A post about formalising Newcomb’s Problem.

And a Less Wrong wiki article on the problem with further links.

The next post is “The Smoking Lesion: A problem for Evidential Decision Theory”

An introduction to decision theory

August 18, 2010

This is part 1 of a sequence titled “Less Wrong and decision theory”

Less Wrong collects together fascinating insights into a wide range of fields. If you understood everything in all of the blog posts, then I suspect you’d be in quite a small minority. However, a lot of readers probably do understand a lot of it. Then, there are the rest of us: The people who would love to be able to understand it but fall short. From my personal experience, I suspect that there are an especially large number of people who fall into that category when it comes to the topic of decision theory.

Decision theory underlies much of the discussion on Less Wrong and, despite buckets of helpful posts, I still spend a lot of my time scratching my head when I read, for example, Gary Drescher’s comments on Timeless Decision Theory. At its core this is probably because, despite reading a lot of decision theory posts, I’m not even 100% sure what causal decision theory or evidential decision theory is. Which is to say, I don’t understand the basics. I think that Less Wrong could do with a sequence that introduces the relevant decision theory from the ground up and ends with an explanation of Timeless Decision Theory (and Updateless Decision Theory). I’m going to try to write that sequence.

What is a decision theory?

In the interests of starting right from the start, I want to talk about what a decision theory is. A decision theory is a formalised system for analysing possible decisions and picking from amongst them. Normative decision theory, which this sequence will focus on, is about how we should make decisions. Descriptive decision theory is about how we do make decisions.

Decision theory involves looking at the possible outcomes of a decision. Each outcome is given a utility value, expressing how desirable that outcome is, and each outcome is also assigned a probability. The expected utility of taking an action is equal to the sum of the utilities of each possible outcome multiplied by the probability of that outcome occurring. To put it another way, you add together the utilities of all of the possible outcomes, but each is weighted by its probability, so that a less likely outcome is taken into account to a lesser extent.

Before this gets too complicated, let’s look at an example:

Let’s say you are deciding whether to cheat on a test. If you cheat, the possible outcomes are getting full marks on the test (50% chance, 100 points of utility – one for each percentage point correct) or getting caught cheating and getting no marks (50% chance, 0 utility).

We can now calculate the expected utility of cheating on the test:

(1/2 * 100) + (1/2 * 0) = 50 + 0 = 50

That is, we look at each outcome, determine how much it should contribute to the total utility by multiplying the utility by its probability and then add together the value we get for each possible outcome.

So, decision theory would say (questions of morality aside) that you should cheat on the test if you would score less than 50% without cheating.
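The same calculation takes only a few lines of Python (the honest score of 40% used at the end is just an invented figure for comparison):

# The cheating example from above, as a calculation.
cheat_outcomes = [(0.5, 100),   # full marks
                  (0.5, 0)]     # caught cheating, no marks
expected_utility_of_cheating = sum(p * u for p, u in cheat_outcomes)
print(expected_utility_of_cheating)   # 50.0

# Not cheating gives a sure score, so (morality aside) cheating only pays
# if your honest score would be below 50. The 40 here is just an example.
honest_score = 40
print("cheat" if expected_utility_of_cheating > honest_score else "don't cheat")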

Those who are familiar with game theory may feel that all of this is very familiar. That’s a reasonable conclusion: a good approximation is that decision theory is one-player game theory.

What are causal and evidential decision theories?

Two of the principal decision theories popular in academia at the moment are causal and evidential decision theories.

In the description above, when we looked at each action we considered two factors for each outcome: the probability of that outcome occurring and the utility gained or lost if it did occur. Causal and evidential decision theories differ by defining the probability of the outcome occurring in two different ways.

Causal Decision Theory defines this probability causally. That is to say, it asks: what is the probability that, if action A is taken, outcome B will occur? Evidential Decision Theory asks what evidence the action provides for the outcome. That is to say, it asks: what is the probability of B occurring given the evidence of A? These may not sound very different, so let’s look at an example.

Imagine that politicians are either likeable or unlikeable (and they are simply born this way – they cannot change it) and the outcome of the election they’re involved in depends purely on whether they are likeable. Now let’s say that likeable people have a higher probability of kissing babies and unlikeable people have a lower probability of doing so. But this politician has just changed into new clothing and the baby they’re being expected to kiss looks like it might be sick. They really don’t want to kiss the baby. Kissing the baby doesn’t itself influence the election, that’s decided purely based on whether the politician is likeable or not. The politician does not know if they are likeable.

Should they kiss the baby?

Causal Decision Theory would say that they should not kiss the baby because the action has no causal effect. It would calculate the probabilities as follows:

If I am likeable, I will win the election. If I am not, I will not. I am 50% likely to be likeable.

If I don’t kiss the baby, I will be 50% likely to win the election.

If I kiss the baby, I will be 50% likely to win the election.

I don’t want to kiss the baby so I won’t.

Evidential Decision Theory on the other hand, would say that you should kiss the baby because doing so is evidence that you are likeable. It would reason as follows:

If I am likeable, I will win the election. If I am not, I will not. I am 50% likely to be likeable.

If I kissed the baby, there would be an 80% probability that I was likeable (to choose an arbitrary percentage).

If I did not kiss the baby, there would be a 20% probability that I was likeable.

Therefore:

Given the action of me kissing the baby, it is 80% probable that I am likeable and thus the probability of me winning the election is 80%.

Given the action of me not kissing the baby, it is 20% probable that I am likeable and thus the probability of me winning the election is 20%.

So I should kiss the baby (presuming the desire to avoid kissing the baby is only a minor desire).
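Here’s a sketch of the two calculations. The probabilities come from the example above; the 100 points for winning and the 5-point cost of kissing the baby are invented numbers standing in for a major and a minor desire:

# Sketch of the politician example; the utility numbers are invented.
WIN_UTILITY = 100   # winning the election matters a lot
KISS_COST = 5       # not wanting to kiss the baby is only a minor desire

# Causal Decision Theory: kissing has no causal effect on being likeable.
P_LIKEABLE = 0.5

def cdt_expected_utility(kiss):
    return P_LIKEABLE * WIN_UTILITY - (KISS_COST if kiss else 0)

# Evidential Decision Theory: kissing is evidence about being likeable.
P_LIKEABLE_GIVEN = {True: 0.8, False: 0.2}

def edt_expected_utility(kiss):
    return P_LIKEABLE_GIVEN[kiss] * WIN_UTILITY - (KISS_COST if kiss else 0)

print("CDT:", cdt_expected_utility(True), cdt_expected_utility(False))  # 45 vs 50: don't kiss
print("EDT:", edt_expected_utility(True), edt_expected_utility(False))  # 75 vs 20: kiss the baby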

That makes it explicit, but the basic point is this: Evidential Decision Theory asks whether an action provides evidence about the probability of an outcome occurring; Causal Decision Theory asks whether the action will causally affect the probability of an outcome occurring.

The question of whether either of these decision theories works under all circumstances that we’d want them to is the topic that will be explored in the next few posts of this sequence.

Appendix 1: Some maths

I think that when discussing a mathematical topic, there’s always something to be gained from having a basic knowledge of the actual mathematical equations underpinning it. If you’re not comfortable with maths though, feel free to skip the following section. Each post I do will, if relevant, end with a section on the maths behind it but these will always be separate to the main body of the post – you will not need to know the equations to understand the rest of the post. If you’re interested in the equations though, read on:

Decision theory assigns each action a utility based on the sum, over all possible outcomes, of the probability of each outcome multiplied by the utility of that outcome. It then applies this equation to each possible action to determine which one leads to the highest utility. As an equation, this can be represented as:

U(A) = \sum_{i} P_{i} \times D_{i}

Where U(A) is the utility gained from action A, capital sigma, the Greek letter, represents the sum over all i, P_{i} represents the probability of outcome i occurring and D_{i}, standing for desirability, represents the utility gained if that outcome occurred. Look back at the cheating on the test example to get an idea of how this works in practice if you’re confused.

Now causal and evidential decision theory differ based on how they calculate P_{i}. Causal Decision Theory uses the following equation (with the arrow standing for the causal, subjunctive conditional “if A were taken then O_{i} would occur”):

U(A) = \sum_{i} Probability(A \rightarrow O_{i}) \times D_{i}

In this equation, everything is the same as in the first equation except the probability term: the probability is calculated as the probability of O_{i} occurring if action A is taken.

Similarly, Evidential Decision Theory uses the following equation:

U(A) = \sum_{i} Probability(O_{i} \mid A) \times D_{i}

Where the probability is calculated as the conditional probability of O_{i} given that A is true.

If you can’t see the distinction between these two equations, then think back to the politician example.

Appendix 2: Important Notes

The question of how causality should be formalised is still an open one; see cousin_it’s comments on the original post at Less Wrong. As an introductory level post, we will not delve into these questions here, but it is worth noting that there is some debate about how exactly to interpret causal decision theory.

This post was originally posted to Less Wrong. Visit that post to see comments from readers there.

The next post is “Newcomb’s Problem: A problem for Causal Decision Theories”.