In this post, I discuss the Sleeping Beauty problem, which explores how perception can affect assessment of probabilities. First, the video that introduced me to the concept:
If you’re not in a position to watch the video, or would rather not, here’s the gist of the puzzle: You have devised a means of administering an amnesiac that causes a person to lose short-term memory of a specific event. You have arranged with Sleeping Beauty, who only awakens when someone wakes her up, that she will go to sleep on Saturday and that you will toss a coin on Sunday. If the coin toss is heads, you will wake her up on Monday, ask her a question, and send her on her way. If the coin toss is tails, you will wake her up on Monday, ask her the question, put her back to sleep, administer the amnesiac so that she forgets having woken up, wake her up on Tuesday, ask her the question again, and send her on her way.
Phew. What a set-up.
Anyway, the question you will ask is, “What do you think is the probability that the coin came up heads?”
The underlying probability here is straightforward: The coin toss is independent of anything that comes after it, so there is a 1/2 chance that the coin came up heads. The key aspect of this problem is that Sleeping Beauty is explicitly asked about her perception, rather than about the actual probability. Sleeping Beauty may reason that the coin has a 1/2 chance regardless, so nothing has changed; or she may reason that there are three possible scenarios she could be in (heads and Monday [\(H_M\)], tails and Monday [\(T_M\)], or tails and Tuesday [\(T_T\)]), and that since those three scenarios are indistinguishable to her, heads has a 1/3 chance now.
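Both numbers fall out of a quick simulation. Here’s a minimal sketch in Python (the function and variable names are mine, not from the video or the literature): counted per coin toss, heads comes up half the time; counted per awakening, only a third of awakenings follow a heads toss.

```python
import random

def sleeping_beauty(trials=100_000):
    """Tally heads two ways: once per coin toss, once per awakening."""
    heads_tosses = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        # Heads: she is awakened on Monday only. Tails: Monday and Tuesday.
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_tosses += 1
            heads_awakenings += 1
    print("heads per toss:     ", heads_tosses / trials)                # ~0.50
    print("heads per awakening:", heads_awakenings / total_awakenings)  # ~0.33

sleeping_beauty()
```

The simulation doesn’t settle the dispute, of course; it just shows that the halfer and the thirder are answering two different counting questions.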
Supporters of the 1/3 answer, called “thirders”, have a surprising amount of traction. As Roger White notes, the majority of published articles, from some fairly august scholars, support the 1/3 position. The so-called “halfer” position, in which the coin’s probability is not affected by Sleeping Beauty’s post hoc perceptions, is in the definite minority.
Perception and Probability
This argument reminds me of the philosophical bloodshed that continues over the so-called Monty Hall problem. In the Monty Hall problem, though, the controversy is not over anyone’s perception; it is over straightforward probability. Here is the problem: You’re on a game show. The host asks you to choose among three identical doors. Behind one of the doors is a new car; behind each of the other two is a goat. The host knows where each item is. You choose a door. Before the host opens that door, he opens one of the other doors, knowing that he will be revealing a goat. He then asks you if you want to change your door selection. Should you do so?
This is somewhat the mirror image of the Sleeping Beauty problem. Many people, including professors at institutions like MIT, have argued that it doesn’t matter whether you switch doors or not, because the probability is now 1/2: you’ve received new information, and there are only two doors where the car could be.
The key to the Monty Hall problem, though, is that the host knows where the car is. Regardless of which door you choose, it is possible for him to reveal a goat behind one of the other two doors. If you chose the door with the car (1/3 chance) at the outset, he can open either of the other doors. If you chose one of the doors with a goat (2/3 chance) at the outset, he can open the other door hiding a goat. The 1/3 chance that you chose the door with the car at the outset hasn’t changed, so your door still has a 1/3 chance of hiding the car, and it’s in your best interest to switch.
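If the explanation doesn’t convince, simulation might. Here’s a minimal sketch in Python (names are mine) that plays the game both ways:

```python
import random

def monty_hall(trials=100_000):
    """Estimate the win rates for staying with the first door vs. switching."""
    stay_wins = 0
    switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the contestant's nor the car's.
        opened = random.choice([d for d in range(3) if d != choice and d != car])
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != choice and d != opened)
        stay_wins += (choice == car)
        switch_wins += (switched == car)
    print("stay:  ", stay_wins / trials)    # ~1/3
    print("switch:", switch_wins / trials)  # ~2/3

monty_hall()
```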
Here’s another puzzle, courtesy of Barry is Puzzled: Imagine there are three empty, identical, dust-covered boxes. Adam and Bob are shown the boxes, then blindfolded. Carl puts a coin inside one of the boxes, then removes the blindfolds, tells Adam and Bob what’s happened, and asks each what the probability is that the coin is in the first box. Adam says it’s 1/3. Bob, noticing that the dust has been disturbed only on the first two boxes, says it’s 1/2. Here is part of the philosophical essence of the Sleeping Beauty question: The perception of the problem is based on how the perceiver interprets the data given. Adam sees three boxes and no reason to modify the original estimate; Bob sees data that allows him to eliminate one box, so he does.
To be fair to Adam, it’s possible that Carl is playing a little prank. Carl could have carefully opened the third box in such a way as to leave the dust in place, and then deliberately disturbed the dust on the other two boxes. In this case, Adam’s (and Bob’s, for that matter) assessment of the probabilities will be based on how much Carl is to be trusted. So while we can easily cluck over silly Adam and praise Bob for seeing the obvious, it’s not that straightforward.
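One way to make that precise (a sketch of my own, introducing a trust parameter \(t\) that isn’t in the original puzzle): suppose Bob assigns probability \(t\) to Carl being honest, and supposes that a prank means the coin is really in the undisturbed third box. Then

\[
P(\text{box 1} \mid \text{dust on boxes 1 and 2}) = t \cdot \frac{1}{2} + (1 - t) \cdot 0 = \frac{t}{2}.
\]

Bob’s 1/2 amounts to full trust (\(t = 1\)), while Adam’s 1/3 amounts to ignoring the dust entirely.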
I offer yet another variant: There is a nature center with two walking paths, which start at the same point. The first path meanders to the east for a while and then ends at the Nature Center. The second path meanders to the west, passes by the Nature Center, then loops around a bit more before ending at the Nature Center. People who take the first path must stop for good at the Center; they’re not allowed to continue onto the second path’s loop. If people are randomly assigned to paths at the beginning, what is the probability that a given visitor is on the first path?
It’s fairly easy to see how the Nature Center administrator would come to think it’s 1/3, even though they know it’s 1/2: Each Second Path person passes the Center twice, so the administrator sees twice as many visits from Second Path people as from First Path people, and only a third of the visits they observe are from the first path. But this is a paradox, because they already know the assignment is 1/2.
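A minimal sketch in Python (names mine) shows both numbers at once: the assignment stays at 1/2 while the Center’s tally of passers-by drifts to 1/3 first-path.

```python
import random

def nature_center(visitors=100_000):
    """Compare path assignment with what the Nature Center observes."""
    on_first_path = 0
    first_path_sightings = 0
    total_sightings = 0
    for _ in range(visitors):
        if random.random() < 0.5:      # assigned to the first path
            on_first_path += 1
            first_path_sightings += 1  # seen at the Center once
            total_sightings += 1
        else:                          # assigned to the second path
            total_sightings += 2       # passes the Center, loops, ends at it
    print("on first path:       ", on_first_path / visitors)                # ~1/2
    print("first-path sightings:", first_path_sightings / total_sightings)  # ~1/3

nature_center()
```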
This is, I believe, part of the philosophical aspect that Galef refers to at the end of her video: Even if we can reason out an obvious logical truth, we will still be swayed by intuition based on faulty perception.
However, I don’t think this actually applies to Sleeping Beauty herself, which I’ll discuss below.
Thirder Logic
Adam Elga has presented a logic-based argument for the Thirder position. It relies on two claims:
- \(P(T_M) = P(T_T)\)
- \(P(H_M) = P(T_M)\)
The first claim is not at all controversial: It is common to both the thirders and the halfers. Since Sleeping Beauty has no way of knowing what day it is when she wakes up (because, if it’s Tuesday, she’s been given the amnesiac and has forgotten that Monday happened), the two tails awakenings are equally likely from her perspective.
The second claim is the controversial one. If Sleeping Beauty learns that it is Monday, then she knows there’s a 1/2 probability that the coin toss was heads. Therefore, goes the argument, conditional on its being Monday, the odds of heads are even, and so \(P(H_M) = P(T_M)\). Elga has proven his point.
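To spell out the final step: the three scenarios exhaust the possibilities, so their probabilities sum to 1, and the two equalities then force each to be 1/3:

\[
P(H_M) = P(T_M) = P(T_T) \quad\text{and}\quad P(H_M) + P(T_M) + P(T_T) = 1 \quad\Longrightarrow\quad P(H_M) = \frac{1}{3}.
\]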
Thus, the probability of each event is the same, each is \(\frac{1}{3}\), and so \(\frac{1}{2} = \frac{1}{3}\).
White and I are unconvinced. White offers an interesting twist involving a second random event: The coin toss determines how many chances there are for the second event to be triggered, and the second random event determines whether we actually wake Sleeping Beauty up on a given chance.
For instance, let’s say that we roll a six-sided die before each potential awakening (a one means she wakes up; anything else means she doesn’t), and let’s take the halfer probabilities of each scenario. The probability of her waking up in each scenario becomes:
- \(P(H_M) = \frac{1}{2} \times \frac{1}{6} = \frac{1}{12}\)
- \(P(T_M) = \frac{1}{4} \times \frac{1}{6} = \frac{1}{24}\)
- \(P(T_T) = \frac{1}{4} \times \frac{1}{6} = \frac{1}{24}\)
The relative probability of each particular scenario stays the same. However, the probability of her being woken up at all is lower for heads than for tails: For heads, it’s \(P(H) = \frac{1}{6} = \frac{6}{36}\), while for tails, it’s \(P(T) = 1 - \frac{5}{6}\times\frac{5}{6} = \frac{11}{36}\). Now there are 11:6 odds that, if she wakes up at all, it’s because we tossed tails. If we use a 100-sided die instead, the odds become 199:100 that, if she wakes up at all, it’s because we tossed tails. As our random selection device makes it less and less likely that we decide to wake her up, the probability that, if she wakes up at all, it’s because we tossed heads converges to 1/3.
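The convergence can be checked directly. Writing \(p\) for the per-attempt wake probability (\(p = \frac{1}{6}\) for the six-sided die), the probability that a wake-up means heads is

\[
P(H \mid \text{woken}) = \frac{\frac{1}{2}p}{\frac{1}{2}p + \frac{1}{2}\left(1 - (1 - p)^2\right)} = \frac{p}{3p - p^2} = \frac{1}{3 - p} \longrightarrow \frac{1}{3} \quad\text{as } p \to 0.
\]

For the six-sided die this gives \(\frac{6}{17}\), matching the 11:6 odds above; for the 100-sided die, \(\frac{100}{299}\).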
Sleeping Beauty’s Perspective
Elga’s argument relies on Sleeping Beauty’s assessment on Monday, knowing that it’s Monday. However, for Sleeping Beauty, it’s always Monday (h/t to my wife for this observation). If the problem is about how perception affects probability, there’s no issue from Sleeping Beauty’s perspective: When she wakes up on Tuesday, she has no way of knowing it’s not Monday until we explicitly tell her. If we decided to wake her up 300 times, administering the amnesiac each time, she would always think it’s Monday.
So while Elga’s probability statements are both correct from Sleeping Beauty’s perspective, \(T_M\) does not mean the same thing in each. In the first claim, it reflects the fact that, if tails is tossed, Sleeping Beauty has no way of knowing whether it’s Monday or Tuesday, and so she concludes the two probabilities are equal. In the second claim, \(T_M\) covers every case where Sleeping Beauty wakes up thinking it might be Monday.
Inherent in Elga’s assessment is a paradox: Sleeping Beauty concludes that the probability that heads were tossed is 1/3 based on the knowledge that the probability that heads were tossed is 1/2. It is begging the question in reverse: The only way Sleeping Beauty can conclude anything about the probability is if she already knows the probability, which then leads her to a false conclusion about it.
There’s another paradox in the Thirder position. If she concludes that there’s a 1/3 chance of being in each scenario, then she concludes that there’s a 1/3 probability that, when she wakes up, it’s Tuesday. A 1/3 probability that it’s heads and a 1/3 probability that it’s Tuesday, treated as independent, would yield the following probability table regarding her perception of a particular awakening:
|  | Monday | Tuesday |
| --- | --- | --- |
| Heads | \(\frac{2}{9}\) | \(\frac{1}{9}\) |
| Tails | \(\frac{4}{9}\) | \(\frac{2}{9}\) |
Being the clever but logically misguided lady that she is, Sleeping Beauty realizes that the upper-right cell represents an impossibility (heads means she is never awakened on Tuesday), so her assessment of equal 1/3 probabilities leads to a 2:4:2 odds ratio across the three remaining scenarios. Her determination that all three scenarios are equally likely thus leads her to a contradictory determination: There is now a 1/4 chance (odds of 2:6) that heads were tossed.
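Explicitly, renormalizing the three surviving cells:

\[
P(H) = \frac{2/9}{2/9 + 4/9 + 2/9} = \frac{2}{8} = \frac{1}{4}.
\]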
If she repeats this logic again, she gets this table:
|  | Monday | Tuesday |
| --- | --- | --- |
| Heads | \(\frac{3}{16}\) | \(\frac{1}{16}\) |
| Tails | \(\frac{9}{16}\) | \(\frac{3}{16}\) |
… from which she concludes that there’s a 1/5 chance that we flipped heads.
She has no reason to stop. Following this logic, in which she uses correct information about probability to derive incorrect information about probability, her credence in heads (and in Tuesday) shrinks toward zero, and she eventually concludes that it’s definitely Monday, that we definitely tossed tails, and that we’re about to put her to sleep and never wake her up again (since, by her logic, Tuesday never comes).
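A few lines of Python (a sketch of the iteration just described) make the spiral explicit. At each step her credence in heads equals her credence that it’s Tuesday; call it \(p\). Building the table, dropping the impossible heads-Tuesday cell, and renormalizing sends \(p\) to \(p/(1+p)\):

```python
from fractions import Fraction

p = Fraction(1, 3)  # credence in heads (and in Tuesday) after the first step
for step in range(1, 6):
    # Independence table without the impossible heads-Tuesday cell.
    heads_monday = p * (1 - p)
    tails_monday = (1 - p) * (1 - p)
    tails_tuesday = (1 - p) * p
    total = heads_monday + tails_monday + tails_tuesday
    p = heads_monday / total  # algebraically, p / (1 + p)
    print(f"step {step}: P(heads) = {p}")
# step 1: P(heads) = 1/4
# step 2: P(heads) = 1/5
# ... marching steadily toward zero
```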
Conclusion
There’s a perceptual error going on here, but it’s not on Sleeping Beauty’s part. As I said, for Sleeping Beauty it’s always Monday, so she has no reason to change her assessment of the coin. She can’t tell the difference between “tails and Monday” and “tails and Tuesday” because, on Tuesday, she doesn’t remember Monday happening.
And yet, there are numerous articles defending the Thirder position, and precious few defending the Halfer position.
I really think Elga’s argument gets at the heart of the matter, though perhaps not for the reason Elga thinks: It shows how we can start with obvious, plain conclusions about how probability works and reason our way into paradoxes. How can we start with the knowledge that Sleeping Beauty knows the coin toss is fair and conclude from that knowledge that Sleeping Beauty will think it’s unfair?
Galef’s simpler analysis, in which Sleeping Beauty realizes that there are three scenarios in which she wakes up and then arbitrarily (and falsely) concludes that the three have equal chances of happening, at least better illustrates how people unfamiliar with logic come to false conclusions.
To further illustrate human-centered effects on probabilistic assessments, consider this one: What is the probability of life occurring on a given planet? There are various scientific assessments, but the general consensus is that the probability that, at a randomly selected point in time, on a randomly selected planet, there is sentient life is very, very, very low. We have trouble understanding how low because, obviously, at this point in time, on this planet, there is sentient life. So we are likely to grossly overestimate the general probability (as we did throughout the Golden Age of science fiction, when we mused about the nature of sentient life on Venus and Mars).
And that’s an important discussion: If people who don’t know much about logic can come to terribly wrong conclusions about the probability of events occurring, then it’s going to be more difficult to convince people about impending tragic events (such as long-term anthropogenic climate change). When we have a cold winter, that’s immediate evidence against Global Warming (so goes the argument).
But what of the trained logicians who likewise stumble on the Sleeping Beauty and Monty Hall problems? Elga’s argument is more sophisticated than the one Galef presents, and yet it still concludes that Sleeping Beauty will come to a contradictory conclusion (the probability of heads is 1/2, therefore the probability of heads is 1/3). Even if the defense is that we’re talking about Sleeping Beauty’s perceptions, not factual probability, we’re still relying on Sleeping Beauty not noticing the contradiction in her own reasoning.
The greater point, for both expert and non-expert, is that we know that the probability ought to be one thing but we feel that it ought to be another thing. Monty Hall is confusing (in part) because, at the end, we’re faced with a prototypical choice: Door A or Door B. One has a car, one has a goat. Flip a coin. Pick one. And I can show you charts and explanations to prove to you that, no, Monty is having you on and you ought to switch doors, but that’s not convincing to the part of your human brain that says, “Look, it’s either-or.” Likewise, Sleeping Beauty could be in any of three time-coin toss combinations, and don’t they all have the same probability? Right?
Wrong. Forget the coin. We tell Sleeping Beauty we’re going to randomly wake her up some day in the future. We wake her up and ask her the probability that it’s the weekend.
Now, based on the perceptual probabilistic analysis, her answer will depend on how she slices up time. If she thinks, “Well, it’s either the weekend or not, those are two events that have the same chance of happening”, then she’ll conclude that there’s a 1/2 chance that it’s the weekend. If she thinks, “Well, there are seven days in the week, and there are two weekend days in the week”, then she’ll conclude that there’s a 2/7 chance that it’s the weekend. And if she’s pedantic and clever enough, she might add up all the special days in the year that qualify as weekend days (Memorial Day, Labor Day, Thanksgiving Day, Thanksgiving Friday, and so on), and conclude that the chances are slightly more than 2/7.
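Her middle answer is just \(\frac{2}{7} \approx 0.286\); the pedantic one follows the same pattern (a sketch; \(h\) is however many holiday days she decides count as weekend days):

\[
P(\text{weekend-ish}) \approx \frac{104 + h}{365},
\]

where 104 is the (approximate) number of weekend days in a year.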
And as I’ve said, it’s problematic that what she decides will be based on how she perceives the question and how well or poorly she understands logic. Part of my first reaction to this question was, “If the question is, ‘What does Sleeping Beauty believe about the coin toss?’, my answer is, ‘I don’t know, I’m not Sleeping Beauty.'”
Addendum
It’s fair to point out that there are phrasings of the problem for which the Thirder position makes sense. These generally focus on the question being asked only once during the experiment. For instance, imagine that, before the experiment is run, the experimenters decide to ask Sleeping Beauty the question in only one of the three possible situations (it’s Monday and Heads, it’s Monday and Tails, or it’s Tuesday and Tails). In that case, there’s a 1/3 chance that the question will be asked at all if the coin toss is Heads and a 2/3 chance that the question will be asked at all if the coin toss is Tails. In that scenario, it is not Sleeping Beauty’s being awakened but rather her being asked the question that should trigger her changing her assessment of the coin toss. When she wakes up, she believes there’s a 1/2 probability that the coin toss was heads; when she is asked the question (which she cannot answer until it is asked), there’s a 1/3 probability.
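A minimal sketch in Python (names mine) of this ask-once variant bears out the 1/3:

```python
import random

def ask_once(trials=100_000):
    """Pre-select one of the three situations; ask only if it occurs."""
    asked = 0
    heads_when_asked = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        chosen = random.choice(["heads-Monday", "tails-Monday", "tails-Tuesday"])
        # Heads: only the heads-Monday situation occurs.
        # Tails: both tails situations occur.
        occurs = (chosen == "heads-Monday") if heads else chosen.startswith("tails")
        if occurs:
            asked += 1
            heads_when_asked += heads
    print("P(heads | question asked):", heads_when_asked / asked)  # ~1/3

ask_once()
```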