Sleeping Beauty is put to sleep on Sunday. If the coin lands on heads, she is awakened only on Monday. If it lands on tails, she is awakened on Monday and Tuesday, and has her memory erased between them. Each time she is awoken, she is asked how likely it is that the coin landed on tails.

According to one theory, she would figure it's twice as likely to be her if the coin landed on tails, so it's now twice as likely to be tails. According to another, she would figure that the world she's in isn't eliminated by heads or tails, so it's equally likely. I'd like to use the second possibility, and add a simple modification:

The coin is tossed a second time. She's shown the result of this toss on Monday, and the opposite on Tuesday (if she's awake for it). She wakes up, and believes that there are four equally probable results: HH, HT, TH, and TT. Suppose she is then shown heads. This will happen at some point unless the coins land HT; in that case, she is only woken once, and is shown tails. She now spreads the probability among the remaining three outcomes: HH, TH, and TT. She is asked how likely it is that the first coin landed on heads. She gives 1/3. Thanks to this modification, she got the same answer as if she had used SIA.
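Here is a minimal sketch of that update (Python, my own framing of the setup above; the function and variable names are mine):

    from fractions import Fraction

    # Outcomes are (first coin, second coin), each with prior probability 1/4.
    outcomes = ["HH", "HT", "TH", "TT"]
    prior = {o: Fraction(1, 4) for o in outcomes}

    def shown_heads_at_some_point(outcome):
        first, second = outcome
        # First coin H: awake Monday only, shown the second coin's result.
        # First coin T: awake Monday (shown the second coin) and Tuesday (shown
        # the opposite), so she sees heads at some point no matter what.
        return second == "H" if first == "H" else True

    # Condition on "shown heads at some point", which eliminates only HT.
    consistent = {o: p for o, p in prior.items() if shown_heads_at_some_point(o)}
    total = sum(consistent.values())
    posterior = {o: p / total for o, p in consistent.items()}

    print(posterior)                                            # HH, TH, TT each 1/3
    print(sum(p for o, p in posterior.items() if o[0] == "H"))  # 1/3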

Now suppose that, instead of being told the result of the second coin toss, she made some other observation. Perhaps she observed how tired she was when she woke up, or how long it took to open her eyes, or something else. In any case, if it's an unlikely observation, it probably won't happen twice, so she's about twice as likely to make it if she wakes up twice.

Edit: SIA and SSA don't seem to be what I thought they were. In both cases, you get approximately 1/3. As far as I can figure, the reason Wikipedia states that you get 1/2 with SSA is that it uses Sleeping Beauty during the course of this experiment as the entire reference class (rather than all existent observers). I've seen someone use this logic before (they only updated on the existence of such an observer). Does anyone know what it's called?

25 comments

The modification you're proposing is analogous to a modification (I think originally due to Michael Titelbaum) called "Technicolor Beauty", in which Beauty sees either a red or a blue piece of paper in her room when she awakens on Monday (determined by a fair coin toss, independently of the "main" coin toss that decides if she's woken once or twice), and on Tuesday (if she's awakened) sees a piece of paper of whichever color she didn't see on Monday. I'll use this example rather than yours because it requires less specification about which coin toss we're talking about. Let "RB" be the hypothesis that the "main" coin toss landed Tails and Red was shown on Monday and Blue was shown on Tuesday. Let BR be the same, except Blue on Monday and Red on Tuesday.

Titelbaum used this to generate the "thirder" (SIA) answer to the problem, but SSA doesn't actually give the same answer, contrary to what you suggest. Even though Beauty is twice as likely to observe red paper at some point in the experiment, at no point do her conditional probabilities (e.g. for observing red, conditional on Heads or Tails) differ. Briefly: conditional on Heads, she expects to see red with probability 0.5 (because the red/blue coin toss was fair). Conditional on Tails, suppose Beauty has her eyes shut while calculating her conditional probabilities for observing red upon opening them, and evenly splits her (conditional) credence between it being Monday and it being Tuesday (SSA requires this). Now, if it's Monday, Beauty's credence in RB and BR is 0.5 for both, so she expects to see red with probability 0.5. Same goes for Tuesday.
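To spell out that bookkeeping, a rough sketch (my own framing, not Titelbaum's; the names are mine):

    from fractions import Fraction

    half = Fraction(1, 2)  # the red/blue "paper coin" is fair: RB and BR each 1/2

    # Conditional on Heads (awake Monday only): she sees red iff the paper coin is RB.
    p_red_given_heads = half * 1 + half * 0

    # Conditional on Tails, SSA splits her credence evenly between "it is Monday"
    # and "it is Tuesday" before she opens her eyes.
    p_red_given_tails = (
        half * (half * 1 + half * 0)    # it's Monday: red iff RB
        + half * (half * 0 + half * 1)  # it's Tuesday: red iff BR
    )

    print(p_red_given_heads, p_red_given_tails)  # 1/2 1/2
    # Equal likelihoods either way, so seeing red leaves the SSA answer at 1/2.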

It seems I misunderstood what SSA and SIA were. I have corrected this.

For what it's worth, they give roughly the same answers as long as there is a large number of observers that aren't in the experiment. The paper has nothing to do with it.

[anonymous]

Alright, now it's time for my comment about why saying "I'd like to use the SSA" (or, for that matter, "I'd like to use the SIA") is misguided.

Suppose every time Beauty wakes up, she is asked to guess whether the coin landed Heads or Tails. She receives $3 for correctly saying Heads, and $2 for correctly saying Tails.

The SIA says Pr[Heads] = 1/3 and Pr[Tails] = 2/3, so saying Heads has an expected value of $1, and Tails an expected value of $1.33. On the other hand, the SSA says Pr[Heads]=Pr[Tails]=1/2, so saying Heads is expected to win $1.50, while saying Tails only wins $1.

These indicate different correct actions, and clearly only one of them can be right. Which one? Well, suppose Beauty decides to guess Heads. Then she wins $3 when Heads comes up. On the other hand, if Beauty decides to guess Tails, she wins $4 when Tails comes up ($2 on each of her two awakenings). Since the coin is fair, guessing Tails does better, so the SIA gives the "correct probability" in this case.

On the other hand, suppose the rewards are different. Now, suppose Beauty receives money on Wednesday -- $3 if she ever correctly said Heads, and $2 if she ever correctly said Tails. In this case, the optimal strategy for Beauty is to act as though Pr[Heads]=Pr[Tails]=1/2, as suggested by the SSA.
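A quick sanity check of both payoff schemes (exact expected values per run of the experiment, assuming a fair coin; the function names are mine):

    from fractions import Fraction

    half = Fraction(1, 2)

    def per_awakening(guess):
        # Paid on every matching awakening: one awakening if Heads, two if Tails.
        return half * (3 if guess == "H" else 0) + half * (0 if guess == "H" else 2 * 2)

    def per_experiment(guess):
        # Paid once, on Wednesday, if she ever guessed correctly.
        return half * (3 if guess == "H" else 0) + half * (0 if guess == "H" else 2)

    print(per_awakening("H"), per_awakening("T"))    # 3/2 2  -> guess Tails, matching the 1/3 odds
    print(per_experiment("H"), per_experiment("T"))  # 3/2 1  -> guess Heads, matching the 1/2 odds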

Of course, proponents of either assumption, assuming a working knowledge of probability, are going to make the correct guess in both cases, if they know how the game works; suddenly there is no more disagreement. I therefore argue that the things that these assumptions call Pr[Heads] and Pr[Tails] are not the same things. The SIA calculates the probability that the current instance of Beauty is waking up in a Heads-world or a Tails-world. The SSA calculates the probability that some instance of Beauty will wake up in a Heads-world or a Tails-world.

The way I phrase it makes them sound more different than they are, because this latter event is also the event that every instance of Beauty will wake up in a Heads-world or a Tails-world. Since it's certain that the current instance of Beauty wakes up in the same world that every instance of Beauty wakes up in, it's unclear why these probabilities are different.

This ambiguity disappears once you stop talking about some hand-wavy notion of probability that feels like it's perfectly okay to disagree about, and fix a concrete situation in which you need the correct probability in order to win, as illustrated in the example above.

(One final comment: by using payoffs of $2 and $3, I am technically only determining whether the probability in question is above or below 2/5. Since this separates 1/2 and 1/3, it is all that is necessary here, but in principle you could also use log-based payoffs to make Beauty give an actual probability as an answer.)

Are we dealing with the optimal strategy for her to decide on beforehand, or the one she should decide on mid-experiment?

She may have evidence in the middle of the experiment that she didn't have before; as such, the optimal choice may be different. It's similar to Parfit's hitchhiker.

[anonymous]

But in this case, she doesn't get any evidence in the middle of the experiment that she didn't before. If she did, then the optimal choice could be different. But she doesn't.

Yes she does. She finds out she's in the middle of the experiment. Before, she found out she was at the beginning of the experiment. Being at the beginning of the experiment has the same probability either way, but being in the middle does not.

[anonymous]

No matter what, she can decide on the optimal strategy for what to do once she wakes up. What information, exactly, does she get in the middle of the experiment that she cannot anticipate beforehand?

What information, exactly, does she get in the middle of the experiment that she cannot anticipate beforehand?

That she's the one in the experiment. She can't anticipate it beforehand because she doesn't know the probability of being in the experiment. It depends on whether the coin lands on heads or tails.

Imagine someone takes a deck of cards. They then flip a coin. On heads, they add a joker. On tails, they add two. They don't show you the result. You then draw a card. Can you anticipate the probability of getting a joker? If the only observers in the universe were created solely for that experiment, and each of them was given one of the cards, would that change anything?
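For concreteness, a sketch with assumed numbers (a standard 52-card deck, so 53 or 54 cards after the jokers are added; the framing and names are mine):

    from fractions import Fraction

    half = Fraction(1, 2)
    p_joker_given_heads = Fraction(1, 53)  # one joker added on heads
    p_joker_given_tails = Fraction(2, 54)  # two jokers added on tails

    # You can anticipate the overall chance of drawing a joker...
    p_joker = half * p_joker_given_heads + half * p_joker_given_tails

    # ...but actually drawing one is evidence about the coin, since jokers are
    # about twice as likely under tails.
    p_heads_given_joker = (half * p_joker_given_heads) / p_joker
    print(float(p_joker))              # about 0.028
    print(float(p_heads_given_joker))  # 0.3375, close to 1/3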

[anonymous]

She can still anticipate the possibility of being in the experiment and therefore make a strategy for what to do if she turns out to be in the experiment. That is all I'm doing here.

If she comes up with a strategy for what to do if she wakes up in the experiment, and then wakes up in the experiment, she doesn't get any additional information that would change her strategy.

Consider Parfit's hitchhiker. If your strategy is to pay him the money, you'll do better, but when it comes time to implement that strategy, you have information that makes it pointless (you know he has already picked you up, and that can no longer be undone in response to your not paying).

In this case, the evidence is certain, but it can be modified so that the amount of evidence you have before making the decision is arbitrary.

[anonymous]

Parfit's hitchhiker can still predict that when he has been picked up, he will have the option of not paying. Is there anything in the argument I actually make that you are objecting to?

[anonymous]

Okay, let's start over. Your math is incorrect. When assuming SSA:

  1. In the HH outcome, you are certain to be shown heads. Pr[shown H | HH] = 1.
  2. In the HT outcome, you will never be shown heads. Pr[shown H | HT] = 0.
  3. In the TH outcome, you have a 50% chance of being the observer who is shown heads. Pr[shown H | TH] = 1/2.
  4. In the TT outcome, you have a 50% chance of being the observer who is shown heads. Pr[shown H | TT] = 1/2.

Thus the probability of H* (heads on the first coin), given that she is shown heads, is (1 + 0)/(1 + 0 + 1/2 + 1/2) = 1/2. The extra coin toss cancels out.
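The same arithmetic, spelled out (a sketch; the names are mine):

    from fractions import Fraction

    prior = Fraction(1, 4)  # each two-coin outcome is equally likely a priori
    p_shown_heads = {"HH": Fraction(1), "HT": Fraction(0),
                     "TH": Fraction(1, 2), "TT": Fraction(1, 2)}

    posterior_first_coin_heads = (
        sum(prior * p_shown_heads[o] for o in ("HH", "HT"))
        / sum(prior * p_shown_heads[o] for o in p_shown_heads)
    )
    print(posterior_first_coin_heads)  # 1/2 -- the extra coin toss cancels out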

Edit: this is essentially the same argument as in AlexSchell's comment.

you have a 50% chance of being the observer

Once you introduce that, you have SIA. If you can talk about having a 50% chance of being the observer who is shown heads, wouldn't you also have a specific chance of being the observer who wakes up in a room running that experiment?

Looking into it more, it seems SSA is the one I agree with. I just always assume that there are more people in her reference class that aren't in the experiment, so I get a different answer than what Wikipedia gave. No wonder I "got them confused" at first.

How exactly do you get 50% in this thought experiment?

[anonymous]

Okay. Read this line carefully. I'm taking it straight from Wikipedia and you linked to the Wikipedia articles, so you must have read it. "All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class."

In this case, conditioned on the two coinflips being TH, the observers in the reference class are all the observers in the TH world, but after waking up in the experiment, we know that only two possible observers remain: Beauty waking up on Monday, and Beauty waking up on Tuesday. We have no information to select one of these over the other and so each is 50% likely. Therefore the probability of being shown heads, given that the two coinflips are TH, is 50% by the SSA.

The difference between SSA and SIA is that in SSA we first randomly pick one of the possible worlds, and then pick one of the possible observers in that world. In SIA, we randomly pick one of the possible observers in all worlds.
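A rough Monte Carlo sketch of the two picking procedures for the unmodified problem (my own illustration, with a fair coin; the names are mine):

    import random

    def ssa_pick():
        # SSA: first pick a world, then pick an awakening within that world.
        world = random.choice(["H", "T"])
        day = "Mon" if world == "H" else random.choice(["Mon", "Tue"])
        return world, day

    def sia_pick():
        # SIA: pick among all possible awakenings across worlds, each weighted
        # by the probability of its world (here 1/2 for each of the three).
        return random.choices(
            [("H", "Mon"), ("T", "Mon"), ("T", "Tue")], weights=[1, 1, 1]
        )[0]

    N = 100_000
    print(sum(ssa_pick()[0] == "H" for _ in range(N)) / N)  # about 0.5
    print(sum(sia_pick()[0] == "H" for _ in range(N)) / N)  # about 0.33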

Edit: to emphasize -- it is irrelevant how many other observers there are besides Sleeping Beauty. Once Beauty wakes up and looks around, she knows that she is Sleeping Beauty as opposed to, say, the person running the experiment. However, she has no additional information on which instance of Sleeping Beauty she is, which is what the thought experiment is all about.

We could postulate some additional number of observers that wake up in the same situation for completely different reasons -- say, someone else is running a simulation of lots of people waking up. In that case, the probabilities of 1/3 and 1/2 are conditional probabilities -- conditional on Sleeping Beauty actually being part of this experiment. In the original formulation of the problem, we do not postulate these additional observers waking up -- because their existence is independent of the coin flips, including them or not does not differentiate between the SIA and the SSA, so we either don't think about them or we deal with the conditional probabilities.

What I'm arguing against is apparently neither SIA nor SSA. I made a mistake. Are we arguing about what I originally intended to argue about, or about my statement that they both predict 1/3?

My intent was to argue that, if you only update on the existence of an observer, rather than on anything about how unlikely it is to be them, the probability will work out the same.

If you would like to discuss why SIA and SSA give the same result:

For simplicity, we'll assume 1 trillion observation days outside the experiment.

SIA: Sleeping Beauty wakes up in this experiment. There are 2 trillion and 3 possible observers, three of whom wake up here. Of those three, one woke up in a universe with heads, and the other two in a universe with tails. The probability of being the one with heads is 1/3.

SSA: Sleeping Beauty has an even prior, so the odds ratio of heads to tails is 1:1. She then wakes up in this experiment. If the coin landed on heads, there's a 1 in (1 trillion + 1) chance of this; if it landed on tails, there's a 2 in (1 trillion + 2) chance. That's an odds ratio of 500,000,000,001:1,000,000,000,001 for heads. Multiplying by the 1:1 prior leaves 500,000,000,001:1,000,000,000,001, so the total probability of heads is about 1/3 + 2×10^-13.
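Checking that arithmetic exactly (a sketch under the same assumption of 10^12 observation days outside the experiment in either world; the names are mine):

    from fractions import Fraction

    N = 10**12
    p_in_experiment_given_heads = Fraction(1, N + 1)
    p_in_experiment_given_tails = Fraction(2, N + 2)

    # Even prior, then update on "I woke up inside the experiment".
    posterior_heads = p_in_experiment_given_heads / (
        p_in_experiment_given_heads + p_in_experiment_given_tails
    )
    print(float(posterior_heads))                   # 0.33333333333355...
    print(float(posterior_heads - Fraction(1, 3)))  # about 2.2e-13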

Your problem seems to be updating on the fact that she's in the experiment without taking into account that this is about twice as likely if the coin landed on tails.

[anonymous]

...huh. You have a point. I'll have to think about this for a bit, but it seems right, and if this is what you've been trying to get at this whole time I think everyone may have misunderstood you.

Because there's no information flow between the coins, to stay self-consistent the "always assign 50% to heads" method has to not change its probabilities under irrelevant information. So this isn't so much a reconciliation as a demonstration that always assigning 50% to heads violates an axiom.

under irrelevant information

Irrelevant information is just information that doesn't change the probabilities. If this does, it's relevant.

So this isn't so much a reconciliation

It's not a reconciliation. They get about the same results, not exactly the same.

Irrelevant information is just information that doesn't change the probabilities. If this does, it's relevant.

Irrelevant information is just information that doesn't change the probabilities as long as you follow the axioms of probability. If we speculate that always assigning 50% to heads can be the wrong method, i.e. axiom-violating, then deciding which information is relevant based on it is putting the cart before the horse.

What other ways are there to tell whether information is relevant? Bayes' rule is a good tool for it, because you know it follows the right axioms of probability. Here is the path I followed: If the probabilities of the two "worlds" are 1/3 and 2/3, you expect to see 50% heads and 50% tails on the second coin. If the probabilities in the two worlds are 1/2 and 1/2, you still expect 50/50. The probabilities on the second coin are then 50/50 no matter which rule is right. If we see heads or tails, then Bayes' rule says we should update our probabilities by a factor of P(B|A)/P(B), or 0.5/0.5, or 1. No change. Since we trust Bayes' rule to follow the axioms of probability, something that disagrees with it doesn't.
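A sketch of that check (my own framing; the names are mine): whatever prior you assign to the tails-world, the second coin is expected to come up 50/50, so the Bayes factor from seeing it is 1.

    from fractions import Fraction

    for p_tails_world in (Fraction(2, 3), Fraction(1, 2)):
        p_heads_world = 1 - p_tails_world
        # In either world she is shown a fair coin's result (or its opposite),
        # so "shown heads today" has probability 1/2 under both hypotheses.
        p_shown_heads = p_heads_world * Fraction(1, 2) + p_tails_world * Fraction(1, 2)
        bayes_factor = Fraction(1, 2) / p_shown_heads  # P(B|A) / P(B)
        print(p_shown_heads, bayes_factor)  # 1/2 1 in both cases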

Or you might go the conservation of expected evidence route. If the second coin landing heads makes you change your probabilities one way, conservation of expected evidence (another thing that has a fairly short and trustworthy derivation from the axioms of probability) says that the coin landing tails should make you change your probabilities the opposite way. Does it?

The underlying reason why the information is irrelevant is that in our causal world, you don't get a correlation (i.e. information about one event from knowing the other) without a causal path between the two events - like a common ancestor, or conditioning on a common causal descendant. But the coinflips were independent when flipped, and we didn't condition on any causal descendants of the second coinflip (like, say, creating more copies).

[anonymous]

Leaving aside the SSA vs. SIA question, which I believe is fundamentally misguided but what do I know, you are incorrect here:

Thanks to this modification, she got the same answer as if she had used SSA.

When we make this modification, SIA gives the answer 1/3 (same as without it). SSA gives the answer 1/2 (same as without it). The modification does nothing. In the case of the unlikely observation (which doesn't have to be unlikely, actually -- we can compute the answer in terms of the unknown probability and it cancels out anyway), there is also no difference.

P(HH) = 1/3

P(HT) = 0

P(TH) = 1/3

P(TT) = 1/3

P(H*) = P(HH) + P(HT) = 1/3

Did I make an error? Should all of the probability have been moved to the HH case? If so, why? I understand that SSA essentially just tells you to eliminate the impossible. There is nothing impossible about TT or HH.

which I believe is fundamentally misguided

If you have another suggestion, I'd like to see it.

[anonymous]

Oh, you edited the post (the version I read stated you were using the SIA, and getting the same answer as the SSA). I thought you were saying that with this modification, SIA and SSA gave the same answer, which is false.

I'm not entirely sure what you're saying now, and figuring it out will have to wait until tomorrow. For now, since my objection appears to have been based on a typo, I retract it.

It seems I didn't completely correct my error. Fixed now.

I was saying they gave the same answer. My error was just calling them by the wrong names.

I'd like to use the SSA

Cthulhu's woundings, why?

a simple modification...Thanks to this modification, she got the same answer as if she had used SIA.

Assume it's -40 degrees Celsius outside. Thanks to this assumption, it's also -40 degrees Fahrenheit; we got the same answer as if we were using Fahrenheit.

In any case, if it's an unlikely observation, it probably won't happen twice, so she's about twice as likely to make it if she wakes up twice.

If it's a likely observation, such as noting that she is awake, then she is twice as likely to make it if she wakes up twice.

Cthulhu's woundings, why?

To show that it gives the same answers.

Thanks to this assumption ... we got the same answer as if we were using Fahrenheit.

I showed at the end that the analogue of this assumption is, for all intents and purposes, always going to hold.

If it's a likely observation, such as noting that she is awake, then she is twice as likely to make it if she wakes up twice.

It's 100% either way. It's not twice as likely.