Suppose you have two identical agents with shared finances, and three rooms A1, A2, B.

Flip a fair coin.

  • If the coin comes up H, put the agents in A1, A2.
  • If it comes up T, flip the coin again.
    • If it comes up H, put the agents in A1, B.
    • If it comes up T, put the agents in A2, B.

(At each point, flip another fair coin to decide the permutation, i.e. which agent goes to which room.)

Now to each agent in either A1 or A2, make the following offer:

Guess whether the first coin-flip came up heads or tails. If you correctly guess heads, you both get $1. If you correctly guess tails, you both get $3. No negative marking.

The agents are told which room they are in, and they know how the game works, but they are not told the results of any coin tosses, or where the other agent is, and they cannot communicate with the other agent.

...

In terms of expected winnings: if an agent precommits to always bet heads, its expected earnings are $1, but if it precommits to always bet tails, its expected earnings are $1.50. So it should bet tails, if it wants to win.
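For concreteness, here is a small Python sketch (the helper names are mine, not part of the setup) that reproduces the $1 and $1.50 figures by brute-forcing the three coin flips:

```python
# Quick enumeration of the precommitment expected values (a sketch; variable
# names are mine). Each correct guess pays BOTH agents, per the rules above.
from itertools import product
from fractions import Fraction

def expected_payout(guess):
    """Expected earnings of one agent if both agents precommit to `guess`."""
    total = Fraction(0)
    # coin1: first flip; coin2: second flip (ignored if coin1 == 'H');
    # perm: which agent goes to which room. All 8 outcomes are equally likely.
    for coin1, coin2, perm in product('HT', repeat=3):
        if coin1 == 'H':
            rooms = ['A1', 'A2']
        elif coin2 == 'H':
            rooms = ['A1', 'B']
        else:
            rooms = ['A2', 'B']
        if perm == 'T':
            rooms = rooms[::-1]          # rooms[i] is where agent i ends up
        payout = Fraction(0)             # payout received by agent 0
        for room in rooms:               # every bettor's correct guess pays both agents
            if room != 'B' and guess == coin1:
                payout += 1 if coin1 == 'H' else 3
        total += Fraction(1, 8) * payout
    return total

print(expected_payout('H'))  # 1
print(expected_payout('T'))  # 3/2
```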

But consider what happens when the agent actually finds itself in A1 or A2 (the only cases in which it is allowed to bet): if it finds itself in A1, it rules out the TT scenario, and if it finds itself in A2, it rules out the TH scenario. In either case, the probability of heads goes up to 2/3. So then it expects betting heads to provide an expected return of $1.33 (counting the matching heads bet of the identical agent in A2, which EDT treats as correlated with its own), and betting tails to provide an expected return of $1. So it bets heads.
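And a similar sketch for the conditional numbers, assuming the identical agent in the other A room mirrors the bet (that is the EDT/superrationality step); again the names are mine:

```python
# Sanity check of the conditional numbers (a sketch; assumes the other agent
# mirrors your bet, which is the EDT/superrationality step).
from itertools import product
from fractions import Fraction

def outcomes():
    for coin1, coin2, perm in product('HT', repeat=3):
        rooms = ['A1', 'A2'] if coin1 == 'H' else (['A1', 'B'] if coin2 == 'H' else ['A2', 'B'])
        if perm == 'T':
            rooms = rooms[::-1]
        yield coin1, rooms   # rooms[0] is agent 0's room, rooms[1] is agent 1's

conditioned = [(c1, rooms) for c1, rooms in outcomes() if rooms[0] == 'A1']
p_heads = Fraction(sum(c1 == 'H' for c1, _ in conditioned), len(conditioned))
print(p_heads)   # 2/3

def conditional_ev(guess):
    total = Fraction(0)
    for c1, rooms in conditioned:
        payout = 0
        for room in rooms:                     # both agents bet `guess` (mirroring)
            if room != 'B' and guess == c1:    # each correct guess pays both agents
                payout += 1 if c1 == 'H' else 3
        total += payout
    return total / len(conditioned)

print(conditional_ev('H'))  # 4/3  (~ $1.33)
print(conditional_ev('T'))  # 1
```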

(There are no Sleeping Beauty problems here: the probability genuinely does go up to 2/3, because new information -- the label of the room -- has been introduced. BTW, I later learned this is basically equivalent to the scenario in Conitzer 2017, except it avoids talking about memory wiping or splitting people in two or anything else like that.)

What's going on? Is this actually a way to beat superrational agents, or am I missing something? Because clearly tails is the winning strategy, but heads is what EDT tells the agent to bet.


4 Answers

Joachim Bartosik


You're ignoring that with probability 1/4 the agent ends up in room B. In that case you don't get to decide, but you still collect a reward: $3 if the other agent guesses T, or $0 if the other agent guesses H.

So basically, guessing H increases your own expected reward at the expense of the other agent's expected reward (before you actually went to a room, you didn't know whether you'd be the agent who gets to decide, so your expected reward also included the share that comes from the agent who doesn't get an opportunity to make a guess).

That's not important at all. The agents in rooms A1 and A2 themselves would do better to choose tails than to choose heads. They really are being harmed by the information.

Dagon
It's totally important. The knowledge that you get paid for guessing T in the cases you're never asked the question is extremely relevant here. It changes the EV from 1/3 * 3 = 1 to 1/3 * 3 + 1/4 * 3 = 1.75.
Abhimanyu Pallavi Sudhir
No, it doesn't. There is no 1/4 chance of anything once you've found yourself in Room A1. You do acknowledge that the payout for the agent in room B (if it exists) from your actions is the same as the payout for you from your own actions, which if the coin came up tails is $3, yes?

I see, that is indeed the same principle (and also simpler; we don't need to worry about whether we "control" symmetric situations).

Charlie Steiner
Yeah I'm still not sure how to think about this sort of thing short of going full UDT and saying something like "well, imagine this whole situation was a game - what would be the globally winning strategy?"

Dagon


I don't see the problem, and I suspect you're ignoring that the anthropic probability (the 2/3 probability of heads given that you're in A1) applies _ONLY_ in those worlds where you get to bet. Your payout is reduced whether you're in B or you're in A and bet wrong.

I _think_ that "always bet T" maximizes each agent's payout.

There are only 3 coin flips, so 8 possibilities (some of which collapse into the same case, but that doesn't matter). Payouts are for agent A (meaning 1+1 is 1 for A's bet and 1 for B's bet, both paid to A). Agent B is symmetrical, so not shown. PayoutHiffA1 is the payout if the strategy is to bet H in room A1 and T in A2. TOTAL PAY is under the rules as given; AGENT PAY counts ONLY the agent's own contribution, with no cross-agent payments.

COINS  ROOMS  PayoutH  PayoutT  PayoutHiffA1
HHH    A1,A2  1+1      0+0      1+0
HHT    A2,A1  1+1      0+0      0+1
HTH    A1,A2  1+1      0+0      1+0
HTT    A2,A1  1+1      0+0      0+1
THH    A1,B   0+0      3+0      0+0
THT    B,A1   0+0      0+3      0+0
TTH    A2,B   0+0      3+0      3+0
TTT    B,A2   0+0      0+3      0+3

TOTAL PAY     8        12       10
AGENT PAY     4        6        5
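If it helps, here is a short Python sketch (my own naming, not from the table above) that reproduces the TOTAL PAY and AGENT PAY rows by enumerating the same 8 outcomes:

```python
# Reproducing the table's totals (a sketch; names are mine). "total" counts
# both agents' contributions to agent A's payout, "agent" only A's own bet.
from itertools import product

def payoffs(strategy):
    total = agent = 0
    for c1, c2, c3 in product('HT', repeat=3):
        rooms = ['A1', 'A2'] if c1 == 'H' else (['A1', 'B'] if c2 == 'H' else ['A2', 'B'])
        if c3 == 'T':
            rooms = rooms[::-1]        # rooms[0] is agent A's room
        for i, room in enumerate(rooms):
            if room == 'B':
                continue               # no bet in room B
            if strategy(room) == c1:   # correct guess pays both agents
                prize = 1 if c1 == 'H' else 3
                total += prize         # paid to agent A
                if i == 0:
                    agent += prize     # ...from A's own bet
    return total, agent

print(payoffs(lambda room: 'H'))                           # (8, 4)
print(payoffs(lambda room: 'T'))                           # (12, 6)
print(payoffs(lambda room: 'H' if room == 'A1' else 'T'))  # (10, 5)
```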

I don't understand what you are saying. If you find yourself in Room A1, you simply eliminate the last two rows (the TT scenarios), so the total payout for Tails drops to 6.

If you find yourself in Room A1, you do find yourself in a world where you are allowed to bet. It doesn't make sense to consider the counterfactual, because you already have gotten new information.

player_03


I'm going to rephrase this using as many integers as possible because humans are better at reasoning about those. I know I personally am.

Instead of randomness, we have four teams that perform this experiment. Teams 1 and 2 represent the first flip landing on heads. Team 3 is tails then heads, and team 4 is tails then tails. No one knows which team they've been assigned to.

Also, instead of earning $1 or $3 for both participants, a correct guess earns that same amount once. They still share finances so this shouldn't affect anyone's reasoning; I just don't want to have to double it.

Team 1 makes 2 guesses. Each "heads" guess earns $1, each "tails" guess earns nothing.
Team 2 makes 2 guesses. Each "heads" guess earns $1, each "tails" guess earns nothing.
Team 3 makes 1 guess. Guessing "heads" earns nothing, guessing "tails" earns $3.
Team 4 makes 1 guess. Guessing "heads" earns nothing, guessing "tails" earns $3.

If absolutely everyone guesses "heads," teams 1 and 2 will earn $4 between them. If absolutely everyone guesses "tails," teams 3 and 4 will earn $6 between them. So far, this matches up.

Now let's look at how many people were sent to each room.

Three people visit room A1: one from team 1, one from team 2, and one from team 3. 2/3 of them are there because the first "flip" was heads.
Three people visit room A2: one from team 1, one from team 2, and one from team 4. 2/3 of them are there because the first "flip" was heads.
Two people visit room B: one from team 3 and one from team 4. They don't matter.
The three visitors to A1 know they aren't on team 4, thus they can subtract that team's entire winnings from their calculations, leaving $4 vs. $3.
The three visitors to A2 know they aren't on team 3, thus they can subtract that team's entire winnings from their calculations, leaving $4 vs. $3.

Do you see the error? Took me a bit.

If you're in room A1, you need to subtract more than just team 4's winnings. You need to subtract half of team 1 and team 2's winnings. Teams 1 and 2 each have someone in room A2, and you can't control their vote. Thus:

Three people visit room A1: one from team 1, one from team 2, and one from team 3. If all three guess "heads" they earn $2 in all. If all three guess "tails" they earn $3 in all.
Three people visit room A2: one from team 1, one from team 2, and one from team 4. If all three guess "heads" they earn $2 in all. If all three guess "tails" they earn $3 in all.

Guessing "tails" remains the best way to maximize expected value.

---

The lesson here isn't so much about EDT agents as about humans and probabilities. I didn't write this post because I'm amazing and you're a bad math student; I wrote it because without it, I wouldn't have been able to figure this out either.

Whenever this sort of thing comes up, try to rephrase the problem. Instead of 85%, imagine 100 people in a room, with 85 on the left and 15 on the right. Instead of truly random experiments, imagine the many-worlds interpretation, where each outcome is guaranteed to come up in a different branch. (And try to have an integer number of branches, each representing an equal fraction.) Or use multiple teams like I did above.

I don't think this is right. A superrational agent exploits the symmetry between A1 and A2, correct? So it must reason that an identical agent in A2 will reason the same way as it does, and if it bets heads, so will the other agent. That's the point of bringing up EDT.

player_03
Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.

It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On average, this maximizes winnings.

Except this isn't the same situation at all. With team 4 eliminated from the get-go, the remaining teams can do even better than $4 or $3. Teammates in room A2 know for a fact that the coin landed heads, and they automatically earn $1. Teammates in room A1 are no longer responsible for their teammates' decisions, so they go for the $3. Thus teams 1 and 2 both take home $1 while team 3 takes home $3, for a total of $5.

Maybe that's the difference. Even if you know for a fact that you aren't on team 4, you also aren't in a world where team 4 was eliminated from the start. The team still needs to factor into your calculations... somehow. Maybe it means your teammate isn't really making the same decision you are? But it's perfectly symmetrical information. Maybe you don't get to eliminate team 4 unless your teammate does? But the proof is right in front of you. Maybe the information isn't symmetrical because your teammate could be in room B?

I don't know. I feel like there's an answer in here somewhere, but I've spent several hours on this post and I have other things to do today.
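A tiny check of the "team 4 never existed" variant (a sketch, using the same team setup as above, just without team 4; the helper is mine):

```python
# Combined winnings of all remaining teams under a given per-room strategy.
teams = {1: ('H', ['A1', 'A2']),
         2: ('H', ['A1', 'A2']),
         3: ('T', ['A1', 'B'])}

def total(strategy):
    """Sum of every team's winnings when each bettor follows `strategy(room)`."""
    out = 0
    for flip, rooms in teams.values():
        prize = 1 if flip == 'H' else 3
        out += sum(prize for r in rooms if r != 'B' and strategy(r) == flip)
    return out

print(total(lambda r: 'H'))                        # 4
print(total(lambda r: 'T'))                        # 3
print(total(lambda r: 'H' if r == 'A2' else 'T'))  # 5  (A2 knows it's heads; A1 goes for the $3)
```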
player_03
I do want to add - separately - that superrational agents (not sure about EDT) can solve this problem in a roundabout way. Imagine if some prankster erased the "1" and "2" from the signs in rooms A1 and A2, leaving just "A" in both cases. Now everyone has less information and makes better decisions. And in the real contest, (super)rational agents could achieve the same effect by keeping their eyes closed. Simply say "tails," maximize expected value, and leave the room never knowing which one it was.

None of which should be necessary. (Super)rational agents should win even after looking at the sign. They should be able to eliminate a possibility and still guess "tails." A flaw must exist somewhere in the argument for "heads," and even if I haven't found that flaw, a perfect logician would spot it no problem.