I am an easily bored Omega-level being, and I want to play a game with you.
I am going to offer you two choices.
Choice 1: You spend the next thousand years in horrific torture, after which I restore your local universe to precisely the state it is in now (wiping your memory in the process), and hand you a box with a billion dollars in it.
Choice 2: You spend the next thousand years in exquisite bliss, after which I restore your local universe to precisely the state it is in now (wiping your memory in the process), and hand you a box with an angry hornet's nest in it.
Which do you choose?
Now, you blink. I smile and inform you that you have already made your choice, and hand you your box. Which choice do you hope you made?
You object? Fine. Let's play another game.
I am going to offer you two choices.
Choice 1: I create a perfect simulation of you, and run it through a thousand simulated years of horrific torture (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with a billion dollars in it.
Choice 2: I create a perfect simulation of you, and run it through a thousand simulated years of exquisite bliss (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with an angry hornet's nest in it.
Which do you choose?
Now, I smile and inform you that I already made a perfect simulation of you and asked it that question. Which choice do you hope it made?
Let's expand on that. What if instead of creating one perfect simulation of you, I create 2^^^^3 perfect simulations of you? Which do you choose now?
What if, instead of a thousand simulated years, I let the simulations run for 2^^^^3 simulated years each? Which do you choose now?
I have the box right here. Which do you hope you chose?
Basically, yes. This is what EY called "symmetrism" in Three Worlds Collide, and what Greg Egan described in one of his short stories: a more sophisticated version of "do unto others...".
If this is the point, I object to the way it is conveyed by the post.
First, the post's title suggests that this is about values, while the problem is really one of game theory. (One may make a case for including the symmetric preferences among one's terminal values, but that isn't the only possible solution.)
Second, thought experiments should limit counter-intuitive elements to the necessary minimum. We may need simulations here, but why a thousand years of torture and 2^^^^3 simulations? These details are an unnecessary distraction from the main point, if the main point is what you say it is rather than something about scope insensitivity.
Third and most importantly: in similar thought experiments, Omega is assumed to be completely trustworthy. But here it is not trustworthy towards the simulations. It tells them, too, that it is going to simulate them and torture the (second-order) simulations depending on their (the first-order simulations') decision, but that isn't true: there are no second-order simulations, and the first-order simulations are tortured based on the decision of the unsimulated participant. So, if the participant accepts anthropic reasoning in this case, there is probability p = 1/n that he is "real" and probability p = (n-1)/n that he is simulated and Omega is not trustworthy. If, on the other hand, Omega didn't tell the simulations the same thing it told the "real" person, then what Omega said could be used to distinguish the simulated case from the real one, and the anthropic reasoning leading to the conclusion that one is likely a simulation would no longer apply.
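For concreteness, here is the calculation I have in mind (a minimal sketch, assuming n copies in total, one unsimulated and n−1 simulated, all told the same story, with a uniform credence over which copy one is):

$$P(\text{real}) = \frac{1}{n}, \qquad P(\text{simulated and lied to}) = \frac{n-1}{n}.$$

On that assumption, my credence that Omega is actually being truthful to me shrinks toward zero as n grows.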
In short, taking into consideration that I may be one of the simulations Omega is speaking about is incoherent without also considering that Omega may be lying. There may be clever reformulations that avoid this problem, but I don't see any at the moment.