(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)
Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I've had the final flash of realization, I'll try to explain it to the community and hope my attempt fares better than previous attempts.
It's probably best to avoid talking about "decision theory" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by "truth" and "probability" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the "real you" or the simulation, which leads us to talk about probabilities and "indexical uncertainty" as to which one is you.
At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.
More formally, you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of "information set", because logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.
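To make the rule above concrete, here is a minimal Python sketch of the updateless choice rule. The universes, weights, and utility numbers are made up purely for illustration; nothing here comes from Wei Dai's formalization itself.

```python
# Minimal sketch of the updateless choice rule: weigh each possible
# universe by its fixed prior weight (never updated), and pick the
# decision that maximizes the weighted sum of utilities.

def udt_choose(actions, universes):
    """universes: list of (weight, utility_fn) pairs, where
    utility_fn(action) is the utility of taking that action there."""
    def total(action):
        return sum(w * u(action) for w, u in universes)
    return max(actions, key=total)

# Two toy universes with different weights and preferences.
universes = [
    (0.7, lambda a: 10 if a == "left" else 0),
    (0.3, lambda a: 0 if a == "left" else 5),
]
print(udt_choose(["left", "right"], universes))  # "left": 7.0 beats 1.5
```

Note that nothing in `udt_choose` conditions on observations: the information set enters only through which decision problem you find yourself solving.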
Note that the theory, as described above, has no notion of "truth" and "probability" divorced from decision-making. That's how I arrived at understanding it: in The Strong Occam's Razor I asked whether it makes sense to "believe" one physical theory over another which makes the same predictions. For example, is pressing a button that hurts a human in a sealed box morally equivalent to not pressing it? After all, the laws of physics could make a localized exception to save the human from harm. UDT gives a very definite answer: there's no fact of the matter as to which physical theory is "correct", but you refrain from pushing the button anyway, because it hurts the human more in universes with simpler physical laws, which have more weight according to our "initial" distribution. This is an attractive solution to the problem of the "implied invisible" - possibly even more attractive than Eliezer's own answer.
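The sealed-box argument can be run as a toy calculation. The two "theories" and their weights below are invented for illustration; the only substantive assumption is that the simpler laws get more prior weight.

```python
# Toy version of the sealed-box example. Two candidate physical
# theories make identical predictions, but the simpler one gets
# more prior weight (the weights here are made up).
WEIGHTS = {"simple laws": 0.9, "laws with a localized exception": 0.1}

def utility(action, universe):
    # Pressing the button hurts the human, except in the universe
    # whose laws make an exception to save him from harm.
    if action == "press" and universe == "simple laws":
        return -1.0
    return 0.0

def value(action):
    return sum(w * utility(action, u) for u, w in WEIGHTS.items())

best = max(("press", "refrain"), key=value)
print(best)  # "refrain": 0.0 beats -0.9
```

No fact of the matter about which theory is "correct" is ever settled; the weights alone are enough to make you refrain.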
As you probably realize by now, UDT is a very sharp tool that can give simple-minded answers to all our decision-theory puzzles so far - even if they involve copying, amnesia, simulations, predictions and other tricks that throw off our approximate intuitions of "truth" and "probability". Wei Dai gave a detailed example in The Absent-Minded Driver, and the method carries over almost mechanically to other problems. For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall. Note that updating on the knowledge that you are in tails-universe (because Omega showed up) doesn't affect anything, because the theory is "updateless".
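The Counterfactual Mugging computation can be spelled out in a few lines. I'm assuming the usual stakes from the problem statement ($100 to pay, $10,000 prize) and the equal weights the post stipulates.

```python
# Counterfactual Mugging: heads-universe and tails-universe have
# equal weight by assumption; the stakes are the standard ones.
WEIGHT = 0.5

def payoff(strategy, universe):
    if strategy == "pay":
        # In heads-universe Omega rewards you; in tails-universe you pay.
        return 10_000 if universe == "heads" else -100
    return 0  # refusing gets nothing either way

def value(strategy):
    return sum(WEIGHT * payoff(strategy, u) for u in ("heads", "tails"))

print(value("pay"), value("refuse"))  # 4950.0 vs 0.0
```

Seeing Omega show up (tails-universe) changes nothing in this calculation, which is exactly what "updateless" means.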
At this point some may be tempted to switch to True Believer mode. Please don't. Just like Bayesianism, utilitarianism, MWI or the Tegmark multiverse, UDT is an idea that's irresistibly delicious to a certain type of person who puts a high value on clarity. And they all play so well together that it can't be an accident! But what does it even mean to consider a theory "true" when it says that our primitive notion of "truth" isn't "true"? :-) Me, I just consider the idea very fruitful; I've been contributing new math to it and plan to do so in the future.
1) The challenge is not solving this individual problem, but creating a general theory that happens to solve this special case automatically. Our current formalizations of UDT fail on ASP (Agent Simulates Predictor) - they have no concept of "stop thinking".
2) No, I mean the game where each of two players writes a sum of money on a piece of paper; if the total is over $10, both get nothing, otherwise each player gets the sum they wrote.
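The payoff rule of the game described above (a Nash demand game) fits in one function; the example demands are arbitrary.

```python
# Nash demand game: each player writes a demand; if the demands
# total more than $10 both get nothing, otherwise each gets
# exactly what they wrote.
def payoffs(a, b, cap=10):
    return (0, 0) if a + b > cap else (a, b)

print(payoffs(4, 6))  # (4, 6) -- total is exactly $10, both are paid
print(payoffs(7, 6))  # (0, 0) -- total exceeds $10
```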
3) Yeah, the $1 is independent.
Okay.
So, the superintelligent UDT agent can essentially see through both boxes (whether it wants to or not... or, rather, it has no concept of not wanting to). Sorry if this is a stupid question, but wouldn't UDT one-box anyway, whether the box is empty or contains $1,000,000, for the same reason that it pays in Counterfactual Mugging and Parfit's Hitchhiker? When the box is empty, it takes the empty box so that there will be possible worlds where the box is not empty, just as it would pay the counterfactual mugger so that it gets $10,000 in the other half of worlds. And when the box is not empty, it takes only the one box, despite seeing the extra money in the other box, so that the world it's in will weigh 50% rather than 0%, just as it would pay the driver in Parfit's Hitchhiker, despite the ride having "already happened", so that the worlds in which the driver gives it a ride in the first place will weigh 100% rather than 0%.
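The one-boxing intuition in the comment above can be sketched as a choice between policies rather than actions. The modeling assumption (a perfect predictor who fills the big box iff the policy one-boxes upon seeing it full) is mine, introduced to make the comment's argument computable.

```python
# Transparent-Newcomb sketch, assuming a perfect predictor: the big
# box is filled iff the policy one-boxes on seeing it full.
# A policy maps what the agent sees to an act.
POLICIES = {
    "one-box always": {"full": "one", "empty": "one"},
    "two-box always": {"full": "two", "empty": "two"},
}

def value(policy):
    # The predictor's accuracy determines which world is actual.
    world = "full" if policy["full"] == "one" else "empty"
    big = 1_000_000 if world == "full" else 0
    act = policy[world]
    return big if act == "one" else big + 1_000

best = max(POLICIES, key=lambda name: value(POLICIES[name]))
print(best, value(POLICIES[best]))  # one-box always 1000000
```

The always-two-boxing policy only ever collects $1,000, because the worlds where the big box is full get weight 0 under it.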