Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload may be that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. The argument unfortunately still seems weak to me, but the time spent reading it is not wasted at all.
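For reference, the No-Cloning Theorem that anchors much of this is a two-line consequence of linearity; here is the standard textbook sketch (my notation, not the essay's). Suppose a single unitary U cloned every state, U(|ψ⟩⊗|0⟩) = |ψ⟩⊗|ψ⟩. Taking inner products of the cloned versions of two states and using unitarity:

```latex
% Inner product of U(|\psi\rangle|0\rangle) and U(|\varphi\rangle|0\rangle),
% using U^\dagger U = I and \langle 0|0\rangle = 1:
\langle\psi|\varphi\rangle
  = \bigl(\langle\psi|\otimes\langle 0|\bigr)\, U^\dagger U \,\bigl(|\varphi\rangle\otimes|0\rangle\bigr)
  = \bigl(\langle\psi|\otimes\langle\psi|\bigr)\bigl(|\varphi\rangle\otimes|\varphi\rangle\bigr)
  = \langle\psi|\varphi\rangle^{2}
  \;\Longrightarrow\;
  \langle\psi|\varphi\rangle \in \{0,1\}.
```

So one device can clone only mutually orthogonal, i.e. effectively classical, states; this is why the question of whether brain states are "effectively classical" carries so much weight in the essay.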
The main disagreement between Aaronson's idea and LW ideas seems to be this:
If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.
(...)
As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers. So that’s a possibility that this essay explores at some length. To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.
LW mostly prefers to bite the bullet on such questions, by using tools such as UDT. I'd be really curious to see Aaronson's response to Wei's UDT post.
As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers.
Even if Aaronson's speculation that human minds are not copyable turns out to be correct, that doesn't rule out copyable minds being built in the future, either de novo AIs or what he (on page 58) calls "mockups" of human minds that are functionally close enough to the originals to fool their close friends. The philosophical problems with copyable minds will still be an issue for those minds, and therefore minds not being copyable can't be the only hope of avoiding these difficulties.
To put this another way, suppose Aaronson definitively shows that according to quantum physics, minds of biological humans can't be copied exactly. But how does he know that he is actually one of the original biological humans, and not for example a "mockup" living inside a digital simulation, and hence copyable? I think that is reason enough for him to directly attack the philosophical problems associated with copyable minds instead of trying to dodge them.
Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?
Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles. So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it. So, that's what I tried to do in the essay.
But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates?
If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how problems associated with mind copying ought to be solved or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).
various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles
It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example we could take someone's gene sequence, put them inside a virtual embryo inside a digital simulation, let it grow into an infant and then raise it in a virtual environment similar to a biological human child's. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that such a mind might not have the same kind of subjective experience as we do. Correct?
Now suppose we built such a mind using your genes, and gave it an upbringing and education similar to yours. Wouldn't you then expect it to be puzzled by all the things that you mentioned above, except that it would have to solve those puzzles in some way other than by saying "I can get around these confusions if I'm not copyable"? Doesn't that suggest to you that there have to be solutions to those puzzles that do not involve "I'm not copyable", and that therefore the existence of the puzzles shouldn't have beckoned you in the direction of thinking that you're uncopyable?
So I decided that, given the immense perplexities associated with copyable minds (which you know as well as anyone), the possibility that uncopyability is essential to our subjective experience was at least worth trying to "steelman" (a term I learned here) to see how far I could get with it.
If you (or somebody) eventually succeed in showing that uncopyability is essential to our subjective experience, that would mean that by introspecting on the quality of our subjective experience, we would be able to determine whether or not we are copyable, right? Suppose we take a copyable mind (such as the virtual Scott Aaronson clone mentioned above), make another copy of it, then turn one of the two copies into an uncopyable mind by introducing some freebits into it. Do you think these minds would be able to accurately report whether they are copyable, and if so, by what plausible mechanism?
I really don't like the term "LW consensus" (isn't there a LW post about how you should separate out bundles of ideas and consider them separately because there's no reason to expect the truth of one idea in a bundle to correlate strongly with the truth of the others? If there isn't, there should be). I've been using "LW memeplex" instead to emphasize that these ideas have been bundled together for not necessarily systematically good reasons.
I think that last paragraph you quote needs the following extra bit of context:
To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.
... because otherwise it looks as if Aaronson is saying something really silly, which he isn't.
If we could fax ourselves to Mars, or undergo uploading, and still wonder whether we're still "us" -- the same as we wonder now when such capabilities are just theoretical/hypothetical -- that should count as a strong indication that such questions are not very practically relevant, contrary to Aaronson's assertion. Surely we'd need some legal rules, but the basis for those wouldn't be much different than any basis we have now -- we'd still be none the wiser about what identity means, even standing around with our clones.
For example, if we were to wonder about a question like "what effect will a foom-able AI have on our civilization", surely asking after the fact would yield different answers than asking before. With copies/uploads etc., you and your perfect copy could hold a meeting contemplating who stays married to the wife, and you'd still start from the same basis, with the same difficulty of finding the "true" answer, as if you'd discussed the topic with a pal roleplaying your clone in the present time.
This paper has some useful comments on methodology that seem relevant to some recent criticism of MIRI's research, e.g. the discussion in Section 2.2 about replacing questions with other questions, which is arguably what both the Löb paper and the prisoner's dilemma paper do.
In particular:
whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks — each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made. And after each advance, there was still plenty for philosophers to debate about truth and provability and infinity, space and time and causality, probability and information and life and mind. But crucially, it seems to me that the technical advances transformed the philosophical discussion as philosophical discussion itself rarely transforms it! And therefore, if such advances don’t count as “philosophical progress,” then it’s not clear that anything should.
Appropriately for this essay, perhaps the best precedent for my bait-and-switch is the Turing Test... with legendary abruptness, Turing simply replaced the original question by a different one: “Are there imaginable digital computers which would do well in the imitation game?”...
...The claim is not that the new question, about the imitation game, is identical to the original question about machine intelligence. The claim, rather, is that the new question is a worthy candidate for what we should have asked or meant to have asked, if our goal was to learn something new rather than endlessly debating definitions. [Luke adds: I'm reminded of Dennett's quip that "Philosophy... is what you have to do until you figure out what questions you should have been asking in the first place."] In math and science, the process of revising one’s original question is often the core of a research project, with the actual answering of the revised question being the relatively easy part!
A good replacement question Q′ should satisfy two properties:
(a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q.
(b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.
The Turing Test, I think, captured people’s imaginations precisely because it succeeded so well at (a) and (b). Let me put it this way: if a digital computer were built that aced the imitation game, then it’s hard to see what more science could possibly say in support of machine intelligence being possible. Conversely, if digital computers were proved unable to win the imitation game, then it’s hard to see what more science could say in support of machine intelligence not being possible. Either way, though, we’re no longer “slashing air,” trying to pin down the true meanings of words like “machine” and “think”: we’ve hit the relatively-solid ground of a science and engineering problem. Now if we want to go further we need to dig (that is, do research in cognitive science, machine learning, etc). This digging might take centuries of backbreaking work; we have no idea if we’ll ever reach the bottom. But at least it’s something humans know how to do and have done before. Just as important, diggers (unlike air-slashers) tend to uncover countless treasures besides the ones they were looking for.
whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Yes, this is what modern causal inference did (I suppose by taking Hume's counterfactual definition of causation, and various people's efforts to deal with confounding/incompatibility in data analysis, as starting points).
I'm not a perfect copy of myself from one moment to the next, so I just don't see the force of his objection.
Fundamentally, those willing to teleport themselves will and those unwilling won't. Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive. Practically, it will be convenient for both the teleporters and the nonteleporters to treat the teleporters as if they have continuous identity.
"Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive."
I should clarify that I see no special philosophical problem with teleportation that necessarily destroys the original copy, as quantum teleportation would (see the end of Section 3.2). As you suggest, that strikes me as hardly more perplexing than someone's boarding a plane at Newark and getting off at LAX.
For me, all the difficulties arise when we imagine that the teleportation would leave the original copy intact, so that the "new" and "original" copies could then interact with each other, and you'd face conundrums like whether "you" will experience pain if you shoot your teleported doppelganger. This sort of issue simply doesn't arise with the traditional problem of intertemporal identity, unless of course we posit closed timelike curves.
Sometimes you don't need copying to get a tricky decision problem, amnesia or invisible coinflips are enough. For example, we have the Sleeping Beauty problem, the Absent-Minded Driver which is a good test case for LW ideas, or Psy-Kosh's problem which doesn't even need amnesia.
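The Sleeping Beauty problem, at least, is easy to make concrete. A minimal Monte Carlo sketch (my own illustration, not from the thread): heads means one awakening, tails means two, and the "halfer" and "thirder" answers simply correspond to counting heads per experiment versus per awakening.

```python
import random

def simulate(trials=100_000, seed=0):
    """Sleeping Beauty: heads -> 1 awakening, tails -> 2 awakenings."""
    rng = random.Random(seed)
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += 2
    # Halfer count: fraction of *experiments* in which the coin landed heads.
    # Thirder count: fraction of *awakenings* at which the coin landed heads.
    return heads_experiments / trials, heads_awakenings / total_awakenings

per_experiment, per_awakening = simulate()
print(per_experiment, per_awakening)  # roughly 0.5 and roughly 1/3
```

Neither number is "the" probability; the dispute is over which counting the word "credence" should track, and copying minds multiplies exactly this kind of ambiguity.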
I tend to see Knightian unpredictability as a necessary condition for free will
But it's not. (In the link, I use fiction to defang the bugbear and break the intuition pumps associating prediction and unfreedom.) ETA: Aaronson writes
even if Alice can’t tell Bob what he’s going to do, it’s easy enough for her to demonstrate to him afterwards that she knew.
But that's not a problem for Bob's freedom or free will, even if Bob finds it annoying. That's the point of my story.
"Knightian freedom" is a misnomer, in something like the way "a wine margarita" is. Except that the latter at least contains alcohol, something one usually wants from a margarita. Sometimes it's good to be predictable (coordinating with friends); sometimes it's bad (facing enemies). But at no time is it crucial to freedom. Prediction isn't control.
None of this is to deny the potential interest of Aaronson's arguments regarding the feasibility of brain scanning, etc. But calling this Knightian unpredictability "free will" just confuses both issues.
"But calling this Knightian unpredictability 'free will' just confuses both issues."
torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.
On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!
Sorry about misrepresenting you. I should have said "associating it with free will" instead of "calling it free will". I do think the association is a mistake. Admittedly it fits with a long tradition, in theology especially, of seeing freedom of action as being mutually exclusive with causal determination. It's just that the tradition is a mistake. Probably a motivated one (it conveniently gets a deity off the hook for creating and raising such badly behaved "children").
Well, all I can say is that "getting a deity off the hook" couldn't possibly be further from my motives! :-) For the record, I see no evidence for a deity anything like that of conventional religions, and I see enormous evidence that such a deity would have to be pretty morally monstrous if it did exist. (I like the Yiddish proverb: "If God lived on earth, people would break His windows.") I'm guessing this isn't a hard sell here on LW.
Furthermore, for me the theodicy problem isn't even really connected to free will. As Dostoyevsky pointed out, even if there is indeterminist free will, you would still hope that a loving deity would install some "safety bumpers," so that people could choose to do somewhat bad things (like stealing hubcaps), but would be prevented from doing really, really bad ones (like mass-murdering children).
One last clarification: the whole point of my perspective is that I don't have to care about so-called "causal determination"---either the theistic kind or the scientific kind---until and unless it gets cashed out into actual predictions! (See Sec. 2.6.)
A better summary of Aaronson's paper:
I want to know:
Were Bohr and Compton right or weren’t they? Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?
EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.
If you have followed Aaronson at all in the past couple years, the new stuff begins around section 3.3, page 36. His definition of "freedom" is at first glance interesting, and may dovetail slightly with the standard reduction of free will.
Eh, I don't think I count as a luminary, but thanks :-)
Aaronson's crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science.
One of them was about Newcomb's problem, where my main criticisms were:
a) he's overstating the level and kind of precision you would need when measuring a human for prediction; and
b) that the interesting philosophical implications of Newcomb's problem follow from already-achievable predictor accuracies.
The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)
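To make the pigeonhole example concrete: the standard pigeonhole CNF asserts that n+1 pigeons each sit in one of n holes with no hole shared. A human spots the global counting obstruction instantly, yet resolution-based solvers famously require exponential-size proofs of its unsatisfiability. A minimal sketch of the encoding with a brute-force check for tiny n (function names mine, for illustration only):

```python
from itertools import combinations, product

def pigeonhole_cnf(n):
    """CNF for 'n+1 pigeons fit into n holes, at most one pigeon per hole'.
    Variable (i, j) means 'pigeon i sits in hole j'. Unsatisfiable for all n >= 1.
    """
    clauses = []
    # Every pigeon sits in some hole.
    for i in range(n + 1):
        clauses.append([((i, j), True) for j in range(n)])
    # No two pigeons share a hole.
    for j in range(n):
        for i, k in combinations(range(n + 1), 2):
            clauses.append([((i, j), False), ((k, j), False)])
    return clauses

def satisfiable(clauses):
    """Brute-force check over all assignments (fine for tiny n only)."""
    variables = sorted({v for clause in clauses for v, _ in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
            return True
    return False

print(satisfiable(pigeonhole_cnf(2)))  # False: 3 pigeons can't fit in 2 holes
```

The one-line human argument ("n+1 > n") never appears anywhere in the clause list; that gap between the local clause view and the global symmetry is precisely the point at issue.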
Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.
In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:
"One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would solve the problem of the 'transition between microfacts and macrofacts' in such a seemingly ad hoc way, a way that does so much violence to the clean rules of linear quantum mechanics."
Scott's personal belief calls to mind Nature's solution to the problem of gravitation; a solution that (historically) has been alternatively regarded as both "clean" or "unclean". His quantum beliefs map onto general relativity as follows:
General relativity is "unclean": "We can be confident that Nature will not do violence to the clean rules of linear Euclidean geometry; the notion is so repugnant that the ideas of general relativity CANNOT be correct."
as contrasted with
General relativity is "clean": "Matter tells space how to curve; space tells matter how to move; this principle is so natural and elegant that general relativity MUST be correct!"
Of course, nowadays we are mathematically comfortable with the latter point-of-view, in which Hamiltonian dynamical flows are naturally associated to non-vanishing Lie derivatives of the metric structure g, that is, L_X g ≠ 0.
This same mathematical toolset allows us to frame the ongoing debate between Scott and his colleagues in mathematical terms, by focusing our attention not upon the metric structure g, but similarly upon the complex structure J.
In this regard a striking feature of Scott's essay is that it provides precisely one numbered equation (perhaps this is a deliberate echo of Stephen Hawking's A Brief History of Time, which also has precisely one equation?). Fortunately, this lack is admirably remedied by the discussion in Section 8.2 "Holomorphic Objects" of Andrei Moroianu's textbook Lectures on Kahler Geometry. See in particular the proof arguments that are associated to Moroianu's Lemma 8.7, which conveniently is freely available as Lemma 2.7 of an early draft of the textbook, available on the arxiv server as arXiv:math/0402223v1. Moroianu's draft textbook is short and good, and his completed textbook is longer and better!
Scott's aesthetic personal beliefs naturally join with Moroianu's mathematical toolset to yield a crucial question: Should/will 21st Century STEM researchers embrace with enthusiasm, or reject with disdain, dynamical theories in which L_X J ≠ 0?
Scott's essay is entirely correct to remind us that this crucial question is (in our present state-of-knowledge) not susceptible to any definitively verifiable arguments from mathematics, physical science, or philosophy (although plenty of arguments from plausibility have been set forth). But on the other hand, students of STEM history will appreciate that the community of engineers has rendered a unanimous verdict: L_X J ≠ 0 in essentially all modern large-scale quantum simulation codes (matrix product-state calculations provide a prominent example).
So to the extent that biological systems (including brains) are accurately and efficiently simulable by these emerging dynamic-J methods, Scott's definition of quantum dynamical systems may have only marginal relevance to the practical understanding of brain dynamics (and it is plausible AFAICT that this proposition is entirely consonant with Scott's notion of "freebits").
Here too there is ample precedent in history: early 19th Century textbooks like Nathaniel Bowditch's renowned New American Practical Navigator (1807) succinctly presented the key mathematical elements of non-Euclidean geometry (many decades in advance of Gauss, Riemann, and Einstein).
Will 21st Century adventurers learn to navigate nonlinear quantum state-spaces with the same exhilaration that adventurers of earlier centuries learned to navigate first the Earth's nonlinear oceanography, and later the nonlinear geometry of near-earth space-time (via GPS satellites, for example)?
Conclusion: Scott's essay is right to remind us that we don't know whether Nature's complex structure J is comparably dynamic to Nature's metric structure g, and finding out will be a great adventure! Fortunately (for young people especially) textbooks like Moroianu's provide a well-posed roadmap for helping mathematicians, scientists, engineers --- and philosophers too --- set forth upon this great adventure. Good!
That Aaronson mentions EY isn't exactly a surprise; the two shared a well-known discussion on AI and MWI several years ago. EY mentions it in the Sequences.