For FAI: Is "Molecular Nanotechnology" putting our best foot forward?

Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It's not really clear to me why. In many of the examples of "How could AIs help us" or "How could AIs rise to power," phrases like "cracks protein folding" or "making a block of diamond is just as easy as making a block of coal" are thrown about in ways that make me very, very uncomfortable. Maybe it's all true, maybe I'm just late to the transhumanist party and the obviousness of this information was with my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.

I must post the disclaimer that I have done a little bit of materials science, so maybe I'm just annoyed that you're making me obsolete, but I don't see why this particular possible future gets so much attention. Let us assume that a smarter than human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it's still not clear to me that MNT is a likely element of the future. It isn't clear to me that MNT is physically practical. I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up; indeed, that's my day job. But I have a hard time believing that the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it's not at all clear to me that it's even possible.
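To make the scaling complaint concrete: the dimension of the joint quantum state space grows exponentially with the number of particles, which is one reason exact treatment of large atomic systems is intractable. Here is a toy sketch of that growth (my own illustration, assuming a made-up figure of 10 basis states per particle; real systems need far more):

```python
# Toy illustration: exponential growth of the joint quantum state space.
# Assumption (for illustration only): each particle is described by just
# 10 single-particle basis states.

def hilbert_space_dimension(n_particles: int, states_per_particle: int = 10) -> int:
    """Dimension of the joint state space for n distinguishable particles."""
    return states_per_particle ** n_particles

for n in (1, 2, 10, 100):
    print(f"{n:>3} particles -> dimension {hilbert_space_dimension(n):.3e}")
```

Even storing one amplitude per basis state for 100 such particles would require 10^100 numbers, which is why practical electronic-structure methods work with drastically reduced descriptions rather than the full wave function.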

I assume the reason that MNT is added to a discussion on AI is that we're trying to make the future sound more plausible by adding burdensome details. I understand that AI-and-MNT is less probable than AI or MNT alone, but that the conjunction is supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human or superhuman level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, especially including it without addressing any of the fundamental difficulties of MNT, I would argue, harms the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.

I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter than human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter than human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don't think convincing people that smarter than human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter than human AIs are possible. I do think that waving your hands and saying "super-intelligence" at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer -> nanobots before I had built up a store of good-will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.

Put in LW parlance, suggesting things not known to be possible by modern physics without detailed explanations puts you in the reference class "people on the internet who have their own ideas about physics". It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.

And maybe it's just me. Maybe this did not bother anyone else, and it's an incredible shortcut for getting people to realize just how different a future a greater than human intelligence makes possible and there is no better example. It does alarm me though, because I think that physicists and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations may be the kind of people FAI is trying to attract.

Comments


I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter than human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader.

I agree with this.

What do you agree with? For example, I agree that it could, hypothetically, resort to such conventional methods (just as it could, hypothetically, paint the Moon yellow), but I don't think it's likely. Do you mean that you think it's likely (or not unlikely etc.)?

Specifically, with the claim that bringing up MNT is unnecessary, both in the "burdensome detail" sense and "needlessly science-fictional and likely to trigger absurdity heuristics" sense.

Isn't life an example of self-assembling molecular nanotechnology? If life exists, then our physics allows for programmable systems which use similar processes.

We already have Turing-complete molecular computers... but they're currently too slow and expensive for practical use. I predict self-assembling nanotech programmed with a library of robust modular components will happen long before strong AI.

Life is a wonderful example of self-assembling molecular nanotechnology, and as such gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas). That is to say: everything is built from a few dozen stereotyped monomers assembled into polymers, rather than by arranging atoms arbitrarily. There are errors at every step of the way, from mutations to misincorporation of amino acids in proteins, so everything must be robust to small problems (seriously, something like 10% of the large proteins in your body have an amino acid out of place, as opposed to being built with atomic precision, and they can be altered and damaged over time). Life uses a lot of energy via a metabolism to maintain itself in the face of the world and its own chemical instability; if it's doing anything interesting, that is often more energy over a relatively short time than is embodied in the chemical bonds of the structure itself, and for that matter building it requires much more energy than is actually embodied. You have many discrete medium-sized molecules moving around and interacting in aqueous solution, rather than much in the way of solid-state action. And on scales larger than viruses or protein crystals, everything is built more or less according to a recipe of interacting forces and emergent behavior, rather than from something like a digital blueprint.
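The ~10% figure quoted above is consistent with a simple back-of-envelope calculation. A sketch (the numbers here are my own assumptions, not from the comment: a per-residue misincorporation rate on the order of 1e-4, which is a commonly cited order of magnitude, and a "large" protein of ~1000 amino acids):

```python
# Back-of-envelope check of the ~10% error claim for large proteins.
# Assumptions (illustrative): per-residue misincorporation rate ~1e-4;
# a "large" protein has ~1000 residues; errors are independent.
error_rate = 1e-4
length = 1000

p_error_free = (1 - error_rate) ** length
p_at_least_one_error = 1 - p_error_free
print(f"P(at least one wrong residue) ~ {p_at_least_one_error:.2f}")
```

This comes out to roughly 0.10, i.e. about one in ten such proteins carries at least one wrong residue, in line with the figure above.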

So yeah, remarkable things are possible, most likely even including things that naturally-evolved life does not do now. But there are limits and it probably does not resemble the sorts of things described in "Nanosystems" and its ilk at all.

a template of the sorts of things that are actually possible

Was this true at the macroscale too? The jet flying over my head says "no". Artificial designs can have different goals than living systems, and are not constrained by the need to evolve via a nearly-continuous path of incremental fitness improvements from abiogenesis-capable ancestor molecules, and this turned out to make a huge difference in what was possible.

I'm also skeptical about the extent of what may be possible, but your examples don't really add to that skepticism. Two examples (systems that evolved from random mutations don't have ECC to prevent random mutations; systems that evolved from aquatic origins do most of their work in aqueous solution) are actually reasons for expecting a wider range of possibilities in designed vs evolved systems; one (dynamic systems may not be statically stable) is true at the macroscale too, and one (genetic code is vastly less transparent than computer code) is a reason to expect MNT to involve very difficult problems, but not necessarily a reason to expect very underwhelming solutions.

Biology didn't evolve to take advantage of ridiculously concentrated energy sources like fossil petroleum, or to build major industrial infrastructure, two things that make jets possible. This is similar to some of the reasons I think that synthetic molecular technology will probably be capable of things that biology isn't: by taking advantage of, say, electricity as an energy source, or by one-off batch synthesis of stuff, bringing together non-self-replicating systems from parts made separately.

In fact the analogy of a bird to a jet might be apt to describe the differences between what a synthetic system could do and what biological systems do now, due to their using different energy sources and non-self-replicating components (though it might be a lot harder to brute-force such a change in quantitative performance by ridiculous application of huge amounts of energy at low efficiency).

I still suspect, however, that when you are looking at the sorts of reactions that can be done and patterns that can be made in quantities that matter as more than curiosities or rare expensive fragile demonstrations, you will be dealing with more statistical reactions than precise engineering and dynamic systems rather than static (at least during the building process) just because of the nature of matter at this scale.


Standard reference: Nanosystems. In quite amazing detail, though the first couple of chapters online don't begin to convey it.

but seeing all the physics swept under the rug

There's lots and lots of physics. All of this discussion has already been done.

While this may be a settled point in your mind, it is not in general a settled point in the mind of your audience. Inasmuch as you're trying to convince other people of your beliefs, it's best to meet them where they are, and not ask them to suspend their sense of disbelief in directions that are more or less orthogonal to your primary argument.

MNT is not widespread in the meme pool. Inasmuch as FAI assumes or appears to rely on MNT, it will pay a fitness cost in individuals who do not subscribe to the MNT meme.

Now maybe FAI is particularly convincing to people who already have the MNT meme, and including MNT in possible FAI futures gives it a huge fitness advantage in the "already believes MNT" subpopulation. Maybe the trade-off for FAI of reduced fitness in the meme pool at large (or the computational-materials-scientist meme-pool) is worth it in exchange for increased fitness in the transhumanist meme pool. I don't know. I certainly haven't done nearly the work publicizing FAI that you have, and obviously you have some idea of what you're doing. I'm not trying to argue that it should be taken out, or never used as an example again. I will say that I hope you take this post/argument as weak counter-evidence on the effectiveness of this particular example, and update accordingly.

Eliezer linked to the Drexler book and dissertation and he probably trusts the physics in it. If you claim that the physics of nanotech is much harder than what is described there, then you better engage the technical arguments in the book, one by one, and believably show where the weaknesses lie. That's how you "unsettle" the settled points. Simply offering a contradictory opinion is not going to cut it, as you are going to lose the status contest.

Eliezer linked to the Drexler book and dissertation and he probably trusts the physics in it.

Given the unfathomably positive reception of the grandparent allow me to quote shminux's reply for support and emphasis.

The opening post took the stance "but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.". Eliezer provided a reference to a standard physics resource that explains the physics and provides better arguments than Eliezer could hope to supply (without an unrealistic amount of retraining and detouring from his primary objective.) The response was to sweep the physics under the rug and move to "you have to meet me where I am". Unsurprisingly, this sets off every crackpot alarm I have.

If you claim that the physics of nanotech is much harder than what is described there, then you better engage the technical arguments in the book, one by one, and believably show where the weaknesses lie. That's how you "unsettle" the settled points. Simply offering a contradictory opinion is not going to cut it, as you are going to lose the status contest.

As an alternative to personally engaging in the technical arguments at the very least he could reply with reference to another authoritative source such as another textbook or several people with white hair and letters before their name. That sort of thing can support a position of "that science is disputed" or, if the hair is sufficiently white and the institutional affiliations particularly prominent it could potentially even support "Drexler is a crackpot too!". But given that the dissertation in question was for MIT that degree of mainstream contempt seems unlikely.

I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last 3 days, managed to buy the book and go through the physics in detail. I hope that failure is not enough to condemn me as not acting in good faith. I made it through the first couple chapters of the dissertation, but it read like a dissertation, which is to say lots of tables and not much succinct reasoning that I could easily prove or disprove. There seemed to be little point in linking to "expert rebuttals" because presumably these would not be new information, though Richard Smalley is the canonical white haired Nobel Laureate who disagrees strongly with the idea of MNT as Drexler outlines it.

This post was not intended primarily as a discussion on whether MNT was true or not. If people consider that an important discussion, I'll be happy to participate in it and lend whatever expertise I may or may not have. I'll be happy to buy Nanosystems and walk us all through as much quantum mechanics as anyone could ever want. This was emphatically not my point however. I don't have a strong opinion on whether MNT is true. I will freely admit to not having personally done the research necessary to come to a confident conclusion one way or the other. I am confident that it's controversial. It's not something one hears mentioned in materials science seminars, it doesn't win you any grants, you wouldn't put it in a paper. While it may still be true, I don't think it's well-established enough that it's the sort of truth you can take for granted.

I personally would not, when giving an explanation for some phenomenon, ask you to take for granted without at least a citation the following statement. "The ground state energy of a system of atoms can be determined exactly without knowing anything about the wave function of the system and without knowing the wave functions of the individual electrons." I would not expect anyone reading that statement to be able to evaluate its truth or falsehood without a considerable diversion of energy. I would anticipate that patient readers would be confused, and some people might give up reading altogether because I was stating as fact things they had no good way of verifying.

However, the Hohenberg-Kohn theorems are demonstrably true, and have been around for 50 years. That doesn't make them obvious. If I skip a step in a proof or derivation, it doesn't make the proof wrong, but it is going to make people who care about the math very uncomfortable. When one publishes rigorous technical writing, the goal is precisely to make the inferential gaps as small as possible, to lead your skeptical untrusting readers forcefully to a conclusion, without ever confusing them as to how you got from A to B, or opening the door to other explanations.
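For readers who haven't met it, the claim quoted above is (roughly) the first Hohenberg-Kohn theorem. In standard textbook form, not leplen's wording:

```latex
% First Hohenberg-Kohn theorem (1964), informally: the ground-state
% electron density n_0(\mathbf{r}) determines the external potential
% v(\mathbf{r}) up to an additive constant, and hence the Hamiltonian and
% all ground-state properties. In particular, the ground-state energy is
% a functional of the density alone:
E_0 = \min_{n} \left\{ F[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, d^{3}r \right\},
% where F[n] is a universal functional (kinetic plus electron-electron
% energy) that does not depend on v.
```

This is the foundation of density functional theory, which is exactly the kind of non-obvious but well-established result leplen is using as an analogy for how MNT claims ought to be presented.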

I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last 3 days, managed to buy the book and go through the physics in detail. I hope that failure is not enough to condemn me as not acting in good faith.

Absolutely not, and I think this occasioned a useful discussion. But if you have a physics or chemistry background, I for one would greatly appreciate it if you did so (and the Smalley critique, and perhaps Locklin below) and posted your take. Also you don't need to buy the book, you should be able to get a copy at any large university library.

Richard Smalley is the canonical white haired Nobel Laureate who disagrees strongly with the idea of MNT as Drexler outlines it.

I am no expert in the relevant science, but I take the Smalley argument from authority with a grain of salt, for two reasons.

First, according to Wikipedia, Smalley was a creationist, and apparently he endorsed an Intelligent Design book, saying the following:

Evolution has just been dealt its death blow. After reading Origins of Life with my background in chemistry and physics, it is clear that biological evolution could not have occurred.

If he underestimated the ability of evolution to create complex molecular machines, perhaps he did the same about human engineering.

Also, the National Academy of Sciences, in its 2006 report on nanotechnology, discussed Drexler's ideas and did not take Smalley's critique to be decisive (not a ringing endorsement either, of course, suggesting further experimental research). Here is a page with the relevant sections.

This critique by Scott Locklin seems mainly to be arguing that Drexler was engaged in premature speculation that was not a useful contribution to science or engineering, and has not borne useful fruit. But he also attacks nuclear fusion, cancer research, and quantum computing (as a technology funding target) for premature white elephant status, which seems like good company to be in for a speculative future technology.

He says that there may be technologies with similar capabilities to those Drexler envisions eventually, but that Drexler has not contributed to realizing them, and suggests that Drexler made serious physics errors (but isn't very clear about what they are).

I would be interested in knowing about the technological limits, separately from whether they will be reached anytime soon, and whether Drexler's contributions were any good for science or engineering.

I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last 3 days, managed to buy the book and go through the physics in detail. I hope that failure is not enough to condemn me as not acting in good faith.

Absolutely not, and I think this occasioned a useful discussion. But if you have a physics or chemistry background, I for one would greatly appreciate it if you did so (and the Smalley critique, and perhaps Locklin below) and posted your take. Also you don't need to buy the book, you should be able to get a copy at any large university library.

Okay. I'll try and do this. I'm mildly qualified; I'm finishing up a Ph.D. in computational materials science. It will take me a little while to make time for it, but it should be fun! Anyone else who is interested in seeing this discussion feel free to encourage me/let me know.

I would love to see a critique that started "On page W of X, Drexler proposes Y, but this won't work because Z". Smalley made up a proposal that Drexler didn't make ("fat fingers") and critiqued that. If there's a specific design in Nanosystems that won't work, that would be very informative.

I would very much like to see this. Sounds like another discussion-level post would be in order.

Also I have some worries about the pattern "X is unsupported! What, you have massive support for X? Well talking about X is still bad publicity, really I'm concerned for how this makes you look in front of other people." I'll consider an 'oops, I retract my previous argument, but...' followed by that shift, but not without the 'oops'. Otherwise I do update on X possibly being bad publicity, but not in a being-persuaded way, more of an okay-I've-observed-you way.

I don't consider Drexler's work to be "massive support" for MNT. I think that MNT is controversial. I think that you shouldn't introduce controversial material in a discussion unless you absolutely have to, for some of the same reasons I think that Nixon being a Quaker and a Republican is a bad example.

I honestly wasn't sure when I posted this whether anyone else here would feel the same way about MNT being non-obvious and controversial. It does seem safe to say that if MNT is controversial on LW, which is overwhelmingly sympathetic to transhumanist ideas, then it's probably even less popular outside of explicitly transhumanist communities.

I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter than human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter than human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess.

Your argument is extremely human-parochial. You seem to be thinking of AIs as potential supervillains who want to "rule the world" (where ruling the world = controlling humans). If you think that an AI would care about controlling humans, you are assuming that the AI would be very human-like. In the space of possible mind-designs, very few AIs care about humans as anything but raw resources.

In the space of possible mind-designs, your mind (and every human mind) is an extreme specialist in manipulating humans. So of course, to you manipulating humans seems vastly easier and more useful than building MNT or macro-sized robots, or whatever.

I'm not assuming that the AI has a large final preference for controlling humans. I am stressing how the AI interacts with humans because as a human that's of particular concern to me. Access to human resources may also be a useful instrumental goal for a "young" AI, as human beings control a fairly large amount of resources and gaining access to them may be the easiest route to power for an AI. My understanding is that in the context of FAI, we're discussing AI in terms of what it means for humans, so that's where I'm placing the emphasis. The discussion of how the AI gains resources/global control is valid even if the AI's end game is tiling the universe in paperclips.

The question of whether an AI is likely to have more difficulty understanding humans or quantum mechanics is interesting. As a possible counterpoint, I would say that an AI programmed by human beings is likely to be close to human style thought in the space of all possible minds, so the vastness of mind space is perhaps not totally relevant. I'm not clear as to whether that's a particularly good counterpoint.

I don't have a problem with the AI building an army of macrosize robots, or taking over the internet, or whatever. I don't think human society is well-designed, or is even capable of being well-designed, with respect to significantly slowing down an AI trying to convert us all into resources. Indeed, it seems to me that any number of possible paths require fewer assumptions and less computational time than MNT. The essence of my complaint is that of the many possible paths to power for an AI, the one that gets stressed in FAI literature is on the less likely end of the spectrum, and I'm really confused as to why that choice has been made.

In the space of possible mind-designs, very few AIs care about humans as anything but raw resources.

An AGI cares about not being killed by humans.

In the space of possible mind-designs, your mind (and every human mind) is an extreme specialist in manipulating humans.

Corn manipulates humans in a variety of ways to kill parasites that might damage the corn. An entity doesn't need to be smart to be engaged in manipulating humans.

As long as humans have the kind of power over our world that they have at the moment, an AGI will either be skilled in dealing with humans, or humans will shut it down if there seems to be a danger of the AGI amassing power and not caring about humans.

For some reason no one wants to hold Eric Drexler accountable now for the grandiose, irresponsible and frankly cringe-worthy things he wrote back in the 1980's.

Case in point. I turned 27 in 1986, the year Drexler published Engines of Creation, so I belong to the generation referred to in the following speculation:

http://e-drexler.com/d/06/00/EOC/EOC_Chapter_8.html

Imagine someone who is now thirty years old [in 1986]. In another thirty years, biotechnology will have advanced greatly, yet that thirty-year-old will be only sixty. Statistical tables which assume no advances in medicine say that a thirty-year-old U.S. citizen can now expect to live almost fifty more years - that is, well into the 2030s. Fairly routine advances (of sorts demonstrated in animals) seem likely to add years, perhaps decades, to life by 2030. The mere beginnings of cell repair technology might extend life by several decades. In short, the medicine of 2010, 2020, and 2030 seems likely to extend our thirty-year-old's life into the 2040s and 2050s. By then, if not before, medical advances may permit actual rejuvenation. Thus, those under thirty (and perhaps those substantially older) can look forward - at least tentatively - to medicine's overtaking their aging process and delivering them safely to an era of cell repair, vigor, and indefinite life-span.

I turn 54 this November, and I can assure you that no one in my generation has seen "medicine's overtaking their aging process."

Yet many cryonicists have bet their futures on this fantasy technology, when regular people can see that it has taken on the characteristics of an apocalyptic religious belief instead of a rational assessment of future capabilities. Cryonicist Thomas Donaldson warned that this would happen, and that it would not help cryonics' credibility, back around the time Drexler predicted that I would start to grow younger by now.

Apparently Drexler wants to reboot his reputation with a new book, but someone needs to remind people about the things he promised us in his 1980's-era writings which haven't come to pass.

I'm commenting a few days after the main flurry of discussion and just wanted to raise a concern about how there seems to be a conflation in the OP and in many of the comments between (1) effective political advocacy among ignorant people who will stick with the results that fall out of the absurdity heuristic even when it gives false results and (2) truth seeking analysis based on detailed mechanistic considerations of how the world is likely to work.

Consider the 2x2 grid where, on one axis, we're working in either an epistemically unhygienic advocacy frame where it's OK to say false things that get people to support the right conclusion or policy (versus a truth-seeking frame where you grind from the facts to the conclusion with high quality reasoning processes at each stage for the sake of figuring stuff out from scratch), and on the second axis Leplen's dismissal of MNT is coherently founded and on the right track (versus it just being a misfiring absurdity heuristic).

I think in this forum it can be generally assumed that "FAI is important" as the background conclusion that is also a message that it is probably beneficial to advocate on behalf of.

If I had read the chain of reasoning smart computer -> nanobots before I had built up a store of good-will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.

Leplen's claim here is a claim about Leplen's historically contingent reasoning processes rather than about the object-level workability of MNT, and it is raised as though Leplen is a fairly normal person whose historically likely reaction to MNT is common enough to be indicative of how it will play with many other people. So the part of the 2x2 grid it is from is firmly "advocacy rather than truth" and mostly assuming "Leplen's reaction is justified". I think it is worth spelling out what it would look like to explore the other three boxes in the 2x2 grid.

If we retain the FAI-promoting advocacy perspective but imagine that Leplen is wrong, because "MNT magic" is actually something future scientists or an AGI could pull together and deploy, then the substantive cost to the world might be that courses of action that are important if MNT is a real concern may not be well addressed by the group of people mobilized by a "just FAI, not MNT" advocacy. If basically the same AGI-safety strategy is appropriate whether or not an AGI would head towards MNT as a lower bound on the speed and power of the weapons it could invent, then dropping MNT from the advocacy can't really harm anything. If the appropriate policies are different enough that lots of people convinced of "FAI without MNT" would object to "FAI with MNT" protection measures, then dropping MNT from the advocacy could be net harmful to the world.

If we retain the idea that Leplen's dismissal of MNT is coherent and justified, but flip over to a truth-seeking frame (while retaining awareness of a background belief by many old-time LWers that MNT is probably important to think about), then the arguments offered to help actually change people's minds for coherent reasons seem lacking. From a truth-seeking perspective it doesn't matter what turns people on or off if their opinions aren't themselves important indicators of how the world actually is. The only formal credential offered is in materials science, and this is raised from within an activist advocacy frame where Leplen admits that motivated cognition could account for their attitude toward MNT, out of defensiveness and a desire not to have skills become obsolete. Lots of people don't want to become obsolete, so this is useful evidence for figuring out how to convince similarly fearful people of a conclusion about the importance of FAI by dropping other things that might make FAI advocacy harder.

But the claim that "MNT is unimportant based on object-level science considerations" will be mostly unmoved by the advocacy-level arguments here if someone already has chemistry experience, has read Nanosystems, and still thinks MNT matters. Something more would need to be offered than hand-waving and a report about emotional antibodies to a certain topic. So presuming that Leplen's dismissal of MNT is on track, and that many LWers think MNT is important, it seems like there's an education gap, where the LW mainstream could be significantly helped by learning the object-level reasoning that justifies Leplen's dismissal of MNT. Like, where (presuming that it went off the rails somewhere) did Nanosystems go off the rails?

The fourth and final box of the 2x2 grid is for wondering what things would look like if we were in a truth-seeking and communal learning mode (not worried about advocacy among random people) and Leplen was wrong to dismiss MNT. In this mode the admixture of advocacy and truth, while taking Leplen seriously, seems pretty bad, because the very local educational process this week on this website would be going awry. It is understandable that Leplen's reaction is relevant to one of LW's central advocacy issues, and Leplen seems friendly to that project... and yet from the perspective of an attempt to build community knowledge in the direction of taking serious things seriously and believing true things for good reasons while disbelieving false things when the evidence pushes that way... the conflation is mildly disturbing.

Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded.

This is a bad argument. It doesn't even take into account the distinction between bootstrapping from scratch to a single working general assembler, and what a working assembler could do assuming the key atoms could be put into the right places once (such as whether, and how expensively, it could build copies of itself). The "bootstrap difficulty" question and the "mature scaleout" question are different questions, and our discussion seems to be papering over the distinction. The badness of this argument was gently pointed out by drethelin, but somehow not in a way that was highly upvoted, I suspect because it didn't take the (probably?) praiseworthy advocacy concerns into account.

To be clear, I'm friendly to the idea that MNT might not be physically possible, or that if possible it might not be efficient. I'm not a huge expert here at all and would like to be better educated on the subject. And I'm friendly to the idea of designing AGI advocacy messages that gain traction and motivate people to do things that actually improve the world. I'm just trying to point out that mixing both of these concerns into the same rhetorical ball seems to do a disservice to both...

Which is pretty ironic, considering that "mixing FAI and MNT together is politically problematic" seems to be the general claim of the article. Mostly I guess I'm just trying to say that this article is even more complicated, because now instead of sometimes doping the FAI discussions with MNT, we're fully admixing FAI and MNT and political advocacy.

It is possible to have expert experience in chemistry and to find MNT preposterous for reasons derived from that experience. In fact, it's a common reaction; not totally universal, but very common. And the second quote from leplen sums up why, quite nicely and accurately. Even if one trusts the calculations in Nanosystems regarding the stability of the various structures on display there, they will still look like complete fantasy to someone used to ordinary methods of chemical synthesis, which really do resemble "shaking a large bin of lego in a particular way while blindfolded"!

Nanosystems itself won't do much to convince someone who thinks that assembly is the main barrier to the existence of such structures. Maybe subsequent papers by Merkle and Freitas would help a little. They argue that you could store HCCH in the interior of nanotubes as a supply of carbons, which can then be extracted, manipulated, and put into place - if you work with great delicacy and precision...

But it is a highly nontrivial assertion, that positional control of small groups of atoms, such as one sees in enzymatic reactions, can be extended so far as to allow the synthesis of diamond through atom-stacking by nanomechanisms. Chemists have a right to be skeptical about that, and if they run across an intellectual community where people blithely talk of an AI ordering a few enzymes in the mail and then quickly bootstrapping its way to possession of a world-eating nanobot army, then they really do have a reason to think that there might be crackpots thereabouts; or, more charitably, people who don't know the difference between science fiction and reality.

I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible via adding burdensome details.

This is unreasonably accusatory. I'm pretty sure MNT is added to the discussion because people here such as Eliezer and Annisimov and Vassar believe it to be both possible and a likely thing for AI to do.

Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded.

Isn't this the argument creationists use against evolution? But more seriously, nature does nano-assembly constantly, and with pretty remarkable precision, in ways we have yet to fully understand or control. So there is at the very least that much to learn about MNT that we're simply "not smart enough" to understand yet. Consider fields like transfection, where you can buy some reagents and cells from Sigma or whoever and make the cells produce your own custom proteins. This is far, far in advance of what we could do 100 years ago, but is arguably only a matter of being "smarter" and/or knowing more, rather than anything else. Calcium phosphate transfection doesn't even use novel chemicals, and yet it was only discovered in 1973.

Nature does nano-assembly, but it isn't arbitrary nano-assembly.

My example of a very hard nano-assembly problem is a ham sandwich, with the hardest part being the lettuce. It's possible that the easiest way to make a lettuce leaf-- they still have live cells-- is to grow a head of lettuce.

Maybe the right question (ignoring where MNT fits with AI) is to look at what parts of MNT look feasible at present levels of knowledge.

This is unreasonably accusatory. I'm pretty sure MNT is added to the discussion because people here such as Eliezer and Annisimov and Vassar believe it to be both possible and a likely thing for AI to do.

Pointing out a possible mental bias isn't accusatory.

I read that phrase as implying MNT was consciously added to help convince others about FAI, not that it was an unconscious bias eg Eliezer had.

This is precisely what I meant. In some examples the line of reasoning "AI->MNT->we're all dead if it's not friendly" is specifically prefaced with the discussion that any detailed example is inherently less plausible, but adding the details is supposed to make it feel more believable. My whole argument is that I think this specific detail will backfire in the "making it feel more believable" department for someone who does not already believe in MNT and other transhumanist memes.

I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible

No, MNT is part of the discussion because it is taken for granted, along with cryonics, parallel quantum worlds, Dyson spheres, and various less spectacular ideas. You may want to see analogous complaints that I have previously made.