I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader.
I agree with this.
Specifically, with the claim that bringing up MNT is unnecessary, both in the "burdensome detail" sense and "needlessly science-fictional and likely to trigger absurdity heuristics" sense.
For some reason no one wants to hold Eric Drexler accountable now for the grandiose, irresponsible, and frankly cringeworthy things he wrote back in the 1980s.
Case in point. I turned 27 in 1986, the year Drexler published Engines of Creation, so I belong to the generation referred to in the following speculation:
http://e-drexler.com/d/06/00/EOC/EOC_Chapter_8.html
...Imagine someone who is now thirty years old [in 1986]. In another thirty years, biotechnology will have advanced greatly, yet that thirty-year-old will be only sixty. Statistical tables which assume no advances in medicine say that a thirty-year-old U.S. citizen can now expect to live almost fifty more years - that is, well into the 2030s. Fairly routine advances (of sorts demonstrated in animals) seem likely to add years, perhaps decades, to life by 2030. The mere beginnings of cell repair technology might extend life by several decades. In short, the medicine of 2010, 2020, and 2030 seems likely to extend our thirty-year-old's life into the 2040s and 2050s. By then, if not before, medical advances may permit actual rejuvenation. Thus, those under thirty (and perhaps those substantially older) can look forward - at lea
Isn't life an example of self-assembling molecular nanotechnology? If life exists, then our physics allows for programmable systems which use similar processes.
We already have Turing-complete molecular computers... but they're currently too slow and expensive for practical use. I predict that self-assembling nanotech programmed with a library of robust modular components will happen long before strong AI.
Life is a wonderful example of self-assembling molecular nanotechnology, and as such it gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas). That is to say: everything is built from a few dozen stereotyped monomers assembled into polymers, rather than from atoms arranged arbitrarily. There are errors at every step of the way, from mutations to misincorporation of amino acids in proteins, so everything must be robust to small problems (seriously, something like 10% of the large proteins in your body have an amino acid out of place, as opposed to being built with atomic precision, and they can be altered and damaged over time). It uses a lot of energy, via a metabolism, to maintain itself in the face of the world and its own chemical instability, often more energy over a relatively short time than is embodied in the chemical bonds of the structure itself if it's doing anything interesting (and for that matter, building it requires much more energy than is actually embodied). And you have many discrete medium-sized molecules moving around and interacting in aqueous solution, rather than much in the way of solid-state action, and on scales larger than viruses o...
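For what it's worth, that ~10% figure is roughly what you'd expect from the commonly cited amino-acid misincorporation rate of about 1 in 10^4 per residue; the exact rate varies by organism and condition, so the numbers below are purely illustrative:

```python
# Rough consistency check: chance that a large protein contains at least
# one misincorporated amino acid, assuming independent errors per residue.
error_rate = 1e-4   # commonly cited misincorporation rate per residue (illustrative)
length = 1000       # residues in a "large" protein (illustrative)

p_at_least_one_error = 1 - (1 - error_rate) ** length
print(f"{p_at_least_one_error:.1%}")  # roughly 10%, consistent with the claim above
```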
a template of the sorts of things that are actually possible
Was this true at the macroscale too? The jet flying over my head says "no". Artificial designs can have different goals than living systems, and are not constrained by the need to evolve via a nearly-continuous path of incremental fitness improvements from abiogenesis-capable ancestor molecules, and this turned out to make a huge difference in what was possible.
I'm also skeptical about the extent of what may be possible, but your examples don't really add to that skepticism. Two of them (systems that evolved from random mutations don't have ECC to prevent random mutations; systems that evolved from aquatic origins do most of their work in aqueous solution) are actually reasons to expect a wider range of possibilities in designed systems than in evolved ones; one (dynamic systems may not be statically stable) is true at the macroscale too; and one (genetic code is vastly less transparent than computer code) is a reason to expect MNT to involve very difficult problems, but not necessarily a reason to expect very underwhelming solutions.
Biology didn't evolve to take advantage of ridiculously concentrated energy sources like fossil petroleum, or to build major industrial infrastructure, two things that make jets possible. This is similar to some of the reasons I think synthetic molecular technology will probably be capable of things that biology isn't: by taking advantage of, say, electricity as an energy source, or one-off batch synthesis that brings together non-self-replicating systems from parts made separately.
In fact, the analogy of a bird to a jet might be apt for describing the differences between what a synthetic system could do and what biological systems do now, since they would use different energy sources and non-self-replicating components (though it might be a lot harder to brute-force such a change in quantitative performance through the ridiculous application of huge amounts of energy at low efficiency).
I still suspect, however, that when you look at the sorts of reactions that can be done, and the patterns that can be made in quantities that matter as more than curiosities or rare, expensive, fragile demonstrations, you will be dealing with statistical reactions rather than precise engineering, and with dynamic systems rather than static ones (at least during the building process), just because of the nature of matter at this scale.
edited for formatting
I don't seem to have the same disdain for the word 'emergent' as much of the population here. I don't use it as a curiosity stopper or in place of the word 'mysterious' - I wouldn't be much of a biologist if a little emergent behavior stopped me cold. (Also no argument about many modular things in biological systems, I pull out and manipulate pathways and regulatory circuits regularly in my work, but there is a whole lot which is still very context-dependent). In this context I used the word emergent to mean that rather than having some kind of map of the final structure embedded in the genetic instructions, you have them specifying the properties of small elements which then produce those larger structures only in the context of their interactions with each other and which produce a lot more structure than is actually encoded in the DNA via the rather-opaque 'decompression algorithm' of physics and chemistry (through which small simple changes to the elements can map to almost no change in the product or vast changes across multiple attributes). I've always found the analogy of genetics to a blueprint or algorithm tiresome and the analogy to a food recipe much more applicable;...
I'd say life is very near to as good as it gets in terms of moving around chemical energy and using it to transform materials without something like a furnace or a foundry. You're never going to eat rock; it's already in a pretty damn low energy state that you cannot use for energy. Lithotrophic bacteria take advantage of redox differences between materials in rocks and live REALLY slowly so that new materials can leach in. You need to apply external energy to transform it. And as TheOtherDave has said, major alterations have happened, but according to rather non-grey-goo patterns, and I suspect that large-scale reactions (as opposed to a side branch that some energy takes) will be more similar to biological transformations than to the other possibilities.
I do think that life is not necessarily as good as it gets in terms of production of interesting bulk materials or photosynthesis though because in both these cases we can take advantage of non-self-replicating (on its own) infrastructure to help things along. Imagine a tank in which electrodes coming from photovoltaics (hopefully made of something better than the current heavy-metal doped silicon that ...
Standard reference: Nanosystems. In quite amazing detail, though the first couple of chapters online don't begin to convey it.
but seeing all the physics swept under the rug
There's lots and lots of physics. All of this discussion has already been done.
While this may be a settled point in your mind, it is not in general a settled point in the mind of your audience. Inasmuch as you're trying to convince other people of your beliefs, it's best to meet them where they are, and not ask them to suspend their sense of disbelief in directions that are more or less orthogonal to your primary argument.
MNT is not widespread in the meme pool. Inasmuch as FAI assumes or appears to rely on MNT, it will pay a fitness cost in individuals who do not subscribe to the MNT meme.
Now maybe FAI is particularly convincing to people who already have the MNT meme, and including MNT in possible FAI futures gives it a huge fitness advantage in the "already believes MNT" subpopulation. Maybe the trade-off for FAI of reduced fitness in the meme pool at large (or the computational-materials-scientist meme-pool) is worth it in exchange for increased fitness in the transhumanist meme pool. I don't know. I certainly haven't done nearly the work publicizing FAI that you have, and obviously you have some idea of what you're doing. I'm not trying to argue that it should be taken out, or never used as an example again. I will say that I hope you take this post/argument as weak counter-evidence on the effectiveness of this particular example, and update accordingly.
Eliezer linked to the Drexler book and dissertation, and he probably trusts the physics in it. If you claim that the physics of nanotech is much harder than what is described there, then you'd better engage the technical arguments in the book, one by one, and believably show where the weaknesses lie. That's how you "unsettle" the settled points. Simply offering a contradictory opinion is not going to cut it, as you are going to lose the status contest.
Eliezer linked to the Drexler book and dissertation, and he probably trusts the physics in it.
Given the unfathomably positive reception of the grandparent allow me to quote shminux's reply for support and emphasis.
The opening post took the stance "but seeing all the physics swept under the rug like that sets off every crackpot alarm I have." Eliezer provided a reference to a standard physics resource that explains the physics and provides better arguments than Eliezer could hope to supply (without an unrealistic amount of retraining and detouring from his primary objective). The response was to sweep the physics under the rug and move to "you have to meet me where I am". Unsurprisingly, this sets off every crackpot alarm I have.
If you claim that the physics of nanotech is much harder than what is described there, then you'd better engage the technical arguments in the book, one by one, and believably show where the weaknesses lie. That's how you "unsettle" the settled points. Simply offering a contradictory opinion is not going to cut it, as you are going to lose the status contest.
As an alternative to personally engaging in the technical argume...
I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last 3 days, managed to buy the book and go through the physics in detail. I hope that failure is not enough to condemn me as not acting in good faith.
Absolutely not, and I think this occasioned a useful discussion. But if you have a physics or chemistry background, I for one would greatly appreciate it if you did so (and the Smalley critique, and perhaps Locklin below) and posted your take. Also you don't need to buy the book, you should be able to get a copy at any large university library.
Richard Smalley is the canonical white-haired Nobel laureate who disagrees strongly with the idea of MNT as Drexler outlines it.
I am no expert in the relevant science, but I take the Smalley argument from authority with a grain of salt, for two reasons.
First, according to Wikipedia, Smalley was a creationist, and apparently he endorsed an Intelligent Design book, saying the following:
Evolution has just been dealt its death blow. After reading Origins of Life with my background in chemistry and physics, it is clear that biological evolution could not have occurred.
If he underestimated the abi...
I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last 3 days, managed to buy the book and go through the physics in detail. I hope that failure is not enough to condemn me as not acting in good faith.
Absolutely not, and I think this occasioned a useful discussion. But if you have a physics or chemistry background, I for one would greatly appreciate it if you did so (and the Smalley critique, and perhaps Locklin below) and posted your take. Also you don't need to buy the book, you should be able to get a copy at any large university library.
Okay. I'll try and do this. I'm mildly qualified; I'm finishing up a Ph.D. in computational materials science. It will take me a little while to make time for it, but it should be fun! Anyone else who is interested in seeing this discussion feel free to encourage me/let me know.
I would love to see a critique that started "On page W of X, Drexler proposes Y, but this won't work because Z". Smalley made up a proposal that Drexler didn't make ("fat fingers") and critiqued that. If there's a specific design in Nanosystems that won't work, that would be very informative.
This entire thread is about the PR implications. There's a reason I titled it "Is MNT putting our best foot forward" and not "Is MNT true?"
I don't care about MNT. I do care about FAI. I regret deeply that this discussion has become focused on whether or not MNT is true, which is a subject I don't really care about, and has gotten away from, "Is MNT a good way to talk about FAI" which is a subject I care a lot about.
I don't consider Drexler's work to be "massive support" for MNT. I think that MNT is controversial. I think that one shouldn't introduce controversial material in a discussion unless one absolutely has to, for some of the same reasons I think that Nixon being a Quaker and a Republican is a bad example.
I honestly wasn't sure when I posted this whether anyone else here would feel the same way about MNT being non-obvious and controversial. It does seem safe to say that if MNT is controversial on LW, which is overwhelmingly sympathetic to transhumanist ideas, then it's probably even less popular outside of explicitly transhumanist communities.
I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the same ruthless efficiency with which modern computers beat humans at chess.
Your argument is extremely human-parochial. You seem to be thinking of AIs as potential supervillains who want to "rule the world" (where ruling the world = controlling humans). If you think that an AI would care about controlling humans, you are assuming that the AI would be very human-like. In the space of possible mind designs, very few AIs care about humans as anything but raw resources.
In the space of possible mind designs, your mind (and every human mind) is an extreme specialist in manipulating humans. So of course, to you, manipulating humans seems vastly easier and more useful than building MNT or macro-sized robots, or whatever.
I'm commenting a few days after the main flurry of discussion and just wanted to raise a concern about how there seems to be a conflation in the OP and in many of the comments between (1) effective political advocacy among ignorant people who will stick with the results that fall out of the absurdity heuristic even when it gives false results and (2) truth seeking analysis based on detailed mechanistic considerations of how the world is likely to work.
Consider the 2x2 grid where, on one axis, we're working in either an epistemically unhygienic advocacy fra...
...It isn't clear to me that MNT is physically practical. I don't doubt that it can be done. I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up. Indeed, that's my day job, but I have a hard time believing the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated en
Nothing like it? Map the atoms to individual Lego pieces; their configuration relative to each other (i.e., lining up the pegs and the holes) was intended to capture the directionality of covalent bonds. We capture forces and torques well, since smaller Lego pieces tend to be easier to move but harder to separate than larger ones. The shaking represents acting on the system via some thermodynamic force. Gravity represents a tendency of things to settle into some local ground state that your shaking will have to push them away from. I think it does a pretty good job capturing some of the problems with entropy and with exerted forces producing random thermal vibrations, since those things are true at all length scales. The blindfold is because you aren't Laplace's demon, and you can't really measure individual chemical reactions while they're happening.
If anything, the Lego system has too few degrees of freedom, and doesn't capture the massiveness of the problem you're dealing with, because we can't imagine a mole of Lego pieces.
I try not to just throw out analogies willy-nilly. I really think that the problem of MNT is the problem of keeping track of an enormous number of pieces and interacti...
I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible via adding burdensome details.
This is unreasonably accusatory. I'm pretty sure MNT is added to the discussion because people here such as Eliezer and Annisimov and Vassar believe it to be both possible and a likely thing for AI to do.
Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded.
Isn't ...
Nature does nano-assembly, but it isn't arbitrary nano-assembly.
My example of a very hard nano-assembly problem is a ham sandwich, with the hardest part being the lettuce. It's possible that the easiest way to make a lettuce leaf-- they still have live cells-- is to grow a head of lettuce.
Maybe the right question (ignoring where MNT fits with AI) is to look at what parts of MNT looks feasible at present levels of knowledge.
This is unreasonably accusatory. I'm pretty sure MNT is added to the discussion because people here such as Eliezer and Annisimov and Vassar believe it to be both possible and a likely thing for AI to do.
Pointing out a possible mental bias isn't accusatory.
I read that phrase as implying MNT was consciously added to help convince others about FAI, not that it was an unconscious bias that, e.g., Eliezer had.
This is precisely what I meant. In some examples the line of reasoning "AI->MNT->we're all dead if it's not friendly" is specifically prefaced with the discussion that any detailed example is inherently less plausible, but adding the details is supposed to make it feel more believable. My whole argument is that I think this specific detail will backfire in the "making it feel more believable" department for someone who does not already believe in MNT and other transhumanist memes.
I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible
No, MNT is part of the discussion because it is taken for granted, along with cryonics, parallel quantum worlds, Dyson spheres, and various less spectacular ideas. You may want to see analogous complaints that I have previously made.
It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.
Continuous in the sense of, like, continuous energy levels? Because if so, wow.
I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like "well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity; therefore, it'll never kill us all." Even if MNT is impossible, that's still true - but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.
The main problem here is that both you and the people you're complaining about confuse early nanotechnology roaming free in the environment (maybe as capable as a living cell, and probably quite similar to cells in other ways) with a much more advanced and energy-intensive nanotechnology taking place in a controlled environment. This is further confounded by the AI likely being able to use large amounts of the quickly reproducing early nanotech, plus existing infrastructure, to construct said advanced nanotech in a few hours.
That is, if it doesn't find a way to program the nearest spinning hard drive to write down working femtotech within a millisecond.
We're pretty good at physics. The g-factor for an electron is 2.0023193043622(15). That number is predicted by theory and measured experimentally, and both give that exact same result. The parentheses around the last two digits denote that we're not totally sure those last two numbers are a one and a five, due to experimental error. There are very few other human endeavors where we have 12 or 13 decimal places' worth of accuracy. While there are still a lot of interesting consequences to work out, and people are still working on getting quantum mechanics and general relativity to talk to each other, any new quantum physics is going to have to be hiding somewhere past the 15th decimal point.
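To make the "(15)" notation concrete, here's a quick back-of-the-envelope calculation of the relative uncertainty it implies (illustrative only; the recommended value gets refined over time):

```python
# The "(15)" in 2.0023193043622(15) is the one-sigma uncertainty
# in the final two quoted digits, i.e. 0.0000000000015.
g = 2.0023193043622
sigma = 15e-13

relative_uncertainty = sigma / g
print(f"{relative_uncertainty:.1e}")  # on the order of 1e-13, i.e. ~12-13 digits of accuracy
```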
Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It's not really clear to me why. In many of the examples of "How could AIs help us" or "How could AIs rise to power", phrases like "cracks protein folding" or "making a block of diamond is just as easy as making a block of coal" are thrown about in ways that make me very, very uncomfortable. Maybe it's all true, maybe I'm just late to the transhumanist party and the obviousness of this information was with my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.
I must post the disclaimer that I have done a little bit of materials science, so maybe I'm just annoyed that you're making me obsolete, but I don't see why this particular possible future gets so much attention. Let us assume that a smarter-than-human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it's still not clear to me that MNT is a likely element of the future. It isn't clear to me that MNT is physically practical. I don't doubt that it can be done. I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up. Indeed, that's my day job, but I have a hard time believing the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very, very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it's not at all clear to me that it's even possible.
I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible via adding burdensome details. I understand that AI and MNT together are less probable than AI or MNT alone, but that both together are supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human- or superhuman-level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, especially including it without addressing any of the fundamental difficulties of MNT, I would argue, harms the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.
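The conjunction point is just probability arithmetic: for any claims A and B, P(A and B) = P(A) * P(B|A) <= P(A), no matter how plausible the combination feels. A toy illustration, with numbers that are made up purely for demonstration:

```python
# Toy numbers (not estimates of anything): the conjunction of two
# uncertain claims can never be more probable than either claim alone.
p_ai = 0.5            # hypothetical P(smarter-than-human AI)
p_mnt_given_ai = 0.3  # hypothetical P(MNT | AI)

p_both = p_ai * p_mnt_given_ai
print(p_both)  # 0.15, lower than either 0.5 or 0.3
```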
I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the same ruthless efficiency with which modern computers beat humans at chess. I don't think convincing people that smarter-than-human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter-than-human AIs are possible. I do think that waving your hands and saying "super-intelligence" at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer->nanobots before I had built up a store of good will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
Put in LW parlance, suggesting things not known to be possible by modern physics, without detailed explanations, puts you in the reference class "people on the internet who have their own ideas about physics". It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.
And maybe it's just me. Maybe this didn't bother anyone else, and it's an incredible shortcut for getting people to realize just how different a future a greater-than-human intelligence makes possible, and there is no better example. It does alarm me, though, because I think that physicists, and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations, may be exactly the kind of people FAI is trying to attract.