Modest Superintelligences

by Wei_Dai · 1 min read · 22nd Mar 2012 · 100 comments


Mind Uploading · Futurism · General Intelligence
Personal Blog

I'm skeptical about trying to build FAI, but not about trying to influence the Singularity in a positive direction. Some people may be skeptical even of the latter because they don't think the possibility of an intelligence explosion is very likely. I suggest that even if intelligence explosion turns out to be impossible, we can still reach a positive Singularity by building what I'll call "modest superintelligences", that is, superintelligent entities capable of taking over the universe and preventing existential risks and Malthusian outcomes, whose construction does not require fast recursive self-improvement or other questionable assumptions about the nature of intelligence. This helps to establish a lower bound on the benefits of an organization that aims to strategically influence the outcome of the Singularity.

  • MSI-1: 10^5 biologically cloned humans of von Neumann-level intelligence, highly educated and indoctrinated from birth to work collaboratively towards some goal, such as building MSI-2 (or equivalent)
  • MSI-2: 10^10 whole brain emulations of von Neumann, each running at ten times human speed, with WBE-enabled institutional controls that increase group coherence/rationality (or equivalent)
  • MSI-3: 10^20 copies of the von Neumann WBE, each running at a thousand times human speed, with more advanced (to be invented) institutional controls and collaboration tools (or equivalent)

(To recall what the actual von Neumann, whom we might call MSI-0, accomplished, open his Wikipedia page and scroll through the "known for" sidebar.)

Building an MSI-1 seems to require a total cost on the order of $100 billion (assuming $10 million for each clone), which is comparable to the Apollo project and about 0.25% of the annual Gross World Product. (For further comparison, note that Apple has a market capitalization of $561 billion and an annual profit of $25 billion.) In exchange for that cost, any nation that undertakes the project has a reasonable chance of obtaining an insurmountable lead in whatever technologies end up driving the Singularity, and with that a large measure of control over its outcome. If no better strategic options come along, lobbying a government to build MSI-1 and/or influencing its design and aims seems to be the least that a Singularitarian organization could do.



highly educated and indoctrinated from birth to work collaboratively towards some goal

Doing this very reliably seems more fantastical than the intelligence enhancement part.

Wei_Dai (9y): Do we need "very reliably"? If not, feeding them Eliezer's Sequences at a young age might work well enough.
John_Maxwell (9y): I'm inclined to think not. IMHO it's better to work on an easy plan that probably won't backfire than a very hard plan that almost certainly won't backfire given dangers outside of the planners' control... Beware the young male rebel effect...

Where do you get your numbers from? Why aren't [big number] of educated people a superintelligence now? If it's due to coordination problems, then you are sweeping the complexity of solving such problems under the rug.

bogdanb (9y): Let's think of examples of groups of ten thousand genius-level people working together towards a common, narrowly defined goal. Wikipedia claims that the LHC "was built in collaboration with over 10,000 scientists and engineers from over 100 countries, as well as hundreds of universities and laboratories". I doubt they were all von Neumann level, and I imagine most of them weren't working exclusively on the LHC. And no matter how nice scientists and engineers are, the group probably didn't cooperate as well as the one Wei proposed. (Although diversity probably does count for something.) Other groups of similar size I can think of are NASA, IBM, Google and Microsoft. (Though, like the LHC, I don't think they're hiring only von Neumann-level geniuses. Probably many multinational companies would exceed the size but would be even further from the genius requirements.) But they don't quite work in a single direction (NASA has many missions, Google and Microsoft have many products). That said, I wouldn't object strongly to calling such groups weakly superintelligent. Building stuff like the LHC or the Apollo program in ten years is so vastly beyond the ability of a single man that I don't quite classify an entity that can do it as a "human-level intelligence", even though it is assembled from humans. (Also, I could see a group like this building an MSI-2, though it'd take more than ten years if starting now.)
adamisom (9y): The "working in a single direction" part seems hard: are you so single-minded? I know I'm not.
bogdanb (9y): No, I'm not, but I'm not nearly von Neumann's genius level either, and I wasn't educated and indoctrinated from birth for that purpose. And there certainly are people who are that single-minded; we'll "just" have to figure out which parts of nature or nurture cause it. Even drugs slightly more advanced than the stuff used now for ADHD might be useful. Even with "normal" geniuses, I'd bet a group would gain a lot of focus even from "mundane" changes like not having to worry about office politics, finding grants, or assembling your children's college fund, or, for younger geniuses, going to college and finding a promising career. I'm not saying these kinds of changes are easier to achieve in practice, just that they're very low-tech; you don't need lots of research to try them, just a big (monetary and political) budget.
jmmcd (9y): All these can be viewed as wasting time, but don't forget that they are important parts of the motivation framework -- promotion, recognition, monetary reward, etc. -- that people operate in. Motivation is an important factor in productivity. If we remove (e.g.) the competitive grant system, will researchers slack off?
bogdanb (9y): I'll bet some would and some wouldn't. See Einstein and how he was working on relativity (at least at the beginning). If this trait is genetically detectable, it would presumably be a selection criterion for MSI-1. If it is based on nurture, presumably the necessary conditions would be part of the indoctrination for MSI-1. Finally, if it cannot be anticipated, presumably MSI-1 would use post-facto selection (i.e., raise and train more than 10k candidates, keep those that "work" and let the others do other stuff). Also, there are likely other motivational elements that would work in an MSI-1 (at least in my view, the selection and training and indoctrination implicit in the OP's suggestion would be very different from any group I am aware of in history). And stuff like peer recognition and the satisfaction of a job well done are huge motivators in some cultures. Also, remember we're seeing this through culture-tinted glasses: in the West pretty much everyone is focused on a career, family and the like; the few who aren't are seen as "slackers", "hippies", "weirdos", etc. Even if not subscribing to that cultural subtext rationally, it's hard to prevent the unconscious association "no care for money/status" => "no motivation".
adamisom (9y): Moreover, and this was part of my idea, I think there may be something to the idea behind structured procrastination (.com). Which is to say I don't really know. What I do know is that I'm not very single-minded, and there is evidence it is not a common trait.
Wei_Dai (9y): Part of the reason is due to coordination problems, which I think would be reduced if the group consisted of clones of a single person with similar education and upbringing, and hence similar values/goals. Another part of the reason is that we simply don't have that many von Neumanns today. The [big number] of educated people that you see in the world consist almost entirely of people who are much less intelligent than von Neumann.

Not only are there more people today than in von Neumann's time, but it is far easier to be discovered or to educate yourself. The general prosperity level of the world is also far higher. As a result, I expect, purely on statistical grounds, that there would be far more von Neumann level people today than in von Neumann's time. I certainly don't see a shortage of brilliant people in academia, for instance.

What is a test for a von Neumann level intelligence? Do you think "top people" in technical fields today would fail?

Wei_Dai (9y): My intuition says that if we took the 10000 most intelligent people in the world, put them together and told them to work on some technical project, that would be much less effective than if we could make 10000 copies of the most intelligent person, in part because the 10000th most intelligent person is much less productive than the 1st. As evidence for this, I note that there are very few people whose "known for" list on Wikipedia is nearly as long as von Neumann's, and you'd expect more such people if the productivity difference between the 1st and the 10000th weren't very large. But if it turns out that I'm wrong, and it's not worth doing the cloning step, then I'd be happy with an "MSI-0.9" that just gathers 10000 top people and sets them to work on MSI-2 (or whatever technologies appear most important to getting a positive Singularity).
John_Maxwell (9y): http://en.wikipedia.org/wiki/List_of_things_named_after_Leonhard_Euler "Mathematical historian Eric Temple Bell estimated that, had Gauss published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years"; http://en.wikipedia.org/wiki/List_of_things_named_after_Carl_Friedrich_Gauss http://en.wikipedia.org/wiki/Category:Lists_of_things_named_after_mathematicians (This isn't to contradict your point, just to provide relevant evidence.)
IlyaShpitser (9y): I agree that von Neumann was exceptional. I am not sure a Wikipedia rap sheet is as good a proxy for genius as you claim. I think genius is necessary but not sufficient. I also think "recreating von Neumann" will require context not present in his DNA. There are also issues with parallelizing intellectual work, detailed in "The Mythical Man-Month", which I am sure you are aware of. At any rate, instead of trying for MSI-1, which has huge technical obstacles to overcome, why not simply push to acquire financial resources and hire brilliant people to do the work you think is necessary? That is doable with today's tech, and today's people. [Comment from the heart, rather than from the head: your description of MSI-1 sounds kind of, well, totalitarian. Don't you think that's a little peculiar?]
Wei_Dai (9y): The point is to obtain an insurmountable lead on WBE tech, otherwise you'll just spur competition and probably end up with Robin Hanson's Malthusian scenario. (If intelligence explosion were possible, you could win the WBE race by a small margin and translate that into a big win, but for this post I'm assuming that intelligence explosion isn't possible, so you need to win the race by a large margin.) In that case you're in for a surprise when you find out what I was referring to by "WBE-enabled institutional controls" for MSI-2. Read Carl Shulman's Whole Brain Emulation and the Evolution of Superorganisms: http://lesswrong.com/lw/5jb/link_whole_brain_emulation_and_the_evolution_of/
Vladimir_Nesov (9y): Since exploiting intelligence explosion still requires FAI, and FAI could be very difficult, you might still need a large enough margin to perform all the necessary FAI research before your competition stumbles on an AGI.
John_Maxwell (9y): I thought of an interesting objection to this. What if the cloned agents decided that the gap between themselves and other humans was sufficiently well-defined for them to implement the coherent extrapolated volition of the clones themselves only? http://lesswrong.com/lw/932/stupid_questions_open_thread/64r4 Of course, this problem could potentially arise even if the gap was poorly defined...
Jayson_Virissimo (9y): That isn't necessarily an objection. Personally, I'm unsure if I would prefer human-CEV to Johnny-CEV.
Alex_Altair (9y): Agreed. I don't know much about von Neumann, but I would trust Feynman with my CEV any day.
faul_sname (9y): They are. Many collections of individuals (e.g. tech companies, hedge funds, PACs, etc.) seem to do rather a lot more than an individual human could. Likewise, humanity as a whole could be classified as a superintelligence (and possibly a recursively self-improving one: see the Flynn effect, http://en.wikipedia.org/wiki/Flynn_effect). The idea is not that large numbers of intelligent people aren't a superintelligence, it's that 10000 von Neumanns would be a more powerful superintelligence than most groups of highly intelligent people.
Grognor (9y): Downvoted for using terms imprecisely; see The Virtue of Narrowness (http://lesswrong.com/lw/ic/the_virtue_of_narrowness/). Superintelligences are not "any powerful entity"; humanity is not "recursively self-improving". This conversation was over some time in 2009, when Eliezer finally got Tim Tyler to stop applying those terms to things that already exist, as though that meant anything.
faul_sname (9y): Insofar as I have seen it defined here, an intelligence is that which produces optimization given a certain amount of resources, and higher intelligences exert more optimization power than lower intelligences given the same starting conditions. Many organizations, especially tech companies, do rather a lot of optimizing given their resources. Apple, a company of 60000 employees, made profits of $30 billion last year. Apple, effectively a profit maximizer, is doing rather more than 60000 independent individuals would (they're making $500000/employee/year in profits). Considering that they are doing a lot of optimization given their starting conditions, I would say that they are at least a weakly superhuman intelligence. Humanity is working to improve its own intelligence, and succeeding. So we have the "self-improving" right there. As we get smarter/more able, we are finding new and interesting ways to improve. Hence, "recursively". Evidently, "self-improving in such a way that the entity can find new ways to self-improve" isn't "recursive self-improvement". I really don't know what the term would mean, and would appreciate it if someone would enlighten me.
Nominull (9y): It is possible for the Wise Master to be mistaken, you know. He doesn't articulate in that article his reasons for drawing lines where he does; he just says "don't get me started". That makes it not a great article to cite in support of those lines, since it means you are basically just appealing to his authority rather than referencing his arguments.

What is the current bottleneck on MSI-1? Are we better off raiding von Neumann's corpse, extracting the DNA and then implanting all the embryos we can make? Or are we better off with the current strategies of sequencing intelligent people to uncover the genetics of intelligence, which would then allow embryo selection or engineering? With the latter, the main bottleneck seems to be the cost of sequencing (since one needs a lot of genomes to discern the signal through all the noise), but that cost is being pushed down by the free market at a breathtaking pace - and indeed, the Beijing Genomics Institute (see Hsu, IIRC) is already working hard on the task of sequencing smart kids.

What is the current bottleneck on MSI-1? Are we better off raiding von Neumann's corpse, extracting the DNA and then implanting all the embryos we can make?

We can't clone humans at the moment. Even attempts to derive human stem cell lines from cloning have been disappointing, and reproductive cloning would face much higher barriers. Even if it could be made to work Dolly-style, you would still be producing huge numbers of miscarriages, early deaths, and damaged offspring for each success. That would not only increase the economic cost, but be incredibly unattractive for parents and a PR nightmare.

Or are we better off with the current strategies of sequencing intelligent people to uncover the genetics of intelligence, which would then allow embryo selection or engineering?

We can do embryo selection, but the relevant alleles would need to be identified in large studies (with the effectiveness of selection scaling with the portion of variation explained). The BGI study may expose a number of candidates, but I would expect the majority to be captured through linking genetic data collected for other reasons (or as part of comprehensive biobanks) to be matched to military or education... (read more)

gwern (9y): Yes, this is what I was alluding to: of the 2 obvious routes to reproducing von Neumann levels of intelligence by playing god with genetics, the first one, the one the OP seems to be suggesting, is abhorrent and troublesome. The second one seems straightforward and supported by the current state of understanding - but doesn't require the lobbying etc. (as the OP proposed), as it's effectively already being done.
Jayson_Virissimo (9y): BTW, does anyone know of the...status...of said corpse? 'Tis but a purely academic curiosity, I assure you.
pedanterrific (9y): Princeton (http://www.findagrave.com/cgi-bin/fg.cgi?page=gr&GRid=7333144).

If intelligence is 50% genetic, and von Neumann was 1 in a billion, the clones will be 1 in 500. Regression to the mean.

Wei_Dai (2y): That's a really interesting point. I'm not sure how to do the math myself, so I wonder if anyone can help verify this. Also, is this assuming a population-average upbringing/education? What if we give the clones the best upbringing and education that money can buy? (I assumed a budget of $10 million per clone in the OP.)
paulfchristiano (2y): For a normally distributed property, 1/billion is +6 sigma, while +3 sigma is 1/750. If a property is normally distributed, the clones share 50% of the variation, and von Neumann is 1 in a billion, then I think it's right that our median guess for the median clone should be the 1 in 750 level. (But of your 100,000 clones several of them will be at the one in a trillion level, a hundred will be more extreme than von Neumann, and >20,000 of them will be one in 20,000. I'm generally not sure what you are supposed to infer from the "one in X" metric. [Edited to add: all of those are the fractions in expectation, and they are significant underestimates because they ignore the uncertainty in the genetic component.])
Unnamed (2y): The component should have a smaller standard deviation, though. If A and B each have stdev=1 & are independent, then A+B has stdev=sqrt(2). I think that means that we'd expect someone who is +6 sigma on A+B to be about +3*sqrt(2) sigma on A in the median case. That's +4.24 sigma, or 1 in 90,000.
paulfchristiano (2y): They are +4.2SD on the genetic component of the property (= 1 in 90,000), but the median person with those genetics is still only +3SD on the overall property (= 1 in 750), right? (That is, the expected boost from the abnormally extreme genetics should be the same as the expected boost from the abnormally extreme environment, if the two are equally important. So each of them should be half of the total effect, i.e. 3SD on the overall trait.)
Unnamed (2y): Oh, you're right. With A & B iid normal variables, if you take someone who is 1 in a billion at A+B, then in the median case they will be 1 in 90,000 at A. Then if you take someone who is 1 in 90,000 at A and give them the median level of B, they will be 1 in 750 at A+B. (You can get to rarer levels by reintroducing some of the variation rather than taking the median case twice.)
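This two-step calculation is easy to check numerically. Below is a minimal sketch under the thread's assumptions: the trait is T = A + B, with A (genes) and B (environment) iid standard normal, conditioning on T at the 1-in-a-billion level. It uses only Python's stdlib `statistics.NormalDist`; the variable names are mine, not from the thread.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

# von Neumann's trait level: 1 in a billion on T = A + B, where Var(T) = 2
z_T = N.inv_cdf(1 - 1e-9)       # about +6 sigma in unit-normal terms
t_raw = z_T * 2 ** 0.5          # raw trait value (T has stdev sqrt(2))

# Median genetic component given T: E[A | T = t] = t * Var(A)/Var(T) = t/2
a_raw = t_raw / 2
genes_rarity = 1 / (1 - N.cdf(a_raw))        # A has stdev 1: about 1 in 90,000

# A clone keeps A but draws a median environment (B = 0),
# so its median trait value is a_raw, i.e. a_raw / sqrt(2) sigma on T
clone_sigma = a_raw / 2 ** 0.5               # about +3 sigma
clone_rarity = 1 / (1 - N.cdf(clone_sigma))  # about 1 in 750

print(z_T, genes_rarity, clone_sigma, clone_rarity)
```

This reproduces the thread's numbers: the median clone sits near +3 sigma on the overall trait (roughly the 1-in-750 level), with a genetic component near the 1-in-90,000 level.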
Unnamed (2y): 500 seems too small. If someone is 1 in 30,000 on A and 1 in 30,000 on B, then about 1 in a billion will be at least as extreme as them on both A and B. That's not exactly the number we're looking for, but it seems like it should give the right order of magnitude (30,000 rather than 500). And it seems like the answer we're looking for should be larger than 30,000, since the people more extreme than them on A+B include everyone who is more extreme than them on both A and B, plus some people who are more extreme on only either A or B. That would make extreme scores on A+B more common, so we need a larger number than 30,000 to keep it as rare as 1 in a billion.
riceissa (2y): I might be totally mistaken here, but the calculation done by Donald Hobson and Paul seems to assume von Neumann's genes are sampled randomly from a population with mean IQ 100. But given that von Neumann was Jewish (and possibly came from a family of particularly smart Hungarian Jews; I haven't looked into this), we should be assuming that the genetic component is sampled from a distribution with a higher mean IQ. Using the breeder's equation (https://jaymans.wordpress.com/2015/10/21/regression-to-the-mean/) with a higher family mean IQ gives a more optimistic estimate for the clones' IQ.
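The breeder's-equation point can be illustrated with a toy calculation. The IQ figures below are purely hypothetical placeholders (not estimates of von Neumann's IQ or of any group's actual mean), chosen only to show how a higher reference mean reduces regression:

```python
def expected_clone_iq(parent_iq: float, mean_iq: float, h2: float) -> float:
    """Breeder's-equation sketch: the clone's expected IQ regresses toward
    the relevant population mean by a factor of the heritability h2."""
    return mean_iq + h2 * (parent_iq - mean_iq)

# Hypothetical numbers: a parent at IQ 190 and h2 = 0.5
print(expected_clone_iq(190, 100, 0.5))  # -> 145.0, regressing toward the global mean
print(expected_clone_iq(190, 110, 0.5))  # -> 150.0, less regression toward a higher subgroup mean
```

The design choice here is just to make the reference mean an explicit parameter, which is exactly the variable riceissa's comment says the earlier calculation fixed at 100.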
Wei_Dai (2y): I should have questioned this as well. According to Wikipedia (https://en.wikipedia.org/wiki/Heritability_of_IQ):
habryka (2y): Heritability != genetic components!
Wei_Dai (2y): I think Donald Hobson was probably using "50% genetic" to mean "50% heritable", otherwise I don't see what "50% genetic" could mean. If I'm being confused here, can you please explain more?
habryka (2y): Most estimates for heritability would still be significant even in a genetically identical population (since cultural factors are heritable due to shared family environments). You can try to control for this with twin adoption studies, which control for shared family environment but still leave a lot of other aspects of the environment the same. You could also adjust for all kinds of other things and so get closer to something like the "real effect of genes". I am not fully sure what Donald Hobson meant by "effect of genes", but more generally heritability is an upper bound on the effect of genes on individuals, and we should expect the real effect to be lower (how much lower is a debate with lots of complicated philosophical arguments and people being confused about how causality works). From Wikipedia:

I'm skeptical about trying to build FAI

Can you expand your reasons?

Here are some posts/threads where I talk about my reasons: 1 2 3.

While the benefits are clear, it is not so clear that the project would in fact outrun the pace of progress as usual.

Cloning: It is unclear to what extent truly exceptional ability is a result of just being lucky that the random parts of the development process resulted in the right kind of circuitry. I'm not even talking about nature vs nurture. Those clones won't have the same fingerprints, won't have the same minor blood vessel patterns, etc., even if the wombs were exactly identical, as long as the thermal noise differs. See also: http://www.ncbi.nlm.nih.gov/... (read more)

CarlShulman (9y): In twin studies this is partitioned out as "non-shared environment." In the central range it explains less variation than genetics, but more than household-level environment.
[anonymous] (6y):

Do we have reason to believe the average research engineer of the period couldn't do what von Neumann did, given the same materials and information?

How does more computational power help one become more rational? Will this not simply increase the number of irrational decisions made within the group?

There are two kinds of people in the world: Johnny von Neumann and the rest of us.

-Eugene Wigner

If we did reach MSI-3, then the second conjunct of this statement would become redundant.