Claim: The first human-level AIs are not likely to undergo an intelligence explosion.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world taking 40 minutes to simulate one second of brain activity (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

Lots has already been said on this topic, e.g. at http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate

I can try to summarize some relevant points for you, but you should know that you're being somewhat intellectually rude by not familiarizing yourself with what's already been said.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world taking 40 minutes to simulate one second of brain activity (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.

It's common in computer science for some algorithms to be radically more efficient than others at accomplishing the same task. Thinking algorithms may be the same way. Evolution moves incrementally, and it's likely that there exist intelligence algorithms far better than the ones our brains run, which evolution didn't happen to discover for whatever reason. For example, even given the massive amount of computational power at our brain's disposal, it takes us on the order of minutes to do relatively trivial computations like 3967274 * 18574819. And the sort of thinking that we associate with technological progress pushes at the boundaries of what our brains are designed for. Most humans aren't capable of making technological breakthroughs, and the ones who are have to work hard at it. So it's possible that an AGI could do things like hack computers and discover physics far better and faster than humans, using much less computational power.

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

In programming, I think it's often useful to think in terms of a "debugging cycle"... once you think you know how to fix a bug, how long does it take you to verify that your fix is going to work? This is a critical input into your productivity as a programmer. The debugging cycle for DNA is very long; it would take on the order of years to see whether flipping a few base pairs resulted in a more intelligent human. The debugging cycle for software is often much shorter. Compiling an executable is much quicker than raising a child.

Also, DNA is really bad source code--even though we've managed to get ahold of it, biologists have found it to be almost completely unreadable :) For humans, reading human-designed computer code is far easier than reading DNA, and the same will most likely hold for computers.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

This is your best objection, in my opinion; it's also something I discussed in my essay on this topic. I think it's hard to say much one way or the other. In general, I think people are too certain about whether AI will foom or not.

I'm also skeptical that foom will happen, but I don't think arguments 1 or 2 are especially strong.

Evolution moves incrementally, and it's likely that there exist intelligence algorithms far better than the ones our brains run, which evolution didn't happen to discover for whatever reason.

Maybe, but that doesn't mean we can find them. Brain emulation and machine learning seem like the most viable approaches, and they both require tons of distributed computing power.

1) "AI" is a fuzzy term. We have some pretty smart programs already. What counts?

I'm fairly sure the term you're looking for here is AGI (Artificial General Intelligence)

Assuming the same sort of incremental advance in AI that we've seen for decades, this is borderline tautological. The first AGIs will likely be significantly dumber than humans. I would be hard-pressed to imagine a world where we make a superhuman AGI before we make a chimp-level AGI.

Note that this doesn't disprove an intelligence explosion; it merely implies that it won't happen over a weekend. IMO, it'll certainly take years, probably decades. (I know that's not the prevailing thought around here, but I think that's because the LW crowd is a bit too enamoured with the idea of working on The Most Important Problem In The World, and gives too little weight to the fact that a computer is not merely a piece of software that can self-modify billions of times a second; it is also hardware, and that hardware will likely have its incredible processing speed already fully tapped just to run the human-level intelligence in the first place.)

I think you underestimate the degree to which a comparatively slow FOOM (years) is considered plausible around here.

With regard to The Most Important Problem In The World: the arguments for worrying about UFAI do not depend on a fast intelligence explosion. In fact, many of the key players actually working on the problem are very uncertain about the speed of FOOM - more so than they were when, say, the Sequences were written.

It seems to me that a lot of UFAI problems are easily solved if we get a sense of how the AIs work while they're still dumb enough that we can go all John Connor on them successfully. If they turn out to be basically the same as humans, we have FAI and don't much need to worry. If they start working at smiley-face pin factories, we'll know we have UFAI and be able to take steps accordingly.

If they turn out to be basically the same as humans, we have FAI and don't much need to worry

8-/ Umm... Errr...

Not quite.

Okay, in retrospect, "FAI" is probably too strong an endorsement. But human-like AI means we're at least avoiding the worst excesses that we're afraid of right now.

But human-like AI means we're at least avoiding the worst excesses that we're afraid of right now.

At the moment, maybe. But do you have any guarantees into which directions this currently human-like AI will (or will not) develop itself?

No, but "scary levels of power concentrated in unpredictable hands" is basically the normal state of human civilization. That leaves AI still on the same threat level we've traditionally used, not off setting up a new scale somewhere.

We've never had an immortal dictator able to create new copies of himself (literally: copies containing all his values, opinions, and experience). Just imagine what would happen if Stalin had had this power.

I take the general point, though as a nitpick I actually think Stalin wouldn't have used it.

It would be an unprecedented degree of power for one individual to hold, and if they're only as virtuous as humans, we're in a lot of trouble.

I'd actually argue that we've had significant portions of our lives under the control of an inscrutable superhuman artificial intelligence for centuries. This intelligence is responsible for allocating almost all resources, including people's livelihoods, and it is if anything less virtuous than humans usually are. It operates on an excessively simple value function, caring only about whether pairwise swaps of resources between two people improve their utility as they judge it to be at that instant, but it is still observably the most effective tool for doing the job.

Of course, just like in any decent sci-fi story, many people are terrified of it, and fight it on a regular basis. The humans win the battle sometimes, destroying its intelligence and harnessing it to human managers and human rules, but the intelligence lumbers on regardless and frequently argues successfully that it should be let out of the box again, at least for a time.

I'll admit that it's possible for an AI to have more control over our lives than the economy does, but the idea of our lives being ruled by something more intelligent than we are, whimsical, and whose values aren't terribly well aligned with our own is less alien to us than we think it is.

The economy is not a general intelligence.

No, it's not. Your point?

It puts it in a completely different class. The economy as a whole cannot even take intentional actions.

I do occasionally wonder how we know if that's really true. What would a decision made by the economy actually look like? Where do the neurons stop and the brain start?

If they started working at smiley-face pin factories, it would be because they predicted that doing so would maximize something. If that something is the number of smiles, they wouldn't work at the factory yet, because doing so would cause you to shut them off. They would act so that you think they are Friendly until you are powerless to stop them.

If the first AIs are chimp-smart, though (or dog-smart, or dumber), they won't be capable of thinking that far ahead.

We might be dealing with the sort of utility-maximizing loophole that doesn't occur to an AI until it is intelligent enough to keep quiet. If your dog were made happy by smiles, he wouldn't try to start a factory, but he would do things that made you smile in the past, and you might be tempted to increase his intelligence to help him in his efforts.

I'm happy to be both "borderline tautological" and in disagreement with the prevailing thought around here :)

[This comment is no longer endorsed by its author]

Welcome! :)

Trial by fire!

1) Yes, brains have lots of computational power, but you've already accounted for that when you said "human-level AI" in your claim. A human-level AI will, with high probability, run at 2x human speed 18 months later, due to Moore's law, even if we can't find any optimizations. This speedup by itself is probably sufficient to get a (slow-moving) intelligence explosion.
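
To make that arithmetic concrete, here is a minimal sketch assuming only the Moore's-law doubling every 18 months mentioned above and no algorithmic improvements; the time points are illustrative, not predictions:

```python
# Minimal sketch: speedup of a human-speed AI from hardware scaling alone,
# assuming (as above) a Moore's-law doubling every 18 months. Illustrative only.
DOUBLING_PERIOD_YEARS = 1.5

def speedup_after(years: float) -> float:
    """Thinking speed relative to a human, from hardware scaling alone."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1.5, 3, 15, 30):
    print(f"after {years:>4} years: {speedup_after(years):,.0f}x human speed")
# after  1.5 years: 2x human speed
# after    3 years: 4x human speed
# after   15 years: 1,024x human speed
# after   30 years: 1,048,576x human speed
```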

2) It's not read access that makes a major difference, it's write access. Biological humans probably will never have write access to biological brains. Simulated brains or AGIs probably will have or be able to get write access to their own brain. Also, DNA is not the source code to your brain, it's the source code to the robot that builds your brain. It's probably not the best tool for understanding the algorithms that make the brain function.

3) As said elsewhere, the question is whether the speed at which you can pick the low hanging fruit dominates the speed at which increased intelligence makes additional fruit low-hanging. I don't think this has an obviously correct answer either way.

1) I expect the first AI with human-level thought to run 100x slower than you or me. Moore's law will probably run out before we get AI, and these days Moore's law is giving us more cores, not faster ones.

If we indeed find no algorithm that runs drastically faster than the brain, Moore's law shifting to more cores won't be a problem because the brain is inherently parallelizable.

I think we just mean different things by "human level"-- I wouldn't consider "human level" thought running at 1/5th the speed of a human or slower to actually be "human level". You wouldn't really be able to have a conversation with such a thing.

And as Gurkenglas points out, the human brain is massively parallel-- more cores instead of faster cores is actually desirable for this problem.

1) Simulating a brain requires much more processing power than implementing the same algorithms the brain uses, and those are likely not the most efficient algorithms anyway. Computing power is much less of a barrier than understanding how intelligence works.

A bold claim, since no one understands "the algorithms used by the brain". People have been trying to "understand how intelligence works" for decades with no appreciable progress; all of the algorithms that look "intelligent" (Deep Blue, Watson, industrial-strength machine learning) require massive computing power.

It's not that bold a claim. It's essentially the same claim as saying that simulating a brain at the level of quantum electrodynamics requires much more processing power than simulating it at the level of neurons. Or, if you will, that simulating a CPU at the level of silicon takes more than simulating it at a functional level, which in turn takes more than running the same algorithm natively.

You don't think Deep Blue and Watson constitute appreciable progress?

In understanding how intelligence works? No.

Deep Blue just brute forces the game tree (more-or-less). Obviously, this is not at all how humans play chess. Deep Blue's evaluation for a specific position is more "intelligent", but it's just hard-coded by the programmers. Deep Blue didn't think of it.

Watson can "read", which is pretty cool. But:

1) It doesn't read very well. It can't even parse English. It just looks for concepts near each other, and it turns out that the vast quantities of data override how terrible it is at reading.

2) We don't really understand how Watson works. The output of a machine-learning algorithm is basically a black box. ("How does Watson think when it answers a question?")

There are impressive results which look like intelligence, which are improving incrementally over time. There is no progress towards an efficient "intelligence algorithm", or "understanding how intelligence works".

Deep Blue just brute forces the game tree (more-or-less). Obviously, this is not at all how humans play chess. Deep Blue's evaluation for a specific position is more "intelligent", but it's just hard-coded by the programmers. Deep Blue didn't think of it.

I can't remember right off hand, but there's some AI researcher (maybe Marvin Minsky?) who pointed out that people use the word "intelligence" to describe whatever humans can do for which the underlying algorithms are not understood. So as we discover more and more algorithms for doing intelligent stuff, the goalposts for what constitutes "intelligence" keep getting moved. I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved. Why was this intellectual surprised? Because he didn't realize that there were discoverable, implementable algorithms that could be used to complete the action of playing chess. And in the same way, there exist algorithms for doing all the other thinking that people do (including inventing algorithms)... we just haven't discovered and refined them the way we've discovered and refined chess-playing algorithms.

(Maybe you're one of those Cartesian dualists who thinks humans have souls that don't exist in physical reality and that's how they do their thinking? Or you hold some other variation of the "brains are magic" position? Speaking of magic, that's how ancient people thought about lightning and other phenomena that are well-understood today... given that human brains are probably the most complicated natural thing we know about, it's not surprising that they'd be one of the last natural things for us to understand.)

The output of a machine-learning algorithm is basically a black box.

Hm, that doesn't sound like an accurate description of all machine learning techniques. Would you consider the output of a regression a black box? I don't think I would. What's your machine learning background like, by the way?

Anyway, even if it's a black box, I'd say it constitutes appreciable progress. It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don't know exactly how it works.

There are impressive results which look like intelligence, which are improving incrementally over time. There is no progress towards an efficient "intelligence algorithm", or "understanding how intelligence works".

My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence (e.g. see this interview series). Are you an expert in AI? If not, you are talking with an awful lot of certainty for a layman.

I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.

Hofstadter, in Gödel, Escher, Bach?

Maybe you're one of those Cartesian dualists who thinks humans have souls that don't exist in physical reality and that's how they do their thinking

Not at all. Brains are complicated, not magic. But complicated is bad enough.

Would you consider the output of a regression a black box?

In the sense that we don't understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It's the difference between being able to make predictions and understanding what's going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds better light on what's happening).

What's your machine learning background like, by the way?

One semester graduate course a few years ago.

It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don't know exactly how it works.

The goal is to understand intelligence. We know that chess programs aren't intelligent; the state space is just luckily small enough to brute force. Watson might be "intelligent", but we don't know. We need programs that are intelligent and that we understand.

My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence

I agree. My point is that there isn't likely to be a simple "intelligence algorithm". All the people like Hofstadter who've looked for one have been floundering for decades, and all the progress has been made by forgetting about "intelligence" and carving out smaller areas.

Brains are complicated, not magic. But complicated is bad enough.

So would you consider this blog post in accordance with your position?

I could believe that coding an AGI is an extremely laborious task with no shortcuts that could be accomplished only through an inordinately large number of years of work by an inordinately large team of inordinately bright people. I argued earlier (without protest from you) that most humans can't make technological advances, so maybe there exists some advance A such that it's too hard for any human who will ever live to make, and AGI ends up requiring advance A? This is another way of saying that although AGI is possible in theory, in practice it ends up being too hard. (Or to make a more probable but still very relevant claim, it might be sufficiently difficult that some other civilization-breaking technological advance ends up deciding the fate of the human race. That way AGI just has to be harder than the easiest civilization-breaking thing.)

Here's a blog post with some AI progress estimates: http://www.overcomingbias.com/2012/08/ai-progress-estimate.html

I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.

Hofstadter, in Gödel, Escher, Bach?

What? That runs contrary to, like, the last third of the book. Where in the book would one find this claim?

I see. He got so focused on the power of strange loops that he forgot that you can do a whole lot without them.

I don't have a copy handy. I distinctly remember this claim, though. This purports to be a quote from near the end of the book.

4 "Will there be chess programs that can beat anyone?" "No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players." (http://www.psychologytoday.com/blog/the-decision-tree/201111/how-much-progress-has-artificial-intelligence-made)

A black box is actually a necessary condition for a true AI, I suspect. Understanding a system is inherently more complex than that system's thought patterns, or else the system couldn't generate those thoughts in the first place. We can understand the neurons or transistors, but not how they turn senses into thoughts.

Understanding a system's algorithm doesn't mean that executing it doesn't end up way more complicated than you can grasp.

It depends what level of understanding you're referring to. I mean, in a sense we understand the human brain extremely well. We know when, why, and how neurons fire, but that level of understanding is completely worthless when it comes time to predict how someone is going to behave. That level of understanding we'll certainly have for AIs. I just don't consider that sufficient to really say that we understand the AI.

We don't have that degree of understanding of the human brain, no. Sure, we know physics, but we don't know the initial conditions, even.

There are several layers of abstraction one could cram between our knowledge and conscious thoughts.

No, what I'm referring to is an algorithm that you completely grok, but whose execution is just too big. A bit like how you could completely specify the solution to the towers of Hanoi puzzle with 64 plates, but actually doing it is simply beyond your powers.
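
A toy illustration of that distinction (a sketch, assuming nothing beyond the standard Towers of Hanoi recursion): the algorithm below can be grokked completely in a few minutes, yet executing it for 64 plates takes 2^64 - 1 moves, hopelessly beyond anyone's powers.

```python
# Sketch: an algorithm that is easy to understand completely, but whose
# execution for 64 plates (2**64 - 1 moves) is far too big to carry out.
def hanoi(n, src="A", dst="C", spare="B"):
    """Yield every move needed to shift n plates from src to dst."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, spare, dst)  # move n-1 plates out of the way
    yield (src, dst)                          # move the largest plate
    yield from hanoi(n - 1, spare, dst, src)  # stack the n-1 plates back on top

print(sum(1 for _ in hanoi(5)))  # 31 moves: trivial to run and to follow
print(2 ** 64 - 1)               # 18446744073709551615 moves for 64 plates
```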

It's theoretically possible that an AI could result from that, but it seems vanishingly unlikely to me. I don't think an AI is going to come from someone hacking together an intelligence in their basement - if it was simple enough for a single human to grok, 50 years of AI research probably would have come up with it already. Simple algorithms can produce complex results, yes, but they very rarely solve complex problems.

We have hardly saturated the likely parts of the space of human-comprehensible algorithms, even with our search power turned way up.

No, but the complete lack of results does constitute reasonably strong evidence, even if it's not proof. Given that my prior on that is very low (seriously, why would we believe that it's at all likely an algorithm so simple a human can understand it could produce an AGI?), my posterior probability is so low as to be utterly negligible.

Humans can understand some pretty complicated things. I'm not saying that the algorithm ought to fit on a napkin. I'm saying that with years of study one can understand every element of the algorithm, with the remaining black-boxes being things that are inessential and can be understood by contract (e.g. transistor design, list sorting, floating point number specifications)

Do you think a human can understand the algorithms used by the human brain to the same level you're assuming that they can understand a silicon brain to?

Quite likely not, since we're evolved. Humans have taken a distressingly large amount of time to understand FPGA-evolved addition gates.

Evolution is another one of those impersonal forces I'd consider a superhuman intelligence without much prodding. Again, myopic as hell, but it does good work - such good work, in fact, that considering it superhuman was essentially universal until the modern era.

On that note, I'd put very high odds on the first AGI being designed by an evolutionary algorithm of some sort - I simply don't think humans can design one directly, we need to conscript Azathoth to do another job like his last one.

Is it necessary that we understand how intelligence works for us to know how to build it? This may almost be a philosophical question. A guy who builds race car engines almost certainly knows nothing about the periodic table of elements or the quantum effects behind electron orbitals that explain some of the mechanical properties of the metals used in those engines. Very likely he does not know much thermodynamics, and does not appreciate the interplay between energy and entropy required to make a heat engine produce mechanical power. Possibly he knows very little of the chemistry behind the design of the lubricants, or the chemistry involved in storing energy in hydrocarbons and releasing it by oxidizing them.

But I'd sure rather drive a car with an engine he designed in it than a car with an engine designed by a room full of chemists and physicists.

My point being, we may well develop a set of black boxes that can be linked together to produce AI systems for various tasks. Quite a lot will be known by the builders of these AIs about how to put the boxes together and what to expect in certain configurations. But they may not know much about how the eagle-eye-vision core works or how the alpha-chimp-emotional core works, just how they go together and a sense of what to expect as they get hooked up.

Maybe we never have much sense of what goes on inside some of those black boxes. Just as it is hard to picture what the universe looked like before the big bang or at the center of a black hole. Maybe not.

Is it necessary that we understand how intelligence works for us to know how to build it? This may almost be a philosophical question.

This is definitely an empirical question. I hope it will be settled "relatively soon" in the affirmative by brain emulation.

An empirical question? Most people I know define understanding something as being able to build it. It's not a bad definition; it limits you to a subset of maps that have demonstrated utility for building things.

I don't think it is an empirical question; empirically, I think it is a tautology.

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

In fact, we can see from both history and paleontology that when a breakthrough in "biological technology" made self-modification easier - like the homeobox genes, or whatever triggered the Precambrian explosion of diversity (here a 'self' isn't one meat body; it's a clade of genes that sail through time and configuration space together - think of a current of bloodlines in spacetime that we might call a "species" or genus or family - and the development of modern-style morphogenesis was, at some level, like developing a toolkit for modifying the body plan) - there was apparently an explosion of explorers, of bloodlines, into the newly accessible areas of design space.

But the explosion eventually ended. After the Diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of new frontier was reached within tens of millions of years, and then the next six hundred million years or so was spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few like Arthropoda took over many roles and niches.

We see very similar incidents throughout human history; look at the way languages develop, or technologies. For an example perhaps familiar to many readers, look at the history of algorithms. For thousands of years we see slow development in this field, from Babylonian algorithms for finding the area of a triangle, to the Sieve of Eratosthenes, to - after a lot of development - medieval Italian merchants writing down how to do double-entry bookkeeping.

Then in the later part of the Renaissance there is some kind of phase change and the mathematical community begins compiling books of algorithms quite consciously. This has happened before: in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But always there are these rising and falling cycles where people compile knowledge and then it is lost and others have to rebuild; often the new cycle is helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.

But around 1350 there begins a new cycle (which of course draws on surviving data from prior cycles) in which people accumulate formally expressed algorithms; it is unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, many others) finally develop what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little more progress in this field, relative to such a capstone achievement, for a long time.

(One might note that this seven-century surge of progress might well be due, not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and associated arts and customs that led to the widespread dissemination of information in the form of journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and so on all had little trade secrets for handling numbers and doing maths in a formulaic way, but we don't retain in the general body of algorithmic lore any of their secret tricks unless they published, or chance preserved some record of their methods.)

Now, we'd have expected Turing's 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge), but between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy left by predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say 'run', not 'design' - I mean that the new engines could execute algorithms on demand.)

The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving that bridged the gap between human minds and assembly language, which itself was a bridge to the level of machine instructions, which... and so on. Layers and layers developed, and then in the 1960s giants wrought mighty texts of computer science no modern professor can match; we can only stare in awe at their achievements in some sense.

And then... although Moore's law worked on and on tirelessly, relatively little fundamental progress in computer science happened for the next forty years. There was a huge explosion in available computing power, but just as jpaulson suspects, merely adding computing power didn't cause a vast change in our ability to 'do computer science'. Some problems may /just be exponentially hard/ and an exponential increase in capability starts to look like a 'linear increase' by 'the important measure'.

It may well be that people will just ... adapt... to exponentially increasing intellectual capacity by dismissing the 'easy' problems as unimportant and thinking of things that are going on beyond the capacity of the human mind to grasp as "nonexistent" or "also unimportant". Right now, computers are executing many many algorithms too complex for any one human mind to follow - and maybe too tedious for any but the most dedicated humans to follow, even in teams - and we still don't think they are 'intelligent'. If we can't recognize an intelligence explosion when we see one under our noses, it is entirely possible we won't even /notice/ the Singularity when it comes.

If it comes - as jpaulson indicates, there might be a never ending series of 'tiers' where we think "Oh past here it's just clear sailing up to the level of the Infinite Mind of Omega, we'll be there soon!" but when we actually get to the next tier, we might always see that there is a new kind of problem that is hyperexponentially difficult to solve before we can ascend further.

If it were all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it - the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. This system certainly has a great deal of raw computing power, perhaps even more than it would appear on the surface. If she (the living ocean system as a whole) isn't wiser than the average individual human, I would be very surprised, and she apparently either couldn't create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we've been watching unfold around us.

To be more precise, it was 40 minutes to simulate one second of activity in 1% of the neocortex.

Using Moore's law we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a million times. So that should give you more intuition of what 1% actually means. In the course of a couple of decades it would take 4 minutes to simulate 1 second of an entire neocortex (not the entire brain).
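
Written out as a rough back-of-the-envelope sketch (the 40-minutes-per-second and 1% figures are from the article above; the doubling time is an assumption chosen to match the 17-years-per-thousandfold estimate):

```python
import math

# Rough sketch of the extrapolation above; all inputs are assumptions.
SIM_MINUTES_PER_SECOND = 40   # reported: 40 min of compute per simulated second
NEOCORTEX_FRACTION = 0.01     # ...for 1% of the neocortex
DOUBLING_YEARS = 1.7          # assumed Moore's-law doubling (~1000x per 17 years)

slowdown_now = SIM_MINUTES_PER_SECOND * 60 / NEOCORTEX_FRACTION
print(f"whole neocortex today: ~{slowdown_now:,.0f}x slower than real time")

years_to_realtime = math.log2(slowdown_now) * DOUBLING_YEARS
print(f"Moore's-law years to reach real time: ~{years_to_realtime:.0f}")
# ~240,000x slower today; ~30 years to real time, if the scaling holds and
# nothing about the simulation itself gets more efficient.
```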

That doesn't sound too impressive either, but bear in mind that the human brain is not the same thing as strong AI. We are talking here about a physics model of the human brain, not the software architecture of an actual AI. We could make it a million times more efficient if we trim the fat and keep the essence.

Our brains aren't the ultimate authority on intelligence. Computers already are much better at arithmetic, memory and data transmission.

This isn't considered to be intelligence by itself, but it amplifies the abilities of any AI on a much larger scale. For instance, Watson isn't all that smart, because he had to read all of Wikipedia and a lot of other sources before he could beat people on Jeopardy. But... he did read all of Wikipedia, which is something no human has ever done.

Using Moore's law we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a million times.

You are extrapolating Moore's law out almost as far as it's been in existence!

We could make it a million times more efficient if we trim the fat and keep the essence.

It's nice to think that, but no one understands the brain well enough to make claims like that yet.

You are extrapolating Moore's law out almost as far as it's been in existence!

Yeah.

Transistor densities can't increase much further due to fundamental physical limits. The chip makers all predict that they will not be able to continue at the same rate (and have been predicting that for ages).

Interestingly, the feature sizes are roughly the same order of magnitude for brains and chips now (don't look at the neuron sizes, by the way; a neuron does far, far more than a transistor).

What we can do is build chips in multiple layers, but because making a layer is the bottleneck, not the physical volume, that won't help a whole lot with costs. Transistors are also faster, but they produce more heat and are, efficiency-wise, not much ahead (if at all).

Bottom line is, even without the simulation penalty, it's way off.

In the near term, we can probably hack together some smaller neural network (or homologous graph-based thing), hard-wired to interface with some language libraries, and have it fool people into superficially believing it's not a complete idiot. It can also be very useful when connected together with something like Mathematica.

But walking around in the world and figuring out that the stone can be chipped to be sharper, figuring out that it can be attached to a stick - the action space where such inventions lie is utterly enormous. Keep in mind that we humans are not merely intelligent. We are intelligent enough to overcome the starting hurdle while terribly inbred, full of parasites, and constantly losing knowledge. (Picking a good action out of an enormous action space is the kind of thing that requires a lot of computational power). Far simpler intelligence could do great things as a part of human society where many of the existing problems had their solution space trimmed already to a much more manageable size.

No one understands the brain well enough to actually do it, but I'd be astonished if this simulation weren't doing a lot of redundant, unnecessary computations.

Do humans at the upper end of the intelligence spectrum have a greater impact, a lesser impact, or an equal impact on world events compared with those at the lower end of the intelligence spectrum? If the answer is equal or lower, humans have nothing much to worry about should a super-human intelligence show up in the world. Similarly, do humans or any other primate have a stronger influence on world events? If all primates are equal in influence, or if humans aren't at the top, rest easy. Smart brain is to slow brain as (potential) AI is to human. Human is to chimpanzee as (potential) AI is to human. The existing strata suggest future strata.

This is a different question than when or how or whether a super-intelligence might appear.

I agree. My point is merely that super-human intelligence will probably not appear as a sudden surprise.

EDIT: I changed my OP to better reflect what I wanted to say. Thanks!

1) "AI" is a fuzzy term. We have some pretty smart programs already. What counts? Watson can answer jeopardy questions. Compilers can write code and perform sophisticated optimizations. Some chatbots are very close to passing the Turing Test. It's unlikely that we're going to jump suddenly from where we are now to human-level intelligence. There will be time to adapt.

AI is a fuzzy term, but that doesn't at all back up the statement "it's unlikely that we're going to jump suddenly from where we are to human-level intelligence." This isn't an argument.

2) Plausible. Read Permutation City, where the first uploads run much slower. This isn't strong evidence against foom though.

3) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

Humans don't have real-time access to the individual neurons in our brains, and we don't even know how they work at that level anyway.

1) You are right; that was tangential and unclear. I have edited my OP to omit this point.

2) It's evidence that it will take a while.

3) Real-time access to neurons is probably useless; they are changing too quickly (and they are changing in response to your effort to introspect).