Recently I stumbled upon Richard Carrier's essay "Are We Doomed" (June 5, 2009), in which he, when asked to comment on the Singularity, said the following:

I agree the Singularity stuff is often muddled nonsense. I just don't know many advocates of it. Those who do advocate it are often unrealistic about the physical limits of technology, and particularly the nature of IQ. They base their "predictions" on two implausible assumptions: that advancement of IQ is potentially unlimited (I am fairly certain it will be bounded by complexity theory: at a certain point it just won't be possible to think any faster or sounder or more creatively) and that high IQ is predictive of accelerating technological advancement. History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output, much less invent more technologies or make more discoveries--in fact, by some accounts they often produce less in those regards than people of more modest (though still high) intelligence.

However, Singularity fans are right about two things: machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years (if advocates predict this will happen sooner, then they are being unrealistic), and the pace of technological advancement will accelerate. However, this is already accounted for by existing models of technological advancement, e.g. Moore's Law holds that computers double in processing power every three years, Haik's Law holds that LED's double in efficiency every three years, and so on (similar laws probably hold for other technologies, these are just two that have been proven so far). Thus, that technological progress accelerates is already predicted. The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

It therefore doesn't predict anything remarkable, and certainly doesn't deserve such a pretentious name. Because there will be a limit, an end point, and it won't resemble a Singularity: there is a physical limit on how fast thoughts can be thunk and how fast manufacturing can occur, quantum mechanical limits that can never be overcome, by any technology. Once we reach that point, the pace of technological advancement will cease to be geometric and will become linear, or in some cases stop altogether. For instance, once we reach the quantum mechanical limit of computational speed and component size, no further advances will be possible in terms of Moore's Law (even Kurzweil's theory that it will continue in the form of expansion in size ignores the fact that we can already do this now, yet we don't see moon-sized computers anywhere--a fact that reveals an importantly overlooked reality: what things cost).

Ironically, the same has been discovered about actual singularities: they, too, don't really exist, and for the same quantum mechanical reasons (see my discussion here).

What do you think?

 


Let's not jump down his throat. It's a current evaluation from shallow research, not an expert-level essay.

I will proceed to jump down his throat.

vague claims about technology, IQ having a fundamental bound, and IQ sucking as a metric anyway

That's rather too vague to analyze.

If being really smart won't help you (on real-life instances, not just asymptotically) because you're jumping up the complexity hierarchy, there's still a lot to gain from improving heuristics, developing increasingly specialised heuristics, and just throwing more computing power at the problem. But we don't have a model detailed enough to provide a bound at all!

Singularity fans are right about two things: machines will outthink humans within fifty to a hundred years, and the pace of technological advancement will accelerate.

Okay, either he's agreeing with Singularitarians but doesn't want to admit it, or he expects tech to run into a wall really fast for no specified reason.

this is already accounted for by existing models of technological advancement, e.g. Moore's Law

...nobody is denying that surface laws like these exist. Singularitarians are claiming that there are deeper reasons why these models are and stay true. Next he's going to tell us that Newton's laws are useless because we already have a parabolic model of freefall.

The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

Ehn, two schools out of three ain't bad.

It therefore doesn't predict anything remarkable

If creating the smartest thing in the universe is unremarkable, I want to see what impresses Carrier.

certainly doesn't deserve such a pretentious name

I have to back him on that one.

there will be a limit, an end point

What is wrong with people that makes them understand "a bound exists" as "the bound is smallish"?

we can already do this now, yet we don't see moon-sized computers anywhere--a fact that reveals an importantly overlooked reality: what things cost

...yes, we don't see moon-sized computers because, for the same performance gain, they're more expensive than shrinking and speeding up individual components. When those avenues are exhausted, it will become much more economically viable to build huge computers.


What is wrong with people that makes them understand "a bound exists" as "the bound is smallish"?

Modus tollens: "no small bound exists" --> "no bound exists". E.g. life extension becomes immortality (but immortality is physically impossible, so life extension must be too).

Nisan:

Hopefully, someday soon, all intellectuals will agree that

  • nonhuman things will greatly outperform biological human intelligences in all domains;
  • there is a significant chance that all important decisions until the end of time will be made by nonhuman intelligences;
  • these things will happen within decades or centuries, unless civilization collapses; and
  • that "Singularity" idea, whatever it was, was all wrong.

I don't get it.

The first three bullet points in the grandparent are the most important ideas associated with the Singularity that everyone needs to know about. They accord with the Singularity Institute's predictions, with the core claims of Yudkowsky's three Singularity "schools", and with Robin Hanson's ems scenario.

When I read Carrier's first sentence ("I agree the Singularity stuff is often muddled nonsense."), I assumed he would be skeptical of those three points in one of the usual ways. But instead he actually affirms two of them:

machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years

(He doesn't address my second bullet point.) It's not clear what the "Singularity" means to him; but from his criticisms, it seems to have something to do with IQ, and whether machine intelligences will be AIs or uploads or whatever.

My point is that if and when the Singularity Institute succeeds in convincing everyone of the first three bullet points in the grandparent, it will still be fashionable to dismiss the Singularity hypothesis, because everyone has their own strawman version of what the Singularity is.

Nisan:

I heard David Pearce speak recently, and he mentioned in passing that he is "not a starry-eyed Singularitarian", and by this he seemed to mean that he thought brain uploading was infeasible. But elsewhere in the talk he spoke casually of utilitronium shockwaves and jupiter brains and pleasure plasma.

I don't even know what the Singularity is anymore. In fact, I never did. I suspect that the disclaimer "I am not a Singularitarian" means "I am not Eliezer Yudkowsky circa 1999, although everything he says nowadays is quite reasonable."

Ah, I get it now! I think Carrier's argument was that computing power will not exceed what Moore's Law already predicts, though I'm not exactly sure why, or how that disproves the Singularity.

and by this he seemed to mean that he thought brain uploading was infeasible. But elsewhere in the talk he spoke casually of utilitronium shockwaves and jupiter brains and pleasure plasma

Wait, what? Isn't brain uploading obviously easier than those other things?

I can't speak for him, but he possibly meant that brain uploading won't be feasible for a while, and the posthuman era will be ushered in with implants, gene therapy, drugs, etc.

quantum mechanical limits that can never be overcome, by any technology

That's a confident Lord Kelvin-like statement about physics, and should be treated as a failure of imagination. Physicists agree that they still do not understand the elusive boundary between quantum and classical, so talking about some unmovable limit in that area is pretty silly. Then again, Richard Carrier is a historian, not a physicist.

For example, if you subscribe to the MWI model, gaining access to the googolplex of worlds created every femtosecond and harnessing their computational resources would effectively remove anything resembling a computational speed limit.

Another example: we might discover that around the Planck scale the world consists of something like unparticles, and their scale-invariance would allow us to miniaturize without bound.

Given that it took me two minutes to come up with two (admittedly far-fetched) examples of how the "no further advances will be possible in terms of Moore's Law" statement could be wrong, and I am not even an expert in the area, I would discount all his predictions as too poorly thought through to care about, until and unless proven otherwise.

For example, if you subscribe to the MWI model, gaining access to the googolplex of worlds created every femtosecond and harnessing their computational resources would effectively remove anything resembling a computational speed limit.

So, you can think of this as what quantum computers do, and there's still a pretty normal speed limit. Because all (traditional) interpretations of quantum mechanics run off the exact same math, a good test to apply in these cases is that if it only seems to work in one interpretation, you've probably made a mistake.

And of course, unlike Lord Kelvin's famous claim, we didn't have to discover any new and unexpected physics to build heavier-than-air flying machines. Carrier's statement is literally correct, then: technology will not get you around quantum-mechanical limits, such as they are.

Because all (traditional) interpretations of quantum mechanics run off the exact same math, a good test to apply in these cases is that if it only seems to work in one interpretation, you've probably made a mistake.

True enough; I was referring to the next breakthrough in quantum physics, which, in my estimation, is likely to happen before we reach the current quantum limits, at which point the interpretations might actually become useful models.

we didn't have to discover any new and unexpected physics to build heavier-than-air flying machines.

Sometimes technological advances are also unexpected. Remember when 9600 bps was considered the limit for phone lines? We are 20000 times faster than that now.

Actually, we're only about five times faster than that, and the real (Shannon) limit for analog phone lines is somewhere in the 60-100 kbps range. It's not fair to compare a modulation capable of using megahertz of bandwidth on a short unfiltered line with a modulation designed explicitly to be bounded by a 3 kHz voice path and work across a pure analog channel hundreds or thousands of kilometers in length.
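As a back-of-the-envelope check on that Shannon figure, here is the Shannon-Hartley formula evaluated for a few line conditions (the bandwidth/SNR pairs below are illustrative guesses, not measurements of any real line):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative (assumed) line conditions, from a long noisy loop
# to a short clean one:
for bw_hz, snr_db in [(3100, 37), (3500, 45), (4000, 52)]:
    c = shannon_capacity_bps(bw_hz, snr_db)
    print(f"{bw_hz} Hz at {snr_db} dB SNR -> {c / 1000:.1f} kbps")
```

Reaching the upper end of that 60-100 kbps range takes both more usable bandwidth and a much cleaner line than a typical long-haul voice path provides, which is consistent with real modems topping out well below it.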

Some background:

9600 bps modems used around 2.7 kHz of bandwidth, and had low-grade error correction that allowed fairly reliable connectivity but didn't have a huge coding gain. This was extended up to 19200 bps using V.32ter, but that never actually caught on as a standard and work on V.34 began.

V.34, for all its problems and imperfections, to this day remains a work of art. It can use almost the entire available spectrum of an analog line (all of 3.5 kHz or so), can compensate for many different types of line distortion, and automatically adjusts to changing line conditions to maintain its connection. It really is a piece of high-technology software, and its primary design criterion is to push bits reliably through a comm channel that's not a whole lot better than two tin cans and some string.

(Note that V.90 and V.92 are faster than V.34, but they are digital standards, and make use of much stronger constraints on line quality. They also directly operate on digital data instead of having to do an extra A/D transform. The techniques and assumptions used in these standards are very different from V.34 and allow higher data rates, but when those assumptions are violated, V.90/92 fall back to V.34.)

The max data rate for V.34 is 33.6 kbps. There are a lot of improvements that could be made to V.34 with modern technology, the most significant of which would be use of better error correction. But even with all the resources of mankind thrown at the problem, I would be shocked if we could double the average data rate without loosening the channel constraints.

I agree with your caveats, but my point was, to quote Wikipedia, "For many years, most engineers considered this rate to be the limit of data communications over telephone networks." Yet it only took some extra technology, and no new advances in physics, to increase the effective throughput by 4 orders of magnitude (and counting). It might well happen that the apparent quantum limit on computer performance is only a technological obstacle, not a fundamental one.

History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output,

I will go out on a limb and assert that this man has a higher-than-average IQ. However, for his statement to be true he would have to be what some call "profoundly mentally retarded". That is, someone with an IQ below 25. To my knowledge, there have been an exceedingly small number of individuals in the range of 10x that IQ score -- amongst them the highest IQ yet recorded. So there are real problems of scale in his underlying assumptions.

Only if you take 'ten times smarter' to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that's not a meaningful operation; it's essentially a type error. The obvious interpretation of 'ten times smarter' within the domain of humans is by percentile, e.g. if the author is at the 99% mark, then it would refer to the 99.9% mark.

And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in many domains, but being far above average seems to confer little or no additional advantage. (My guess at the explanation: first, beyond a certain point you have to start making trade-offs from areas of brain function that IQ doesn't measure; second, Amdahl's law.)
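To put numbers on the percentile reading, here is a minimal sketch assuming the conventional mean-100, SD-15 IQ scaling:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ scaling (assumed)

# "Ten times smarter" read as a percentile jump, e.g. top 1% -> top 0.1%:
for p in (0.99, 0.999):
    print(f"top {(1 - p) * 100:.1f}% mark -> IQ {iq.inv_cdf(p):.0f}")
# top 1.0% mark -> IQ 135
# top 0.1% mark -> IQ 146
```

On this reading, "ten times smarter" than the 99th percentile is a jump of about eleven IQ points, not a tenfold multiplication of anything.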

I agree, that's likely what Carrier was feeling when he wrote that sentence. But that doesn't let him off the hook, because that way is even worse than Logos'! He's using a definition of "times more intelligent" that is only really capable of differentiating between humans, and trying to apply it to something outside that domain.

Amdahl's law

I'm not sure if the following could be already encompassed in Amdahl's law, but I think it was worth a comment. Very intelligent humans still need to operate through society to reach their goals. An IQ of 140 may be enough for you to discover and employ the best tools society puts at your disposal. An IQ of 180 (just an abstract example) may let you recognize new and more efficient patterns, but you then have to bend society to exploit them, and this usually means convincing people not as smart as you are, who may very well take a long time to grasp your ideas.

As an analogy, think of being sent into the stone age. A Swiss knife there is a very useful tool. It's not a revolutionary concept; it's just better than stone knives at cutting meat and working with wood. On the other hand, a set of professional electrical tools, while in principle far more powerful, will be completely useless until you find a way to charge their batteries.

Yup, that's the way I interpreted it too: going from top 1% to top 0.1%.

To me a more natural interpretation from a mathematical POV would use log-odds. So if the author is at the 90% mark, someone 10 times as smart occurs at the frequency of around 1 in 3 billion.

But yeah. In context, your way makes more sense, if only because it's more charitable.
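The arithmetic behind that log-odds reading, as a quick check (the 90% starting point is the assumed example from above):

```python
import math

p = 0.90                                  # assumed percentile for "the author"
log_odds = math.log(p / (1 - p))          # ln(9), about 2.2
p10 = 1 / (1 + math.exp(-10 * log_odds))  # ten times the log-odds
print(f"rarity: about 1 in {1 / (1 - p10):.1e}")  # ~1 in 3.5 billion
```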

IQ is renormalized to the bell curve by definition, so multiplying it by 10 isn't guaranteed to be a meaningful operation. And since we have no other way to measure intelligence, it's not clear what Carrier meant by "10 times smarter". For some easy interpretations (e.g. 10x serial speed or 10x parallelism) his claim seems trivially wrong.

"10 times" just means "a lot". I'm more curious about what Carrier meant by "smart".

It is a simple way of expressing "a lot," but it's also one that immediately raises the question "is there any meaningful sense in which anyone that smart has actually existed?"

Of course, when Carrier claims that the most remarkably intelligent people do not tend to be the most productive, while it's clear what kind of individuals he has in mind, the obvious next question is "can we design machines that use their intelligence more productively than humans?" Considering how human brains actually work, this sounds like much less of a tall order than making AI that are more intelligent in a humanlike way.

Well, central limit theorem says it's mostly a bell curve among humans (you could make a case for a bigger tail on the low end, but still mostly a bell curve). And you can always identify "0" with a random number generator. So multiplying by 10 seems okay to me.

satt:

Well, central limit theorem says it's mostly a bell curve among humans

Only subject to some major assumptions.

Not that major. The assumptions are that there are many small, independent things that affect intelligence. These assumptions are wrong, in that there are many things that do not have a small effect at all. But to the extent that these (mostly bad things) are rare, you'll just see a bell curve with slightly larger tails.

Why can we assume that all the little things affect intelligence independently? Are synergies obviously rare, and how rare do they have to be for the central limit theorem to apply? In the simplest alternative model I can think of, incremental advances could be multiplicative instead of additive, which gives a log-normal distribution instead of a bell curve. This case is uninteresting because you could just say you're measuring e^intelligence instead of intelligence, but I can imagine more complicated cases.
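A toy simulation of that contrast, with entirely made-up parameters: summing many small independent effects yields a symmetric bell curve, while compounding them multiplicatively yields a right-skewed, approximately log-normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 50  # assumed: 50 small independent factors per individual

effects = rng.normal(loc=0.0, scale=0.05, size=(n, k))
additive = effects.sum(axis=1)               # CLT: approximately normal
multiplicative = (1 + effects).prod(axis=1)  # approximately log-normal

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"additive skew:       {skewness(additive):+.2f}")        # ~0.00
print(f"multiplicative skew: {skewness(multiplicative):+.2f}")  # clearly > 0
```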

Side note: I think it is not well known that for the quintessential normally distributed random variable, human height, the lognormal distribution is in fact an equally good fit. And on the other end of the variance spectrum: I became biased toward the lognormal distribution when I observed that it is a much better fit for social network degree distributions than the much-discussed power-law. It is a very versatile thing.

Good point.

[anonymous]:

This is a needlessly pedantic response to a comment which can be dissected in many other ways.

It was the first thing that stood out to me, quite frankly, and it seemed a rather fundamental criticism of the author's clarity of thought: the vast majority of his position -- it seemed to me -- rested upon a notion that was both faulty and exposed as such by my original posting.

Any other 'dissection' seems entirely unnecessary, in my eyes, given this.

[anonymous]:

It isn't obvious to you that this is a fairly off-the-cuff response, and that "10 times" is used in a slightly colloquial way to mean "a lot more"?

It isn't obvious to you that off-the-cuff responses reveal underlying biases and assumptions just as well as -- if not better than -- deeply-thought-out ones?

The very fact that "10 times as smart" is intelligible as merely "a lot more" requires certain underlying assumptions about the available space of intelligence, and that addresses the very fundamental assumptions of his writing.

It isn't obvious to you that off-the-cuff responses reveal underlying biases and assumptions just as well as -- if not better than -- deeply-thought-out ones?

Declaring that "10 times as smart" must be a reference to IQ points and then proceeding to attempt to back that interpretation up despite the absurdity reveals something a whole lot more significant than a simple reference to "10 times as smart".

History proves otherwise...(similar laws probably hold for other technologies, these are just two that have been proven so far)...that technological progress accelerates is already predicted. The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

Thinking in these terms would be confused, and it's a bad sign that he's speaking in them. The patterns found at this very high level of abstraction don't deserve to be called "laws" and shouldn't be thought of as such.