A while ago I wrote briefly on why the Singularity might not be near and why my estimates might be badly off. I saw it linked the other day, and realized that pessimism seemed to be trendy lately, which meant I ought to work on why one might be optimistic instead: http://www.gwern.net/Mistakes#counter-point

(Summary: long-sought AI goals have been recently achieved, global economic growth & political stability continues, and some resource crunches have turned into surpluses - all contrary to long-standing pessimistic forecasts.)

Why could you be optimistic that the Singularity is Near?

Or pessimistic that the singularity is near, depending on whether you expect it to be friendly and whether a near singularity is more likely to be friendly than a late one.

A number of currently disappointing trends may only hold in the 'short run'.

The continuation of the solar cell and battery cost curves is pretty darn impressive: costs halving about once a decade, for several decades. One more decade until solar is cheaper than coal is today, and then it gets cheaper (vast areas of equatorial desert could produce thousands of times current electricity production and export it in the form of computation, the products of electricity-intensive manufacturing, high-voltage lines, electrolysis to make hydrogen and hydrocarbons, etc.). These trends may end before that, but the outside view looks good.
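As a rough illustration of what "costs halving about once a decade" implies, here is a minimal sketch; the starting installed cost and the coal benchmark below are my own placeholder assumptions, not figures from the thread.

    # Sketch of the "halving once a decade" outside view.
    # The starting cost and coal benchmark are illustrative assumptions.
    def projected_cost(start_cost_per_watt, years, halving_time_years=10.0):
        """Cost after `years`, if costs halve every `halving_time_years`."""
        return start_cost_per_watt * 0.5 ** (years / halving_time_years)

    assumed_solar_today = 4.00    # $/W installed (assumption)
    assumed_coal_today = 2.00     # $/W-equivalent benchmark (assumption)

    for decade in range(4):
        years = decade * 10
        print(years, "years:", round(projected_cost(assumed_solar_today, years), 2), "$/W")
    # Under these assumptions solar falls below the coal benchmark after
    # roughly one halving, i.e. about a decade - which is the claim above.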

There have also been continued incremental improvements in robotics and machine learning that are worth mentioning, and look like they can continue for a while longer. Vision, voice recognition, language translation, and the like have been doing well.

Even Africa, while its population is exploding, is growing economically - perhaps thanks to universally available cheap cellphones.

A very large chunk of this is directly or indirectly due to increased resource prices, driven especially by China.

If the latter, i.e. if the AI is not run at the exact instant that enough processing power is available, then ever more computing power in excess of what is needed (by definition) builds up.

If the improvement in cost-performance of computation slows dramatically in the next decade or so, this could be a small effect. Kurzweil predicts that silicon CMOS will end and be replaced by something that improves at least as rapidly, generalizing from past transitions (vacuum tubes to transistors, etc), but there are fewer data points to support that claim, and we are much closer to physical limits, with less room for miniaturization. There are new materials with plausibly better properties than silicon, room for new designs (memristors, unreliable computing, wacky new cooling systems), clever 3-D innovations, and so forth. However, a grab bag of such innovations seems less reliable than miniaturization, which automatically improves many dimensions of performance at once.

The continuation of the solar cell and battery cost curves is pretty darn impressive: costs halving about once a decade, for several decades. One more decade until solar is cheaper than coal is today,

Do you have a cost curve for the price of watts delivered to the grid, instead of solar cell costs?

Going by the Wikipedia article on price per watt: http://en.wikipedia.org/wiki/Price_per_watt

Solar panels are currently selling for as low as US$0.70 a watt (7-April-2012) in industrial quantities, but the balance-of-system costs put the systems closer to $4 a watt.

So even if the panels were free, it's still $3.30 per watt to actually make it happen.

BOS costs have so far kept rough pace with cell costs, and the DOE has fairly credible roadmaps and prototypes for further reductions, as with cells. Part of these costs comes from regulation (pointless permitting demands and the like), which can be relaxed, and has been in places like Germany.
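To make the arithmetic above explicit, here is a toy decomposition; only the $0.70/W module figure and the ~$4/W system figure come from the comments above, the rest is arithmetic.

    # Installed cost = module cost + balance-of-system (BOS) cost.
    module = 0.70   # $/W, the Wikipedia figure quoted above
    bos = 3.30      # $/W, the remainder of the ~$4/W system cost

    print("system cost today:", module + bos)       # ~4.00 $/W
    # Even with free modules the floor is the BOS cost, which is why
    # BOS reductions matter as much as cell-cost reductions.
    print("floor with free modules:", 0.0 + bos)    # 3.30 $/W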

The continuation of the solar cell and battery cost curves is pretty darn impressive: costs halving about once a decade, for several decades. One more decade until solar is cheaper than coal is today, and then it gets cheaper (vast areas of equatorial desert could produce thousands of times current electricity production and export it in the form of computation, the products of electricity-intensive manufacturing, high-voltage lines, electrolysis to make hydrogen and hydrocarbons, etc.). These trends may end before that, but the outside view looks good.

That sounds promising for us (Australia). We have almost as much desert as we do coal!

How much could be gained from more efficient programs, even if hardware improvements stall out?

A huge amount surely, at least for many problems. There's no guarantee that any particular problem will be subject to vast further software improvements, though.

A huge amount surely, at least for many problems.

Can you expand on this? I suspect this is true for some classes of problems, but I'm sufficiently uncertain that I'm intrigued by your claim that this will "surely" happen.

A lot of existing improvement trends would have to suddenly stop, along with the general empirical trend of continued software progress. On many applications we are well short of the performance of biological systems, and those biological systems show large internal variation (e.g. the human IQ distribution) without an abrupt "wall" visible, indicating that machines could go further (as they already have on many problems).

I'm not quite sure software is well short of the performance of biological systems in terms of what it can do with a given number of operations per second. Consider the cat image recognition: Google's system has minuscule computing power compared to the human visual cortex, and performs accordingly (badly).

What I suspect, though, is that the greatest advances in speeding up technological progress would come from better algorithms that work on well-defined problems like making better transistors - something where even humans make breakthroughs not by doing verbal "I think therefore I am" philosophy in their heads, but either by throwing science at the wall and seeing what sticks, or by imagining the problem visually and trying to imitate the non-intelligent simulator. Likewise for automated software development: so much of the thought a human puts into such tasks is really unrelated to the human capacity to see meaning and purpose in life, or to symbol grounding, or to anything else of the kind that makes us fearsome, dangerous survival machines - things you don't need to build into automated programming software.

Why would you expect the opposite? Tight lower bounds have not been proven for most problems, much less algorithms produced which reach such bounds, and even in the rare cases where they have been, the constant factors could well be substantially improved. And then there are hardware improvements like ASICs, which are no joking matter. I collected just a few possibilities (since it's not a main area of interest for me, as it seems so obvious that there are many improvements left) in http://www.gwern.net/Aria%27s%20past,%20present,%20and%20future#fn3
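A toy illustration of the point that algorithmic improvements are separate from hardware improvements (my own example, not from the thread): the same task, repeated membership testing, run on the same hardware with an asymptotically worse and a better data structure.

    import time

    # Same task, same hardware, two algorithms:
    # repeated membership tests against a collection of n items.
    n = 20_000
    items = list(range(n))
    queries = list(range(0, n, 2))

    start = time.perf_counter()
    hits_list = sum(1 for q in queries if q in items)    # O(n) scan per query
    t_list = time.perf_counter() - start

    item_set = set(items)
    start = time.perf_counter()
    hits_set = sum(1 for q in queries if q in item_set)  # O(1) average per query
    t_set = time.perf_counter() - start

    assert hits_list == hits_set
    print(f"list: {t_list:.3f}s  set: {t_set:.5f}s  speedup: {t_list / t_set:.0f}x")

The speedup here comes purely from the choice of algorithm/data structure; faster processors would multiply on top of it, which is the distinction being discussed.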

I'm not sure really. The conjectured limits in some cases are strong. Computational complexity is unfortunately an area where we have a vast difference between what we suspect and what we can prove. And the point about improvements in constant factors is very well taken- it is an area that's often underappreciated.

But at the same time, these are reasons to suspect that improvements will exist. Carl's comment was about improvement "surely" occurring, which seems like a much stronger claim. Moreover, in this context, while hardware improvements are likely to happen, they aren't relevant to the claim in question, which is about software. But overall, this may be a language issue, and I may simply be interpreting "surely" as a stronger statement than it was intended to be.

Given the sheer economic value of improvements, is there any reason at all to expect optimization/research to just stop, short of a global disaster? (And even then, depending on the disaster...)

No, not particularly that I can think of. The only examples where people stop working on optimizing a problem are when the problem has become so easy that it simply doesn't matter to optimize further, but such examples are rare, and even in those cases further optimization does occur, just at a slower pace.

[anonymous]

One more decade until solar is cheaper than coal is today, and then it gets cheaper

And that increases the odds we'll survive until a singularity.

How does it substantially impact that probability?

[anonymous]

One of the ways we can kill ourselves is global warming. Replacing coal power with solar power will reduce one of the causes of global warming -- namely, the greenhouse gases emitted from coal plants.

How likely is it that global warming is an existential threat? This seems unlikely. It may well be that global warming will contribute to existential risk in a marginal fashion, if it forces fewer resources to be spent on existential-risk issues or makes war more likely, but that is a much more roundabout route, and by that logic many other technologies would fall into the same category.

It depends what you mean by an existential threat.

I think there's a reasonable chance that global warming (combined with other factors: biosphere degradation, resource depletion, unsustainable farming, lack of fresh water, increasing war over increasingly limited resources, etc.) may cause our current civilization to collapse.

If our civilization collapses, what are the odds that we'll recover, and eventually get back up to where we are now? I don't know, but if our civilization collapses and we're left without modern tools in a world in the middle of an ongoing mass extinction that we started, things start to look really dodgy. In any case, we don't know what percentage of intelligent species go from being merely intelligent to having an advanced technology; it could be that we just passed The Great Filter in the past 200 years or so (say, at the moment the industrial revolution started), in which case losing that advance and passing back through it in the other direction would dramatically lower our chances of becoming a space-faring civilization.

Of course, if we reach a sufficiently high level of technology before the other problems I talked about kick in, then they're all solvable.

That doesn't seem even remotely likely; as I understand it, the Earth has been much hotter than now many times without turning into Venus.

[anonymous]

It doesn't have to turn the Earth into Venus for unusually-rapid climate shifts to destabilize geopolitics badly, exceed system tolerances in infrastructure, consume an ever-growing portion of the economy either by increased loss and waste or in efforts at mitigation, and thereby effectively amp up the power of many other forms of global X-risk (or just contribute directly to X-risk by numerous small factors, none of which would by itself be capable of overwhelming the system, but which collectively undermine it).

[anonymous]

Impossible to estimate.

[This comment is no longer endorsed by its author]

(the US now exports energy!)

Some Googling suggests that this is based on a misreading of the US being a net refined oil products exporter, e.g., http://www.hcn.org/blogs/goat/us-is-net-energy-exporter-psych and http://www.energybulletin.net/stories/2012-04-20/myth-us-will-soon-become-oil-exporter

Why I think we might have some real general AI surprisingly soon (before 2030), in spite of disillusionment with past AI projections: more smart people than ever are working, with access to more resources, on creating AI prerequisites (which have economic value in their own right), although the number of smart people earnestly working on general AI directly hasn't increased as much, since it has become apparent that GAI is not low-hanging fruit.

The resources I'm thinking of:

  1. faster/cheaper hardware

  2. research is widely disseminated and cheaply available, in some cases including source code. If the research is good, it should compound.

  3. slightly better programming software in general

  4. collaboration software (skype/email/web, distributed version control, remote shells) vs. slow-paced journals/conferences.

However, all of these were probably anticipated 60 years ago (and are the basis for what now seem like overly optimistic year-2000 projections).

For example, DNA sequencing costs have been plummeting and sequencing a whole human genome will likely be <$100 by 2015; this has been an incredible boon for basic research and our knowledge of the world, but so far the applications have been fairly minimal.

It's hard to apply until people can afford to be tested. When more people are tested, network effects become possible. And I thought I saw a recent article about successful use in the context of cancer research - scan normal and cancerous cells, and find the difference. There have been those kinds of articles for years, but this one showed a success.

Also, the fabrication and manipulation of DNA is just getting off the ground. When life forms can be routinely programmed and tested, we're likely to see some very interesting and useful results.

Watson just won, and they're starting to turn his attention to less trivial questions.

All the life science XPrizes are amazing.

You've pointed out some of the improvements in online learning. One big boost to AI progress will be the hundreds of thousands getting AI training online.

Singularity? Don't know and don't care. Technological progress - immense and accelerating.

Two additional promising fields and one small critique. The first case I want to make is prosthetics: the ease with which scientists and engineers nowadays interface with the human central nervous system is very promising. Artificial retinas, robotic arms controlled through implanted electrodes, rats whose short-term/long-term memory transfer was substituted by a chip.
Photovoltaics is the other one; prices are in free fall with no end in sight. After a long silicon shortage, huge production capacity has come online, producers are in heavy competition, some form of grid parity has been reached, and new innovations keep streaming out of the labs into the fabs.

My critique: I do believe it overly optimistic of you to conclude that peak oil has been significantly delayed by new techniques; I even believe Peak Oil to have already happened.

I do believe it overly optimistic of you to conclude that peak oil has been significantly delayed by new techniques; I even believe Peak Oil to have already happened.

Oil production certainly didn't peak in or before 2011; so do you mean this year, or are you using a different definition?

One of Buffet’s classic sayings is

Buffett. With two T's.

Nitpick:

Peak Oil continues to be delayed by new developments like fracking and resultant gluts of natural gas

More energy is good. On the other hand, fracking has less than ideal consequences (mostly pollution, and depletion of ground water, which goes down the fractures caused by fracking). Until we have the crazy technology required to prevent or undo the damage done by this, it doesn't sound like a good trade-off.

I have much more hope in the exploitation of sunlight.

More importantly, unconventional oil and gas are getting produced in large quantities now in large part because of high prices, high enough to justify expensive and difficult extraction processes. Unless costs fall incredibly massively, fracking will not bring back the cheap oil of the 20th century.

Money is the straw utility function. Therein lies my hope for sunlight: if prices continue to fall, it could drive fossil fuels (coal, oil, gas, uranium…) out of business. Geothermal energy looks cool, and nuclear fusion would be cooler still. But they seem more distant than sunlight.

One should also at least ponder an unexpectedly quick path to the intelligence explosion.

Maybe not a very probable outcome, but a possible one. It might be odd that it hasn't happened already, like a dropped bomb which has not detonated. Yet.

I know. It is a 1-percent-or-thereabouts possibility; still, it should be examined.

It's always possible that running what we think of as a human intelligence requires a lot less actual computation than we seem to generally assume. We could already have all the hardware we need and not realize it.

I remember reading somewhere that many computer applications are accelerating much faster than Moore's law because we're inventing better algorithms at the same time that we're inventing faster processors. The thing about algorithms is that you don't usually know that there's a better one until somebody discovers it.

Kurzweil has an example of a task with a 43,000x speedup over some period, more than Moore's Law, that is often mentioned in these discussions, and might be what you're thinking of. It was for one very narrow task, cherry-picked from a paper as the one with by far the greatest improvement. It's an extremely unrepresentative sample selected for rhetorical effect. Just as Kurzweil resolves ambiguity overwhelmingly in his favor in evaluating his predictions, he selects the most extreme anecdotes he can find. On the other hand, in computer chess and Go, software progress seems to have been on the same order as Moore's law too.

ETA: considering the rest of the paper, there were still improvements of many thousandfold over the period.

Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.
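For readers unfamiliar with what a "production planning model solved using linear programming" is, here is a deliberately tiny sketch using scipy; the products, coefficients, and capacities are made up for illustration and have nothing to do with Grötschel's benchmark.

    import numpy as np
    from scipy.optimize import linprog

    # Toy production planning: choose quantities x1, x2 of two products
    # to maximize profit 3*x1 + 5*x2 subject to machine-hour limits.
    c = np.array([-3.0, -5.0])          # linprog minimizes, so negate profits
    A_ub = np.array([[1.0, 2.0],        # machine A hours per unit of each product
                     [3.0, 2.0]])       # machine B hours per unit of each product
    b_ub = np.array([100.0, 180.0])     # available hours on each machine

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal plan:", res.x, "profit:", -res.fun)

    # Real production-planning LPs have thousands to millions of variables;
    # the ~43,000,000x figure above decomposes as roughly
    # (hardware ~1,000x) * (algorithms ~43,000x).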

It was for one very narrow task

I agree this 43k improvement is not representative of algorithms research in general (sorting is not 43k faster than in the 1960s, for example), but let's not call it 'very narrow': linear programming optimization (and operations research in general) is important and used all over the place in numerous applications in every industry. We owe a good deal of our present wealth to operations research and linear programming.

There have been great improvements in linear programming overall, but the paper talked about applications to many areas, and Kurzweil cited the one with the greatest realized speedup, which was substantially unrepresentative.

This has more numbers.

The problem is that you conflate into general intelligence both the problem-solving aspect and the "wanting to make something real" aspect, which requires some extra secret sauce that won't appear out of thin air and which nobody (save for a few lunatics) wants.

Consider software that can analyze a large class of computable functions and find inputs that maximize them. You can make better microchips with it, you can make a cure for cancer, you can make a self-driving car with it. You can even use it to design a paperclip factory. What it does not do, what it cannot do without a lot of secret sauces which it can't design, is run amok paperclip-maximizer style. (The closest thing to running amok is AIXI with a huge number of steps specified, and it's dubious whether even that can self-preserve, or whether it can trade some rewards for better sensory input, or trade some rewards for not going blind. As far as self-propelled artificial idiocies go, it is incredibly benign for how much computing power it needs and for how much can be done with that much computing power.)
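A minimal sketch of what such an "input-finder" could look like as plain optimization software (my own illustration, not anything from the thread): a simple random-search hill climber over a user-supplied computable function, which only evaluates the function and returns its best guess, with no persistent goals, no actuators, and no model of the world.

    import random

    def find_maximizing_input(f, dim, iters=10_000, step=0.5, seed=0):
        """Hill-climb with random perturbations to find an input x
        (a list of floats) giving a high value of f(x). It has no goals
        beyond returning a good input and no way to act on the world."""
        rng = random.Random(seed)
        best_x = [rng.uniform(-5, 5) for _ in range(dim)]
        best_v = f(best_x)
        for _ in range(iters):
            cand = [xi + rng.gauss(0, step) for xi in best_x]
            v = f(cand)
            if v > best_v:
                best_x, best_v = cand, v
        return best_x, best_v

    # Example: maximize a toy "design quality" function with a known peak at (1, 1, 1).
    def toy_objective(x):
        return -sum((xi - 1.0) ** 2 for xi in x)

    x, v = find_maximizing_input(toy_objective, dim=3)
    print(x, v)   # close to [1, 1, 1], value close to 0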

What I am expecting is good progress on making a rather general problem solver (but not a general mind), with it not working even remotely like the narrow speculations of science fiction say it will. The situation is similar to when you imagine some technology changing life in some very particular way - a very privileged hypothesis - and can't see any other way (so the "I see no alternative = no alternative exists" fallacy happens), and then reality turns out to be very different.