http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/

Author: Huw Price (Bertrand Russell Professor of Philosophy at Cambridge)

The article is mainly about the Centre for the Study of Existential Risk and the author's speculation about AI (and his association with Jaan Tallinn). Nothing made me really stand up and think "This is something I've never heard on Less Wrong", but it is interesting to see existential risk and AI getting more mainstream attention, and the author in effect taboos the word "intelligence" by deliberately declining to define it.

 

The comments all miss the point or reproduce cached thoughts with frustrating predictability. I think I find them so frustrating because these do not seem to be unintelligent people (by the standards of the internet, at least; their comments have good grammar and vocabulary), but they are not really engaging with the argument.

23 comments

PSA: Want to have a positive impact, quickly? Go to the NYT page linked in the OP and leave a comment.

EDIT: More and more nonsense comments, highly upvoted too, but not one from an LW'er. This is how public perception is shaped, and the fruit is hanging so low it should be called a potato.

Edit: Done.

I didn't feel like entering the morass of the comment debate, but I added a simple informational comment, mentioning FHI and SI.

ygert

This is one of the only mainstream articles I have ever seen that actually "gets the point" about how AI could be dangerous. The author takes AI quite seriously, and understands that an AI can be dangerous even if it is not malicious. That puts this article miles ahead of basically every other similar piece.

The thing about this article that scores the most points with me, though, is the lack of any mention of the various works of fiction that try to talk about AI. All too often, the author of this kind of article starts talking about how robots trying to kill us would be just like Terminator, or about how Asimov's three laws of robotics are the kind of thing needed to deal with AI. The author of this article very wisely avoided the pitfall of generalizing from fictional evidence, so thumbs up from me.

Huw Price is one of my favorite contemporary philosophers. Here is his list of publications, which has interesting papers on decision theory, causation, the arrow of time, the interpretation of quantum mechanics, naturalism, and truth.

I second the recommendation. His work on the arrow of time is classic, of course, but I'd particularly encourage people to read his stuff on naturalism and truth, especially the papers collected in his book Naturalism Without Mirrors (most of which are available for download on his website, I think). A very useful (and, in my opinion, largely correct) counterpoint to the LW orthodoxy on these subjects.

For a quick introduction to his approach, try his three Descartes lectures, available here.

> For a quick introduction to his approach, try his three Descartes lectures, available here.

Thanks for that.

I read NWM as well as a number of his other papers earlier this year, and while I enjoyed them a great deal I still struggle to understand the basic motivations for, and the plausibility/coherence of, anti-representationalism/global expressivism. Why not rest content with commonsensical expressivism within restricted domains (culture/psychology/morals)? Total metaphysical and scientific expressivism makes little sense to me; it seems obvious that there must be some underlying medium that governs our "discursive practices". I haven't read FFT (waiting on the 2nd ed), but I don't see a semantic/truth theory trumping my confidence in science as a method of representational success.

Would appreciate pointers, thoughts or conversation.

The comments are somewhat disappointing: uncharitable readings of the article and no real attempt to engage with the thrust of the argument.

Also, the oft-repeated phrase, with respect to the risks of technology, that we face "losing our humanity" desperately needs to be tabooed.

My draft attempt at a comment. Please suggest edits before I submit it:

The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, and so on. No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation).

Here are two websites that go into much greater detail about the problem:

AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/

Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/
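A rough back-of-envelope check of the draft's claim that we already have more computational power available than a human brain, sketched in Python. The FLOPS figures are my own order-of-magnitude assumptions (commonly cited brain estimates span several orders of magnitude); they are not taken from the comment or the article.

```python
# Back-of-envelope check of "we already have more computational power than a human brain".
# All figures are rough order-of-magnitude assumptions, not established facts.

BRAIN_FLOPS_LOW = 1e13        # low-end estimate of brain-equivalent operations per second
BRAIN_FLOPS_HIGH = 1e17       # high-end estimate
SUPERCOMPUTER_FLOPS = 1.7e16  # roughly a top supercomputer circa 2013, peak performance

def brain_equivalents(machine_flops, brain_flops):
    """How many assumed brain-equivalents the machine provides."""
    return machine_flops / brain_flops

for label, brain in [("low brain estimate", BRAIN_FLOPS_LOW),
                     ("high brain estimate", BRAIN_FLOPS_HIGH)]:
    print(f"{label}: machine/brain ratio = {brain_equivalents(SUPERCOMPUTER_FLOPS, brain):.2g}")
```

On the low brain estimate the claim comes out true by a large margin; on the high estimate it does not, which is why the draft's "ALREADY" should probably be hedged.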

> The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret).

In a word, IARPA. In a sentence:

The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk/high-payoff research programs that have the potential to provide our nation with an overwhelming intelligence advantage over future adversaries.

They are large and well-funded.

> the author in effect taboos the word "intelligence" by deliberately declining to define it.

Whatever the author's motivations, that definition is unnecessary in the present context. As Chalmers noted (sect. 3), the key premises in the argument for the singularity can be formulated without relying on the concept of intelligence. What is needed instead is the notion of a self-amplifying capacity, coupled with the claims that (1) we can create systems that exhibit that capacity to a greater degree than we do, and (2) increases in that capacity will be correlated with changes in some property or properties that we care about.
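A toy numerical illustration of a self-amplifying capacity, written as a minimal Python sketch. The constant 10% gain per generation is an assumption chosen purely for illustration, not Chalmers' formulation; the point is only that a fixed proportional self-improvement compounds geometrically.

```python
# Toy model of a self-amplifying capacity: each generation designs a successor
# whose capacity exceeds its own by a fixed fraction (assumed here for illustration).

def self_amplify(initial_capacity, gain_per_generation, generations):
    """Return the capacity trajectory under constant proportional self-improvement."""
    capacities = [initial_capacity]
    for _ in range(generations):
        capacities.append(capacities[-1] * (1 + gain_per_generation))
    return capacities

# With a modest 10% gain per generation, capacity passes 100x the start within 50 generations.
trajectory = self_amplify(initial_capacity=1.0, gain_per_generation=0.10, generations=50)
print(f"After 50 generations: {trajectory[-1]:.0f}x the starting capacity")
```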

[anonymous]

If people would just read Intelligence Explosion: Evidence and Import, these debates would get a lot further.

Different audience, different language. I'm just impressed that an NYT op-ed actually contained these sentences:

My case for these conclusions relies on three main observations. The first is that our own intelligence is an evolved biological solution to a kind of optimization problem, operating under very tight constraints of time, energy, raw materials, historical starting point and no doubt many other factors. [...] Second, this biological endowment, such as it is, has been essentially constant, for many thousands of years. It is a kind of fixed point in the landscape, a mountain peak on which we have all lived for hundreds of generations. [...] my third observation – we face the prospect that designed nonbiological technologies, operating under entirely different constraints in many respects, may soon do the kinds of things that our brain does, but very much faster, and very much better, in whatever dimensions of improvement may turn out to be available.

That's a very gentle nudge toward a radical shift in how intelligence is generally thought of. Simple analogies and simple terminology (except for 'optimization problem', which I think could be understood from context) for people reading the paper over a bowl of cereal.

[anonymous]

Fair enough; I liked the article too.

I was responding to the last paragraph of the OP, not the first.

It's a pretty accessible article. I'm not fully informed on the AI debate, but does anyone know if there are good papers discussing:

> Any inherent increase in utility from the creation of additional sentience?
> The economic impacts of the various ways AI could be introduced?

It seems we have a new avatar for Clippy: the automated IKEA furniture factory.