LINK: Can intelligence explode?

by jake987722
13th Mar 2012 · 1 min read

I thought many of you would be interested to know that the following paper just appeared in the Journal of Consciousness Studies:

"Can Intelligence Explode?", by Marcus Hutter. (LINK HERE)

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

I have only just seen the paper and have not yet read through it myself, but I thought we could use this thread for discussion.

4 comments, sorted by top scoring

Grognor · 13y · 4 points

We already got this link.

jake987722 · 13y · 8 points

Damn. I quickly checked to see if this link had been posted, but I guess I didn't look back far enough. I assumed that if it had been, it would have been very recently, but apparently it was actually posted 10 days ago... my bad.

John_Maxwell · 13y · -1 points

I agree that jake987722 should not get karma for this post, but do we really have to vote him down? Should we push him in the direction of not being able to post because he failed to see that link? I think Less Wrong is harsh enough on newcomers without this sort of behavior...

(Post was at -2 when I found it.)

Grognor · 13y · 7 points

In the default settings, articles voted -3 or below are not shown. It's about the visibility of useless articles and the signal-to-noise ratio, not about punishing people.
