An article I wrote has just appeared, in hardcopy, in Galileo, Israel's top popular-science magazine.
It is titled "Superhuman Intelligence, Unhuman Intelligence" (a bit of wordplay: "super-" and "un-" are homophones in Hebrew).
You can read it here. [Edit: Here's an English version on the Singularity Institute site.]
The cover art, the "I, Robot" images, and the tag line ("Artificial Intelligence: Can we rein in the golem?") are a bit off; I didn't choose them, but that's par for the course.
To the best of my knowledge, this is the first feature article giving an overview of Friendly AI (FAI) in any popular-science publication, whether online or in hardcopy.
Here is the introduction to the article. (It avoids weasel words; all necessary caveats are given in the body of the article.)
In the coming decades, engineers will build an entity with intelligence on a level that can compete with humans. This entity will want to improve its own intelligence, and will be able to do so. The process of improvement will repeat until it reaches a level far above that of humans; the entity will then be able to achieve its goals efficiently. It is thus essential that its goals be good for humanity. To guarantee this, the correct goals must be defined before this intelligence is built.

32 comments

This seems more appropriate to Discussion than Main. (I'll try to comment more when I've had time to read the article. My Hebrew is not amazing, so reading this will take time.)

Artificial Intelligence: Can we rein in the golem?

Love it from a purely humorous angle. Now UFAI has a cultural flavor!

Good point; now that you mention it, the tag-line is not too bad. The golem is a legendary example of an intelligent, though not superintelligent, entity that poses danger as it carries out its instructions to the letter. Luke and Louie used a golem for their thought experiment.

And though we cannot hope to control our future superintelligence, the tag-line is at least phrased as a question.

Good stuff.

Any idea how well the article was received?

It just came out, but I am certainly interested in seeing how it is received.

I think that pop science magazines have an important role in giving social validation to new scientific ideas.

Academic publishing is so vast that it is hard to tell which ideas are good; the ordinary popular media have little concern for accuracy in reporting on science, but good pop science magazines often do a decent job as gatekeepers, explaining the true state of the science.

Good work!

Move to Discussion.

As an aside, I seriously think we need to start considering general AI stuff as off-topic again.

It's not "general AI stuff", it's about Friendly AI, as suggested by the post's title.

Joshua, congrats on publishing this.

In this context I put Friendly AI in the category of "general AI stuff".

The important part here is that it's about FAI, not about the art of human rationality.

As long as we're allowing some discussion on off-topic subjects that are not "the art of human rationality", can we please get rid of the useful off-topic subjects last?

I'd rather diminish the discussion of off-topic subjects, and get rid of the noisiest topics first.

AI and FAI are notable in that, for off-topic subjects, people like to talk about them a lot.

I'd be more likely to agree if there were somewhere else to productively discuss Singularity/FAI issues.

I agree that there should be somewhere else to discuss those things.

Does that still exist?

The site's there, but I don't know how active the community still is.

If that community couldn't sustain itself, is there reason to think a subreddit here would prosper any better?

The problem with discussion of AGI, nanotechnology, and all the other "Shock Level N" memes for N ≥ 2 is that there is no real subject matter. For the most part it's just verbal geekery about cool ideas that no-one is actually doing anything about, because they're too far beyond current capabilities. Fine to engage in for a while at an SF con or in a pub with other geeks, but there's only so long you can be at a party before realising you're just seeing the same ideas over and over and it's time to leave.

I never read SL4 -- is that an accurate description of why it died?

I've entertained a similar hypothesis myself.

As for its relation to SL4, I'd say that it sounds roughly right - I wouldn't go as far as to say that there was "no real subject matter", but it's true that the list eventually ran out of worthwhile things to say that hadn't been already discussed.

Aha: a relevant discussion took place on the list about a year ago, hereabouts.

We really ought to have a subreddit if people want to talk about SL4/FAI topics here. A different site on the same engine would be even better.

I'm 100% for this. If there were such a site I would probably permanently relocate there.


As an aside, I seriously think we need to start considering general AI stuff as off-topic again.

Perhaps the Singularity Institute and the Center for Applied Rationality should have separate community blogs?

This is theoretically a good idea, but I think at present there is so much crossover between the communities that it would be unwise to make such a move.

What about a softer separation, like the one between Main and Discussion posts? Accounts, karma, and code would be shared, but with different sections and different articles.

Besides, the subject matters of the two have significant overlap. Where would you put formal analysis/development of various decision theories, for example?

Speaking of which, where is all that good stuff put as it stands?

In all seriousness, you may want to try Stuart Armstrong's user page.

Put it on either one and link it from the other.

I like AI stuff.

I hope you realize that your liking of the off-topic subject is not relevant to this discussion.

As do I.

As an aside, I seriously think we need to start considering general AI stuff as off-topic again.

+1

It's interesting, but it's not something that fits the tagline: "refining the art of human rationality".