The NPR show All Things Considered ran a short story on the Singularity, including interviews with Eliezer Yudkowsky and others involved with SIAI:

http://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention


The segment takes the idea more seriously than previous coverage, and the trend toward "SIAI may be fringe, but mainstream AI researchers are starting to think about possible dangers as well" is a welcome one.

On the other hand, the fact that EY's one quote included the phrase "intergalactic civilization" made me cringe: it sounds too much like sci-fi to register with even the brightest and most rational of NPR's demographic.

If you really wanted AI researchers and other academics to take you seriously, then making the term Singularity part of the name of your charity was a bad idea in the first place. Getting the mainstream to support you might work the other way around, though: almost nobody will care about some academic treatment of friendly AI, but a lot of people will read on when someone starts talking about an intergalactic civilisation being destroyed by superhuman AI.

This comment raised an interesting question: is it more important to get noticed/supported by other AI researchers, or by the general public?

I expect the AAAI has cold feet, since to them the SIAI probably looks like a bunch of amateur upstarts spreading FUD about everyone else's efforts being dangerous.

Funding advanced machine intelligence research a decade or so before it has much of a chance to pay off is not easy, and - from the point of view of many others in the field - the SIAI can easily appear to be hindering as much as helping.

I've seen a number of researchers complaining about this - most recently Eray Ozkural:

But now, your people are making AGI code look like a nuclear warhead. Or worse, because it could go off on its own! Fear! People!! Fear!!!!! Are you trying to prevent us from getting any funding for code’s sake?

It does look to me as though that is part of the plan.

Exactly. My understanding is that AGI researchers and SIAI are inevitably going to be at odds, because they have nearly opposite goals: SIAI is mostly concerned with preventing catastrophe, while AGI researchers want to achieve big things as quickly as possible (to attract grants, private funding, etc.).

I am not sure they are so very different. SIAI is one of many organisations that want to be in at the birth of the future superintelligence. Each player realises the significance of getting there first. Presumably, as we get closer, the FUD marketing - and the teams jabbing at each other - will ramp up.

It really depends who you want to attract. I bet there are more people who like science fiction than people with strong interests in serious science and policy. But they may not be the most influential or useful people.

If you want to get attention from the best of the general public, it seems to me there are two obvious routes. One, "SIAI is in the same category as artificial intelligence research." Two, "SIAI is in the same category as public policy advocacy." Doing the first would involve talking to and winning over some academic scientists. Doing the second, which I see less discussion of, would involve packaging the project as protection against potentially dangerous technologies, and associating it with people who worry about cyberattacks, biological weapons, lab-grown epidemics, and so on.

Not only is the phrase "intergalactic civilization" a turn-off, but the whole end-of-the-world trope could lead many listeners to picture SIAI wearing tinfoil hats. Get the public and the commentariat thinking about the more obvious threats first - such as massive structural unemployment. Once they learn enough, they will stumble upon the right evidence.

I tried to steer the discussion that way on NPR's website. No luck.

Question: if the radio piece had mentioned "lesswrong.com", I'd expect a small wave of people arriving here to check things out. If such a mention had been arranged in advance, would that hypothetical outcome likely have been better or worse than what actually happened?

It led off with HAL 9000? I... am not impressed.

(I lolled a bit at "they hate it at the Institute when you quote Terminator," coming right after they quote Terminator.)

You're right that Terminator's depiction of AI is awful, but HAL doesn't seem that bad at all, at least as far as mainstream depictions go.

It was more the "generalizing from fictional evidence" part than the specific reference to HAL. I do agree with you that it's a decent treatment.

Interesting. It's cool to get the outside view, and to have these ideas spreading. This is the third piece of mainstream media coverage of Singularity or transhumanist issues I've seen this past month. One was a section on uploading, in a piece about how our online personas remain available after we're dead, and the other was posted here.

This piece struck me as more pessimistic than the others, but not unrealistically so. It also took the Singularity more seriously than the Carl Zimmer piece. So, progress.

Oops, you switched the parentheses and the brackets.

Drat. Fixed. Thanks for telling me!