This time, it's by "The Editors" of Bloomberg View (which carries significant weight in the news world). The content is a very reasonable explanation of AI concerns, though not novel to this audience.

http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people

Directionally this is definitely positive, though I'm not sure quite how. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?

Directionally this is definitely positive, though I'm not sure quite how. Does anyone have ideas?

I'm guessing that it's the high-status people like Hawking and Musk speaking out that makes a difference. For the general public, including journalists, the names Bostrom, Hanson, and Yudkowsky don't ring a bell. The next useful step would be one of these high-status people publicly endorsing MIRI's work. There is some of that happening, but not quite in the mainstream media.

The next useful step would be one of these high-status people publicly endorsing MIRI's work.

Yes, though it's not surprising that this should come later, and less universally, than a more general endorsement of devoting thought to the problem or of Superintelligence specifically, since MIRI's current research agenda is narrower and harder to understand than either.

Right now it's an organizational priority for us to summarize MIRI's current technical results and open problems, and explain why we're choosing these particular problems, with a document that has been audience-tested on lots of smart young CS folk who aren't already MIRI fans. That will make it a lot easier for prestigious CS academics to evaluate MIRI as a project, and some of them (I expect) will endorse it to varying degrees.

But there will probably be a lag of several years between prestigious people endorsing Superintelligence or general concern for the problem, and prestigious people endorsing MIRI's particular technical agenda.

But there will probably be a lag of several years between prestigious people endorsing Superintelligence or general concern for the problem, and prestigious people endorsing MIRI's particular technical agenda.

I suspect that it would go something like

  1. Superintelligence is dangerous
  2. Better make sure it's safe
  3. But Gödel (/Löb) says you can never be 100% sure
  4. But MIRI has shown that you can be very very sure

I can't read tone in text so... are statements #3 and #4 jokes? I mean, I straightforwardly disagree with them.

It's much more likely that I misunderstand something basic about what MIRI does.

Okay, fair enough. To explain briefly:

I disagree with (3) because the Löbian obstacle is just an obstacle to a certain kind of stable self-modification in a particular toy model, and says nothing about what kinds of safety guarantees you can have for superintelligences in general.
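For anyone who doesn't know the reference, here is a rough sketch of where the obstacle comes from, reading \(\Box P\) as "the agent's formal system \(T\) proves \(P\)". Löb's theorem says

  \[
    T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P .
  \]

So if an agent reasoning in \(T\) tried to fully trust a successor that also reasons in \(T\), i.e. to accept the soundness schema \(\Box P \rightarrow P\) for arbitrary \(P\), Löb's theorem would force \(T\) to prove every \(P\). That is an obstacle in one particular formal model of self-modification, not a general impossibility result about safety guarantees.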

I disagree with (4) because MIRI hasn't shown that there are ways to make a superintelligence 90% or more likely (in a subjective Bayesian sense) to be stably friendly, and I don't expect us to have shown that in another 20 years, and plausibly not ever.

Thanks! I guess I was unduly optimistic. Comes with being a hopeful but ultimately clueless bystander.