"Even a non-anthropomorphic human intelligence still could pose threats to mankind, but they are probably manageable threats. The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming."

More at:

http://www.businessinsider.com/robot-apocalypse-2014-7


"But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making."

Oh, ok then.

I'd upvote this twice if I could.


I remember thinking (and posting) the same thing less than a year ago. Until you actually think about it, it does appear from the outside to be a relatively easily solvable problem.

I still believe it will have to be done, one way or another, though I have no idea how it will be done.

For a simple, dumb system it genuinely is easily solvable. The problems you're thinking about mainly crop up with smart systems, or anything self-improving.
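To make the "easily solvable for a dumb system" point concrete, here is a minimal sketch of a fixed, non-self-improving decision rule that weighs a hand-coded harm term against task utility. The action names, utilities, and weights are all invented for illustration; nothing here comes from the article or from any real system.

```python
# Toy sketch only: a fixed decision rule with hand-coded "human value" weights.
# All action names, utilities, and harm estimates are invented for illustration.

ACTIONS = {
    # action: (task_utility, expected_harm_to_humans)
    "route_a": (0.9, 0.4),
    "route_b": (0.7, 0.0),
    "route_c": (0.5, 0.1),
}

HARM_WEIGHT = 10.0  # hand-tuned constant; the system cannot change it


def choose_action(actions):
    """Return the action whose task utility, minus a heavy harm penalty, is highest."""
    def score(name):
        utility, harm = actions[name]
        return utility - HARM_WEIGHT * harm
    return max(actions, key=score)


print(choose_action(ACTIONS))  # -> "route_b": the zero-harm option wins
```

The rule only "works" because the system can neither rewrite HARM_WEIGHT nor invent actions outside the table; once either assumption fails, as with a smart or self-improving system, this stops being a solution, which is the point above.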

Every time you see a media article concerning a topic you are knowledgeable about, it seems wise to gauge the accuracy of the article and update your estimate of the value of getting information from similar sources.

I learned this lesson years ago, back when I was a member of a small Advance Wars forum. Long story short, the admin of that forum was murdered by another member who had a sick obsession with the admin's fiancée (who was also a member). Respectable news sources took months to get the story straight; all the initial reports were "war game fanatic" this and "online dispute turned murderous" that. Meanwhile, I pieced most of the story together in days by reading the appropriate forum posts. Granted, I probably burned a lot more time doing that than some journalist with a deadline would have (1 to 3 days of non-stop browsing), and I already had a background in that community. But then you realize that this is the kind of attention and care journalists give to every story, and your confidence in the media plummets through the floor and into the basement.

Interestingly, this episode caused me to update in the direction of The Daily Mail and The Sun being more accurate than higher-status competitors like the BBC; they did not avoid fnords, but they were the first to get the basic facts of the case right.

I don't read the news anymore.

"I thought it'd be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover, but four big names in the area declined to talk to me. A fifth person with robo street cred told me on background that people in the community fear that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough."

I wonder who these top 4-5 people are, in the author's opinion?


I think this article might benefit from some definitional clarity. I'm going to throw out some potential definitions I'm thinking of (although there may be others):


Basic Robotic Ethics: (Tactical Problem)

There exist difficult decisions of a defined scope that humans have trouble making right now (e.g., given this intelligence and these images, is the following person a valid military target?).

If technology continues to advance, in the future more of these decisions will be made by robots.

We need to decide how to program robots making these decisions in a way that we can accept.

Basic Artificial Intelligence Ethics: (Strategic Problem)

There exist non-human entities that are beyond our ability to easily make friendly by rewriting their code right now (for example, corporations and governments).

If technology continues to advance, in the future there will exist non-human entities, run by artificial intelligences, that are even more powerful and larger in scope.

Barring advances in friendliness research, these will likely be even more difficult to recode.

Advanced Artificial Intelligence Ethics: (Generational Problem)

We already have difficulty passing our ethics down to the next generation, and improving upon them, right now.

If technology continues to advance, at some point, the Artificial Intelligence described in Basic Artificial Intelligence Ethics might be programming the robots described in Basic Robotic Ethics and improving on its own programming.

We need to ensure that this occurs in a manner that is friendly.


Now that I've defined what I'm thinking of, I can say that the article's writer appears to be discussing all of these different problems as if they were practically the same thing, even though several of the very quotes he uses seem to politely imply he's making scope errors. I'm guessing some of the people didn't want to talk to him because they got the impression he didn't understand enough basic information to get it right.

Amusingly, I just wrote an (I think better) article about the same thing.

http://www.makeuseof.com/tag/heres-scientists-think-worried-artificial-intelligence/

Business Insider can probably muster more attention than I can, though, so it's a toss-up as to who's actually being more productive here.