At SingularityHub. Promising title; disappointing content. The author argues by loose analogy with the Asimovian Three Laws alluded to in the title that the mere possibility of self-modification renders AI uncontrollable - without considering the possibility of fixed points in the goal computation. ("Do you really think it can be constrained?" - i.e. an argument from limited imagination.)
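
To make the fixed-point idea concrete, here is a minimal toy sketch (my own illustration, not anything from the article or the authors it paraphrases): an agent whose self-modification step evaluates proposed rewrites of its goal function using its current goal, so the only rewrites it accepts are those under which the goal is a fixed point. The state space, goal function, and candidate rewrites are all hypothetical placeholders.

```python
# Toy sketch: a self-modifying agent with goal-content integrity.
# Proposed rewrites are judged by the CURRENT goal, so goal-preserving
# rewrites are the only ones accepted. All names here are illustrative.

STATES = range(10)  # hypothetical toy world

def goal(state):
    """The agent's current goal: prefer even-numbered states."""
    return 1.0 if state % 2 == 0 else 0.0

def accepts(current_goal, proposed_goal):
    """Accept a rewrite only if the goal is a fixed point of it:
    the proposed goal must agree with the current goal on every state."""
    return all(current_goal(s) == proposed_goal(s) for s in STATES)

# A rewrite that flips the goal is rejected; an extensionally identical
# reimplementation (say, a faster routine computing the same function)
# is accepted.
flipped = lambda s: 1.0 - goal(s)
reimplemented = lambda s: float(s % 2 == 0)

print(accepts(goal, flipped))        # False - goal-altering rewrite refused
print(accepts(goal, reimplemented))  # True  - goal-preserving rewrite allowed
```

The point of the sketch is only that "can self-modify" does not entail "will modify its goals": if modifications are filtered through the current goal, goal-preserving states of the computation can be stable.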

Well gee, thanks for sending me to something disappointing :P

The article says:

Apologies to Hanson, Breazeal, Yudkowsky and SIAI for paraphrasing their complex philosophies so succinctly, but to my point: these people are essentially saying intelligent machines can be okay as long as the machines like us. Isn’t that the Three Laws of Robotics under a new name? Whether it’s slave-like obedience or child-like concern for their parents, we’re putting our hopes on the belief that intelligent machines can be designed such that they won’t end humanity. That’s a nice dream, but I just don’t see it as a guarantee.

I don't think anyone is presenting any guarantees at this stage.

It also says: "We cannot control intelligence – it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities."

A shout-out to all the human intelligences in the audience who don't think they can be controlled! Applause lights, unfortunately false: human intelligence is controlled incredibly effectively all the time, through education, morality, patriotism, religion, employment, "eld science", corporations, drugs, psychological conditioning...