What about the case where the first punch constitutes total devastation and there is no last punch, i.e. the creation of unfriendly AI? It would seem preferable to initiate aggression rather than adhere to "you should never throw the first punch" and thereby die/lose the future.

Edit: In concert with this comment here, I should make it clear that this comment is purely concerned with a hypothetical situation, and that I definitely do not advocate killing any AGI researchers.

Sure, but under what conditions can a human being reliably know that? You're running on corrupted hardware, just as I am.

Into the lives of countless humans before you has come the thought, "I must kill this nonviolent person in order to save the world." We have no evidence that those thoughts have ever been correct; and plenty of evidence that they have been incorrect.

Sarah Connor and Existential Risk

by [anonymous] · 1 min read · 1st May 2011 · 78 comments

It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who is trying to build a friendly AI.

[redacted]

[redacted]

Further edit:

Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.

For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?