[anonymous] · 9y · 12 points

No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating - or have already advocated - the murder of scientific researchers. Should any of them get murdered (and as I pointed out in my original comment, which I later redacted in the hope that, since the OP had redacted his post, this would all blow over, Ben Goertzel has reported getting at least two separate death threats from people who have read the SIAI's arguments, so this is not as low a probability as we might hope), then the finger will point rather heavily at the people in this thread. Murdering people is wrong, but advocating murder on the public internet is not just wrong but UTTERLY FUCKING STUPID.

No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating - or have already advocated - the murder of scientific researchers.

Huh? People here often advocate killing a completely innocent fat guy to save a few more people. People even advocate torturing someone for 50 years so that others don't get dust specks in their eyes...

Vladimir_Nesov · 9y · 4 points

I of course agree with this, but this consideration is unrelated to the question of what constitutes correct reasoning. For example, it shouldn't move you to actually take the opposite side in the argument and actively advocate it, and creating an appearance of doing so doesn't seem to promise comparable impact.
wedrifid · 9y · 2 points

This is not a sane representation of what has been said in this thread. I also note that by taking an extreme position against preemptive strikes of any kind, you are pitting yourself against the political strategy of most nations on earth, and certainly against that of the nation from which most posters originate. For that matter, I also expect state-sanctioned military or paramilitary organisations to be the groups most likely to carry out any necessary violence for the prevention of an AGI apocalypse.

Sarah Connor and Existential Risk

by [anonymous] · 1 min read · 1st May 2011 · 78 comments · -9 points


It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
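To make that race intuition a bit more concrete, here is a minimal Monte Carlo sketch (not from the original post) under the assumption that a friendly AI requires strictly more work than an uncaring one; the work-unit counts and the exponential model for each chunk of work are purely illustrative, not estimates.

```python
# Toy model: two projects race to completion. The "friendly" project must do the
# same base work as the "uncaring" one, plus extra friendliness work on top.
# All numbers below are hypothetical and chosen only to illustrate the argument.
import random

random.seed(0)

def completion_time(work_units, mean_per_unit=1.0):
    """Total time to finish `work_units` independent chunks of work."""
    return sum(random.expovariate(1.0 / mean_per_unit) for _ in range(work_units))

trials = 10_000
uncaring_first = 0
for _ in range(trials):
    uncaring = completion_time(work_units=50)        # base AGI work only
    friendly = completion_time(work_units=50 + 20)   # base work plus friendliness work
    if uncaring < friendly:
        uncaring_first += 1

print(f"Uncaring project finishes first in {uncaring_first / trials:.0%} of trials")
```

Under these assumptions the uncaring project wins the race in the large majority of trials, which is all the quoted argument needs.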

[redacted]

[redacted]

further edit:

Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.

For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?