Your opportunity to weigh in and get some reasoned views widely heard:

The Men Trying To Save Us From the Machines

[anonymous] 11y

Wow, the inferential gap is terrifyingly large.

[This comment is no longer endorsed by its author]

Yeah, the very first (upvoted) comment suggests that we don't have to worry about superhumanly intelligent AI, because humans "have things like EMP and nukes, not even the best AI is capable of thwarting getting bombed or nuked".

Luckily, a comment soon after points out that the AI may use a network of machines, and may cooperate with powerful human organizations.

On the other hand, the last (upvoted) comment at this time suggests that we don't have to worry about our machine overlords, because the more intelligent someone is, the less able they are to cooperate. Therefore, two superhuman robots would most likely start fighting each other, not humankind.

Well, at least we have some useful feedback, which could be used to build a FAQ oriented towards this part of the population.

Huh, I'm surprised that Slashdot is still around. Sure, it was all the rage in 1997/98, like Reddit was in 2010, and it pioneered a lot of interesting tools and ideas, but it jumped the shark shortly after the IPO. Given how little it is mentioned elsewhere, I assume that it has lost its relevance by now. How large is the inferential distance on Reddit? I recall Luke's AMA, but I'm sure there were more discussions.

How large is the inferential distance on Reddit?

Reddit isn't a single community; it's a forum system with a lot of disparate communities that don't necessarily have all that much to do with each other. Think of it as less like Slashdot, and more like a private Usenet with more moderated groups.

It was still getting more than 3 million visitors per month, as of 2012 (source)

In r/futurology there is a lot more knowledge than average about singularity-related issues. Still, many people there aren't aware of the dangers of AI, or don't believe in them, or don't believe AI is even possible. Most of the futurists I run into seem to think copying human brains into machines or uploading human intelligence will happen instead of the runaway utility maximizers that are talked about here. And there is also a lot of optimism about the future, a sense that whatever happens will be good in the end, as if the world were a movie plot.

Reddit in general doesn't seem to talk about it much. I wouldn't be surprised if many people didn't even know what the singularity was, let alone take it seriously. This is just my impression from what I have seen, though; a survey or something would be more scientific, if you could get people to take it.

EDIT: Would -> wouldn't

The averaged opinion on Slashdot is useless. The mainstream Slashdot view is unimportant and boring.

That's not helpful on multiple levels. First, it is extremely easy to fall into a death spiral when one quickly labels those who disagree with you as "unimportant and boring". Second, given the techie orientation of Slashdot, its readers are exactly the sort of people that anyone concerned about these issues should be trying to convince. So at minimum, one should read what they have to say, so one knows what to expect and how to respond in a way that explains the concerns.

[anonymous] 11y

Yeah, the comments on there were...not very well thought out. My intuition is that a lot of people went "Hey, this looks insane", then went back and came up with a justification other than "absurdity heuristic", and then didn't bother checking that the reason they came up with really made any sense.

Hence our opportunity to add well-thought-out comments to the thread on /., and thereby improve how the subject is perceived by a not-insignificant fraction of the sort of people who might be easily persuaded to take an interest in this topic.