The notion of AI risk might also have seemed less compelling in a period before widely networked computers. Sometimes people say that you could just pull the plug on a computer that misbehaved, which seems a little silly in an era when physical installations can be damaged by hacking and it can be impossible to get software removed from the Internet once it has been uploaded... but it probably felt a lot more plausible in an era when networking was mostly limited to military systems at first, and university networks later.

[ Question ]

Did AI pioneers not worry much about AI risks?

by lisperati · 1 min read · 9th Feb 2020 · 9 comments



It seems noteworthy just how little the AI pioneers from the 40s-80s seemed to care about AI risk. There is no obvious reason why a book like "Superintelligence" couldn't have been written in the 1950s, but for some reason that didn't happen... any thoughts on why this was the case?

I can think of three possible reasons for this:

1. They actually DID care and published extensively about AI risk, but I'm simply not well enough schooled in the history of AI research.

2. Deep down, people involved in early AI research knew that they were still a long, long way from achieving significantly powerful AI, despite the optimistic public proclamations made at the time.

3. AI risks are highly counter-intuitive, and it simply took another 60 years of thinking to understand them.

Anyone have any thoughts on this question?


1 Answer

Would you say it's taken particularly seriously NOW? There are some books about it, and some researchers focusing on it, but that's a very tiny portion of the total thought put into the topic of machine intelligence.

I think:

1) About the same percentage of publishing on the overall topic went to risks then as now. There's a ton more on AI risks now because there are 3 orders of magnitude more overall thought and writing on AI generally.

2) This may still be true. Humans aren't good at long-term risk analysis.

3) Perhaps more than 60 years of thinking will be required. We're beginning to ask the right questions (I hope).