[ Question ]

Best arguments against worrying about AI risk?

by Chris_Leong, 23rd Dec 2018, 16 comments

Since so many people here (myself included) are either working to reduce AI risk or would love to enter the field, it seems worthwhile to ask what the best arguments against doing so are. This question is intended to focus on existential/catastrophic risks, not things like technological unemployment or bias in machine learning algorithms.

4 Answers

Some outside view arguments:

  • Forecasting is hard, and experts disagree with the AI risk community (though this is less true now, and may be a difference in values rather than beliefs, i.e. longtermism vs. common-sense morality)
  • Past doomsday predictions have all failed to materialize. (Anthropics might throw a wrench in this, but my impression is that predictions of non-existential catastrophes have also been made and haven't come true.) Even if you think you should focus solely on x-risk, this suggests you should focus on the ones that a large group of people agree are x-risks.

And some inside view arguments:

  • The underlying causes of x-risk scenarios also lead to problems before superintelligence, e.g. reward hacking. We'll notice these problems when they occur and correct them. (Note that these problems won't be corrected in the near term, because the failures are so inconsequential that they aren't worth correcting.)
  • Powerful AI will be developed by large organizations (companies or governments), which tend to be very risk averse and so will ensure safety automatically.
  • Timelines are long and we can't do much useful work on the problem today.

There are probably more that I would find compelling; I did not spend much time on this.

Some of what seem to me to be good arguments against entering the field, depending on what you count as the field:

  • We may live in a world where AI safety is either easy or almost impossible to solve. In such cases it may be better to work on, e.g., global coordination or the rationality of leaders.
  • It may be the case the "near-term" issues with AI will transform the world in a profound way / are big enough to pose catastrophic risks, and given the shorter timelines, and better tractability, they are higher priority. (For example, you can imagine technological unemployment + addictive narrow AI aided VR environments + decay of shared epistemology leading to unraveling of society. Or narrow AI accelerating biorisk.)
  • It may be the case that useful work on reducing AI risk requires very special talent / judgment calibrated in special ways / etc., and that the many people who want to enter the field will mostly harm it, because the people who should start working on it will be drowned out by the noise created by the large mass.

(Note: I do not endorse these arguments. Also, they do not answer the part about worrying.)

Scott Aaronson wrote about it last year, and it might count as an answer, especially the last paragraph:

I suppose this is as good a place as any to say that my views on AI risk have evolved.  A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game.  But now that we know these things, I think intellectual honesty requires updating on them.  And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.”  (Related: Eliezer Yudkowsky’s There’s No Fire Alarm for Artificial General Intelligence.)
Who knows how much of the human cognitive fortress might fall to a few more orders of magnitude in processing power?  I don’t—not in the sense of “I basically know but am being coy,” but really in the sense of not knowing.
To be clear, I still think that by far the most urgent challenges facing humanity are things like: resisting Trump and the other forces of authoritarianism, slowing down and responding to climate change and ocean acidification, preventing a nuclear war, preserving what’s left of Enlightenment norms.  But I no longer put AI too far behind that other stuff.  If civilization manages not to destroy itself over the next century—a huge “if”—I now think it’s plausible that we’ll eventually confront questions about intelligences greater than ours: do we want to create them?  Can we even prevent their creation?  If they arise, can we ensure that they’ll show us more regard than we show chimps?  And while I don’t know how much we can say about such questions that’s useful, without way more experience with powerful AI than we have now, I’m glad that a few people are at least trying to say things.
But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later.  Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI.  Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.

A few others have pointed to this, but I'd say more fully that, to me, the main reason not to worry about AI risk is that AI alignment is likely intractable in theory (although in practice I think we can dramatically reduce the space of AI minds we might create by removing lots of possibilities that are obviously unaligned, and thus increase the probability that we end up with an AI that's friendly to humanity anyway). The problem is that we're asking an AI to reason correctly about something we don't have a good way to observe and measure (human values), and the methods we do have produce unwanted results via Goodharting. Of course this is, as I think of it, a reason not to worry, but it is not a reason not to work on the problem: even if we will inevitably live with a non-trivial risk of AI-induced extinction, we can still reduce that risk.
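
To make the Goodharting point concrete, here is a toy sketch (my own, purely illustrative; the functions and numbers are invented, not taken from any real system or from the answer above) of an agent that optimizes a measurable proxy for an unmeasured true objective: the proxy keeps improving while the thing we actually care about peaks and then collapses.

```python
# Toy illustration of Goodhart's law (illustrative only; the functions and
# numbers are made up for this sketch).

def true_value(x):
    # Stand-in for "human values": more x helps at first, but extremes hurt.
    return x - 0.1 * x ** 2

def proxy_value(x):
    # What the agent can actually measure: correlated with true_value for
    # small x, but missing the penalty term entirely.
    return x

# Greedy hill-climbing on the proxy.
x = 0.0
for step in range(1, 51):
    candidate = x + 1.0
    if proxy_value(candidate) > proxy_value(x):
        x = candidate
    if step in (5, 10, 25, 50):
        print(f"step {step:2d}: proxy = {proxy_value(x):5.1f}, true = {true_value(x):7.1f}")

# The proxy climbs forever, while the true objective peaks around x = 5 and
# then collapses: the optimizer "succeeds" by the measure it was given and
# fails by the goal we meant.
```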

Not exactly what you were meaning to ask for, maybe, but I think it contributes some flavor to the idea that we can still work on a problem and care about it even if we don't much worry about it.