Will the world's elites navigate the creation of AI just fine?

by lukeprog · 1 min read · 31st May 2013 · 266 comments


AI Risk · Expertise
Personal Blog

One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?

Some reasons for concern include:

  • Otherwise smart people say unreasonable things about AI safety.
  • Many people who believed AI was around the corner didn't take safety very seriously.
  • Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
  • AI may arrive rather suddenly, leaving little time for preparation.

But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):

  • If AI is preceded by visible signals, elites are likely to take safety measures. Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change. Availability of information is increasing over time.
  • AI is likely to be preceded by visible signals. Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. "Human-level performance at X" benchmarks influence perceptions and should be more exhaustive and come more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible by the AI researchers of the time.
  • Therefore, safety measures will likely be taken.
  • If safety measures are taken, then elites will navigate the creation of AI just fine. Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs with easily tailored tendencies to act may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." Arms races are not insurmountable.

The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)

Personally, I am not very comforted by this argument because:

  • Elites often fail to take effective action despite plenty of warning.
  • I think there's a >10% chance AI will not be preceded by visible signals.
  • I think the elites' safety measures will likely be insufficient.

Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.

In particular, I'd like to know:

  • Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
  • What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purposes of illuminating elites' likely response to AI risk)?
  • What are some good studies on elites' decision-making abilities in general?
  • Has the increasing availability of information in the past century noticeably improved elite decision-making?



RSI capabilities could be charted, and are likely to be AI-complete.

What does RSI stand for?

"Recursive self-improvement."

Okay, I've now spelled this out in the OP.

Lately I've been listening to audiobooks (at 2x speed) in my downtime, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but doing only the "collection" stage, not the "analysis" stage.

I'll post quotes from the audiobooks I listen to as replies to this comment.

lukeprog (7y): From Watts' Everything is Obvious [http://www.amazon.com/Everything-Obvious-Once-Know-Answer-ebook/dp/B004DEPHGQ/]:
lukeprog (7y): More (#1) from Everything is Obvious:
lukeprog (7y): More (#2) from Everything is Obvious:
lukeprog (7y): More (#4) from Everything is Obvious:
lukeprog (7y): More (#3) from Everything is Obvious:
lukeprog (7y): From Rhodes' Arsenals of Folly [http://www.amazon.com/Arsenals-Folly-Richard-Rhodes-ebook/dp/B000W93DEO/]:
lukeprog (7y): More (#3) from Arsenals of Folly:
shminux (7y): Amazing stuff. Was the world really as close to a nuclear war in 1983 as in 1962?
lukeprog (7y): More (#2) from Arsenals of Folly: And, a blockquote from the writings of Robert Gates:
lukeprog (7y): More (#1) from Arsenals of Folly:
lukeprog (7y): More (#4) from Arsenals of Folly:
lukeprog (7y): From Lewis' Flash Boys: So Spivey began digging the line, keeping it secret for 2 years. He didn't start trying to sell the line to banks and traders until a couple of months before the line was complete.
lukeprog (7y): More (#1) from Flash Boys:
lukeprog (7y): There was so much worth quoting from Better Angels of Our Nature that I couldn't keep up. I'll share a few quotes anyway.

More (#3) from Better Angels of Our Nature:

let’s have a look at political discourse, which most people believe has been getting dumb and dumber. There’s no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the

... (read more)
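The word-counting heuristic in the passage above can be sketched in a few lines of Python. This is only a toy illustration, not Tetlock's actual scoring procedure; the marker-word lists are just the examples Pinker gives, and the scoring function is a made-up stand-in for the real integrative-complexity rating.

```python
import re

# Example marker words from the passage: low-complexity passages hammer one
# opinion home with absolutist words; higher-complexity passages hedge and qualify.
LOW_MARKERS = {"absolutely", "always", "certainly", "definitively", "entirely",
               "forever", "indisputable", "irrefutable", "undoubtedly", "unquestionably"}
HIGH_MARKERS = {"usually", "almost", "but", "however", "maybe"}

def complexity_signal(passage: str) -> float:
    """Crude signal: (hedging words - absolutist words) per 100 words."""
    words = re.findall(r"[a-z]+", passage.lower())
    if not words:
        return 0.0
    low = sum(w in LOW_MARKERS for w in words)
    high = sum(w in HIGH_MARKERS for w in words)
    return 100.0 * (high - low) / len(words)

print(complexity_signal("This is absolutely, unquestionably certain."))  # negative
print(complexity_signal("Usually true, but maybe not in every case."))   # positive
```

As the passage notes, real integrative-complexity coding goes well beyond word counts, crediting passages for acknowledging multiple viewpoints and for integrating them under a higher principle; a word-count signal is only the lowest rung of that scale.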
[anonymous] (5y): Further reading on integrative complexity: Wikipedia [https://en.wikipedia.org/wiki/Integrative_complexity], Psychlopedia [http://www.psych-it.com.au/Psychlopedia/article.asp?id=297], Google book [https://books.google.com.au/books?id=A4aTuvzXPTUC&pg=PA130&lpg=PA130&dq=integrative+complexity+epidemiology&source=bl&ots=_1GyZtrlPu&sig=MH-WUWUYOzX63x9odi8XoNaJHcU&hl=en&sa=X&ved=0CC8Q6AEwAmoVChMIi9HamfzJyAIVZ8amCh3pzQ_0#v=onepage&q=integrative%20complexity%20epidemiology&f=false]

Now that I've been introduced to the concept, I want to evaluate how useful it is to incorporate into my rhetorical repertoire and vocabulary, and to determine whether it can inform my beliefs about assessing the exfoliating intelligence of others (a term I'll coin to refer to that intelligence/knowledge which another can pass on to me to aid my vocabulary and verbal abstract reasoning, my neuropsychological strengths, which I try to max out just like an RPG character). At a less meta level, knowing the strengths and weaknesses of the trait will inform whether I choose to signal it or dampen it from here on, and in what situations. It is important for imitators to remember that whatever IC is associated with does not necessarily imply those associations to lay others.

Strengths:
  • conflict resolution (see Luke's post)
  As listed in Psychlopedia:
  • appreciation of complexity
  • scientific proficiency
  • stress accommodation
  • resistance to persuasion
  • prediction ability
  • social responsibility
  • more initiative, as rated by managers, and more motivation to seek power, as gauged by a projective test

Weaknesses, based on Psychlopedia:
  • low scores on compliance and conscientiousness
  • seeming antagonistic and even narcissistic
  Based on the wiki article:
  • dependence (more likely to defer to others)
  • rational expectations (more likely to fallaciously assume they are dealing with rational agents)

Upon reflection, here are my conclusions: high integrative compl
lukeprog (7y): More (#4) from Better Angels of Our Nature:
[anonymous] (5y): Untrue unless you're in a non-sequential game. True under a utilitarian framework and with a few common mind-theoretic assumptions derived from intuitions stemming from most people's empathy. Woo.
lukeprog (7y): More (#2) from Better Angels of Our Nature:
lukeprog (7y): More (#1) from Better Angels of Our Nature:
lukeprog (7y): From Ariely's The Honest Truth about Dishonesty:
lukeprog (7y): More (#1) from The Honest Truth about Dishonesty:
lukeprog (7y): More (#2) from The Honest Truth about Dishonesty:
lukeprog (7y): From Feynman's Surely You're Joking, Mr. Feynman [http://www.amazon.com/Surely-Youre-Joking-Mr-Feynman-ebook/dp/B003V1WXKU/]:
lukeprog (7y): More (#1) from Surely You're Joking, Mr. Feynman:
lukeprog (7y): One quote from Taleb's AntiFragile is here [http://lesswrong.com/lw/iyo/the_inefficiency_of_theoretical_discovery/a0uo], and here's another:
lukeprog (7y): AntiFragile makes lots of interesting points, but it's clear in some cases that Taleb is running roughshod over the truth in order to support his preferred view. I've italicized the particularly lame part:
lukeprog (7y): From Think Like a Freak:
lukeprog (7y): More (#1) from Think Like a Freak:
lukeprog (7y): From Rhodes' Twilight of the Bombs [http://www.amazon.com/Twilight-Bombs-Challenges-Dangers-Prospects-ebook/dp/B003F3PKXQ/]:
lukeprog (7y): More (#1) from Twilight of the Bombs:
lukeprog (7y): From Harford's The Undercover Economist Strikes Back [http://www.amazon.com/Undercover-Economist-Strikes-Back-Economy-ebook/dp/B00DMCV624/]:
lukeprog (7y): More (#2) from The Undercover Economist Strikes Back:
lukeprog (7y): More (#1) from The Undercover Economist Strikes Back:
lukeprog (7y): From Caplan's The Myth of the Rational Voter [http://www.amazon.com/Myth-Rational-Voter-Democracies-Policies-ebook/dp/B007AIXLDI/]:
lukeprog (7y): More (#2) from The Myth of the Rational Voter:
Prismattic (7y): This is an absurdly narrow definition of self-interest. Many people who are not old have parents who are senior citizens. Men have wives, sisters, and daughters whose well-being is important to them. Etc. Self-interest != solipsistic egoism.
lukeprog (7y): More (#1) from The Myth of the Rational Voter:
lukeprog (7y): More (#3) from The Myth of the Rational Voter:
Prismattic (7y): Allow me to offer an alternative explanation of this phenomenon for consideration. Typically, when polled about their trust in institutions, people tend to trust the executive branch more than the legislature or the courts, and they trust the military far more than they trust civilian government agencies. In the period before 9/11, our long national nightmare of peace and prosperity would generally have made the military less salient in people's minds, and the spectacles of impeachment and Bush v. Gore would have made the legislative and judicial branches more salient in people's minds. After 9/11, the legislative agenda quieted down and the legislature temporarily took a back seat to the executive, while military and national-security organs became very high salience. So when people were asked about the government, the most immediate associations would have been to the parts that were viewed as more trustworthy.
lukeprog (7y): From Richard Rhodes' The Making of the Atomic Bomb [http://www.amazon.com/Making-Atomic-Bomb-ebook/dp/B008TRU7SQ/]:
lukeprog (7y): More (#2) from The Making of the Atomic Bomb: After Alexander Sachs paraphrased the Einstein-Szilard letter to Roosevelt, Roosevelt demanded action, and Edwin Watson set up a meeting with representatives from the Bureau of Standards, the Army, and the Navy... Upon being asked for some money to conduct the relevant experiments, the Army representative launched into a tirade:
lukeprog (7y): More (#3) from The Making of the Atomic Bomb: Frisch and Peierls wrote a two-part report of their findings:
lukeprog (7y): More (#1) from The Making of the Atomic Bomb: On the origins of the Einstein–Szilárd letter [http://en.wikipedia.org/wiki/Einstein%E2%80%93Szil%C3%A1rd_letter]:
lukeprog (7y): More (#5) from The Making of the Atomic Bomb:
lukeprog (7y): More (#4) from The Making of the Atomic Bomb:
lukeprog (6y): From Poor Economics:
lukeprog (7y): From The Visioneers:
lukeprog (7y): From Priest & Arkin's Top Secret America [http://www.amazon.com/Top-Secret-America-American-Security-ebook/dp/B004QX07FU/]:
lukeprog (7y): More (#2) from Top Secret America: And, on JSOC:
shminux (7y): I wonder if the security-industrial complex bureaucracy is any better in other countries.
Lumifer (7y): Which sense of "better" do you have in mind? :-)
shminux (7y): More efficient.
Lumifer (7y): The KGB had a certain aura, though I don't know if its descendants have the same cachet. Israeli security is supposed to be very good.
lukeprog (7y): Stay tuned; The Secret History of MI6 [http://www.amazon.com/Secret-History-MI6-1909-1949-ebook/dp/B0043RSKIA/] and Defend the Realm [http://www.amazon.com/Defence-Realm-Authorized-History-MI5-ebook/dp/B002UZ5J1S/] are in my audiobook queue. :)
lukeprog (7y): More (#1) from Top Secret America:
lukeprog (7y): From Pentland's Social Physics:
lukeprog (7y): More (#2) from Social Physics:
lukeprog (7y): More (#1) from Social Physics:
lukeprog (7y): From de Mesquita and Smith's The Dictator's Handbook [http://www.amazon.com/Dictators-Handbook-Behavior-Almost-Politics-ebook/dp/B005GPSLHI/]:
lukeprog (7y): More (#2) from The Dictator's Handbook:
lukeprog (7y): More (#1) from The Dictator's Handbook:
lukeprog (7y): From Ferguson's The Ascent of Money [http://www.amazon.com/Ascent-Money-Financial-History-World-ebook/dp/B0018QQQKS/]:
lukeprog (7y): More (#1) from The Ascent of Money:
gwern (7y): The Medici Bank is pretty interesting. A while ago I wrote https://en.wikipedia.org/wiki/Medici_Bank on the topic; LWers might find it interesting how international finance worked back then.
lukeprog (7y): From Scahill's Dirty Wars [http://www.amazon.com/Dirty-Wars-battlefield-Jeremy-Scahill-ebook/dp/B00B3M3TS4/]:
lukeprog (7y): More (#2) from Dirty Wars:
lukeprog (7y): More (#1) from Dirty Wars:
[anonymous] (5y): Foreign fighters show up everywhere. And now there's the whole Islamic State issue. Perhaps all the world needs is more foreign legions doing good things. The FFL is overrecruited [http://www.nybooks.com/articles/archives/2010/oct/14/hard-truth-about-foreign-legion/], after all. Heck, we could even deal with the refugee crisis by offering visas to those mercenaries. It would sure be more popular than selling visas and citizenship [http://www.ibtimes.com.au/australia-evaluates-proposed-50000-entry-fee-immigrants-gain-citizenship-1445559], because people always get antsy about inequality and having fewer downward social comparisons.
lukeprog (7y): A passage from Patterson's Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market [http://www.amazon.com/Dark-Pools-I-trading-machines-ebook/dp/B006OFHLG6/]: But it proved all too easy: the very first tape Wang played revealed two dealers fixing prices.
lukeprog (7y): Some relevant quotes from Schlosser's Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety [http://www.amazon.com/Command-and-Control-ebook/dp/B00C5R7F8G/]:
lukeprog (7y): More from Command and Control:
lukeprog (7y): More (#3) from Command and Control:
lukeprog (7y): More (#2) from Command and Control:
lukeprog (7y): More (#4) from Command and Control:
shminux (7y): Do you keep a list of the audiobooks you liked anywhere? I'd love to take a peek.

Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.


Worthwhile if you care about the subject matter:

  • Singer, Wired for War (my clips)
  • Feinstein, The Shadow World (my clips)
  • Venter, Life at the Speed of Light (my clips)
  • Rhodes, Arsenals of Folly (my clips)
  • Weiner, Enemies: A History of the FBI (my clips)
  • Rhodes, The Making of the Atomic Bomb (available here) (my clips)
  • Gleick, Chaos (my clips)
  • Wiener, Legacy of Ashes: The History of the CIA (my clips)
  • Freese, Coal: A Human History (my clips)
  • Aid, The Secret Sentry (my clips)
  • Scahill, Dirty Wars (my clips)
  • Patterson, Dark Pools (my clips)
  • Lieberman, The Story of the Human Body
  • Pentland, Social Physics (my clips)
  • Okasha, Philosophy of Science: VSI
  • Mazzetti, The Way of the Knife
... (read more)

A process for turning ebooks into audiobooks for personal use, at least on Mac:

  1. Rip the Kindle ebook to non-DRMed .epub with Calibre and Apprentice Alf.
  2. Open the .epub in Sigil, merge all the contained HTML files into a single HTML file (select the files, right-click, Merge). Open the Source view for the big HTML file.
  3. Edit the source so that the ebook begins with the title and author, then jumps right into the foreword or preface or first chapter, and ends with the end of the last chapter or epilogue. (Cut out any table of contents, list of figures, list of tables, appendices, index, bibliography, and endnotes.)
  4. Remove footnotes if easy to do so, using Sigil's Regex find-and-replace (remember to use Minimal Match so you don't delete too much!). Click through several instances of the Find command to make sure it's going to properly cut out only the footnotes, before you click "Replace All."
  5. Use find and replace to add [[slnc_1000]] at the end of every paragraph; Mac's text-to-speech engine interprets this as a slight pause, which aids comprehension when I'm listening to the audiobook. Usually this
... (read more)
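Step 5's find-and-replace can also be scripted. Here's a minimal sketch in Python, assuming the merged HTML from step 2 uses ordinary `</p>` paragraph closers (the function name and sample strings are illustrative; [[slnc_1000]] is the pause cue described in step 5):

```python
import re

def add_pauses(html: str, pause: str = "[[slnc_1000]]") -> str:
    """Append a text-to-speech pause cue to the end of every paragraph.

    Mac's speech synthesizer reads [[slnc_1000]] as a one-second silence,
    which makes paragraph boundaries audible in the generated audiobook.
    """
    # Insert the cue just before each closing paragraph tag.
    return re.sub(r"</p>", pause + "</p>", html)

html = "<p>First paragraph.</p><p>Second paragraph.</p>"
print(add_pauses(html))
# -> <p>First paragraph.[[slnc_1000]]</p><p>Second paragraph.[[slnc_1000]]</p>
```

From there, on a Mac the plain text can be rendered to audio with the built-in say command, e.g. something like `say -f book.txt -o book.aiff`.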
Dr_Manhattan (7y): VoiceDream for iPhone does a very fine job of text-to-speech; it also syncs your Pocket bookmarks and can read epub files.
lukeprog (7y): Other:
  • Roose, Young Money. Too focused on a few individuals for my taste, but still has some interesting content. (my clips [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/alur])
  • Hofstadter & Sander, Surfaces and Essences. Probably a fine book, but I was only interested enough to read the first and last chapters.
  • Taleb, AntiFragile. Learned some from it, but it's kinda wrong much of the time. (my clips [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a1ro])
  • Acemoglu & Robinson, Why Nations Fail. Lots of handy examples, but too much of "our simple theory explains everything." (my clips [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a4pj])
  • Byrne, The Many Worlds of Hugh Everett III (available here [https://dl.dropboxusercontent.com/u/163098/Byrne%20-%20The%20Many%20Worlds%20of%20Hugh%20Everett%20III%20%28text-to-speech%20audiobook%29.m4a]). Gave up on it; too much theory, not enough story. (my clips [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a3f3])
  • Drexler, Radical Abundance. Gave up on it; too sanitized and basic.
  • Mukherjee, The Emperor of All Maladies. Gave up on it; too slow in pace and flowery in language for me.
  • Fukuyama, The Origins of Political Order. Gave up on it; the author is more keen on name-dropping theorists than on tracking down data.
  • Friedman, The Moral Consequences of Economic Growth (available here [https://dl.dropboxusercontent.com/u/163098/Friedman%20-%20The%20Moral%20Consequences%20of%20Economic%20Growth%20%28audiobook%2C%20text-to-speech%29.m4a]). Gave up on it. There are some actual data in chs. 5-7, but the argument is too weak and unclear for my taste.
  • Tuchman, The Proud Tower. Gave up on it after a couple chapters. Nothing wrong with it, it just wasn't dense enough in the kind of learning I'm trying to do.
  • Foer
shminux (7y): Thanks! Your first 3 are not my cup of tea, but I'll keep looking through the top 1000 list. For now, I am listening to MaddAddam, the last part of Margaret Atwood's post-apocalyptic fantasy trilogy, which qrnyf jvgu bar zna qvfnccbvagrq jvgu uvf pbagrzcbenel fbpvrgl ervairagvat naq ercbchyngvat gur rnegu jvgu orggre crbcyr ur qrfvtarq uvzfrys. She also has some very good non-fiction, like her Massey lecture on debt [http://www.amazon.com/Payback-Shadow-Wealth-Massey-Lectures/dp/0660198304/ref=tmm_abk_swatch_0?_encoding=UTF8&sr=&qid=], which I warmly recommend.
Nick_Beckstead (7y): Could you say a bit about your audiobook selection process?
lukeprog (7y): When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to. These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.

Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a3q5] to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process; I'm not sure.

Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
ozziegooen (7y): I definitely found something similar. I've come to believe that most 'popular science', 'popular history', etc. books are on Audible, but almost anything with equations or code is not. The 'Great Courses' have been quite fantastic for me for learning about the social sciences; I found out about those recently. Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them rather uninteresting in comparison to full books and courses.
lukeprog (7y): From Singer's Wired for War:
lukeprog (7y): More (#7) from Wired for War:
[anonymous] (5y): The army recruiters say that soldiers on the ground still win wars. I reckon Douhet's prediction will come true, however crudely. Drones.
lukeprog (7y): More (#6) from Wired for War:
[anonymous] (5y): Inequality doesn't seem so bad now, huh?
lukeprog (7y): More (#5) from Wired for War:
lukeprog (7y): More (#4) from Wired for War:
lukeprog (7y): More (#3) from Wired for War:
lukeprog (7y): More (#2) from Wired for War:
lukeprog (7y): More (#1) from Wired for War:
lukeprog (7y): From Osnos' Age of Ambition:
lukeprog (7y): More (#2) from Age of Ambition:
lukeprog (7y): More (#1) from Age of Ambition:
lukeprog (7y): From Soldiers of Reason:
lukeprog (7y): More (#2) from Soldiers of Reason:
lukeprog (7y): More (#1) from Soldiers of Reason:
lukeprog (7y): From David and Goliath:
lukeprog (7y): More (#2) from David and Goliath:
lukeprog (7y): From Wade's A Troublesome Inheritance:
lukeprog (7y): More (#2) from A Troublesome Inheritance:
lukeprog (7y): More (#1) from A Troublesome Inheritance:
lukeprog (7y): From Moral Mazes:
lukeprog (7y): From Lewis' The New New Thing:
lukeprog (7y): From Dartnell's The Knowledge:
lukeprog (7y): From Ayres' Super Crunchers, speaking of Epagogix, which uses neural nets to predict a movie's box-office performance from its screenplay:
lukeprog (7y): More (#1) from Super Crunchers:
lukeprog (7y): From Isaacson's Steve Jobs [http://www.amazon.com/Steve-Jobs-Exclusive-Walter-Isaacson-ebook/dp/B004W2UBYW/]:
lukeprog (7y): More (#1) from Steve Jobs: [no more clips, because Audible somehow lost all my bookmarks for the last two parts of the audiobook!]
lukeprog (7y): From Feinstein's The Shadow World [http://www.amazon.com/Shadow-World-Inside-Global-Trade-ebook/dp/B004ULP8X4/]:
lukeprog (7y): More (#8) from The Shadow World:
lukeprog (7y): More (#7) from The Shadow World:
lukeprog (7y): More (#6) from The Shadow World:
lukeprog (7y): More (#5) from The Shadow World:
lukeprog (7y): More (#4) from The Shadow World:
lukeprog (7y): More (#3) from The Shadow World:
lukeprog (7y): More (#2) from The Shadow World:
lukeprog (7y): More (#1) from The Shadow World:
lukeprog (7y): From Weiner's Enemies [http://www.amazon.com/Enemies-History-FBI-Tim-Weiner-ebook/dp/B006HUIZZO/]:
lukeprog (7y): More (#5) from Enemies:
lukeprog (7y): More (#4) from Enemies:
lukeprog (7y): More (#3) from Enemies:
lukeprog (7y): More (#2) from Enemies:
lukeprog (7y): More (#1) from Enemies:
lukeprog (7y): From Roose's Young Money [http://www.amazon.com/Young-Money-Streets-Post-Crash-Recruits-ebook/dp/B00CO7GH54/]:
lukeprog (7y): From Tetlock's Expert Political Judgment:
lukeprog (7y): More (#2) from Expert Political Judgment:
lukeprog (7y): More (#1) from Expert Political Judgment:
lukeprog (7y): From Sabin's The Bet [http://www.amazon.com/Bet-Paul-Sabin-ebook/dp/B00E64EGZQ/]:
lukeprog (7y): More (#3) from The Bet:
lukeprog (7y): More (#2) from The Bet:
lukeprog (7y): More (#1) from The Bet:
lukeprog (7y): From Yergin's The Quest [http://www.amazon.com/Quest-Energy-Security-Remaking-Modern-ebook/dp/B005JE2LN6/]:
lukeprog (7y): More (#7) from The Quest:
lukeprog (7y): More (#6) from The Quest:
lukeprog (7y): More (#5) from The Quest:
lukeprog (7y): More (#4) from The Quest:
lukeprog (7y): More (#3) from The Quest:
lukeprog (7y): More (#2) from The Quest:
lukeprog (7y): More (#1) from The Quest:
lukeprog (7y): From The Second Machine Age [http://www.amazon.com/Second-Machine-Age-Prosperity-Technologies-ebook/dp/B00D97HPQI/]:
lukeprog (7y): More (#1) from The Second Machine Age:
lukeprog (7y): From Making Modern Science [http://www.amazon.com/Making-Modern-Science-Historical-Survey-ebook/dp/B004GJXBKM/]:
lukeprog (7y): More (#1) from Making Modern Science:
lukeprog (7y): From Johnson's Where Good Ideas Come From [http://www.amazon.com/Where-Good-Ideas-Come-Innovation-ebook/dp/B003ZK58TA/]:
lukeprog (7y): From Gertner's The Idea Factory [http://www.amazon.com/Idea-Factory-Great-American-Innovation-ebook/dp/B005GSZIWG/]:
lukeprog (7y): More (#2) from The Idea Factory:
lukeprog (7y): More (#1) from The Idea Factory:
somervta (7y): I'm sure that I've seen your answer to this question somewhere before, but I can't recall where: of the audiobooks that you've listened to, which have been most worthwhile?
lukeprog (7y): I keep an updated list here [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/9zkx].
lukeprog (7y): I guess I might as well post quotes from (non-audio) books here as well, when I have no better place to put them. First up is Revolution in Science [http://www.amazon.com/Revolution-Science-I-Bernard-Cohen/dp/0674767772/]. Starting on page 45:
shminux (7y): This amazingly high percentage of self-proclaimed revolutionary scientists (30% or more) seems like a result of selection bias, since most scientists with oversized egos are not even remembered. I wonder what fraction of actual scientists (not your garden-variety crackpots) insist on having produced a revolution in science.
lukeprog (7y): From Sunstein's Worst-Case Scenarios [http://www.amazon.com/Worst-Case-Scenarios-Cass-R-Sunstein-ebook/dp/B001GS6ZMW/]:
lukeprog (7y): More (#2) from Worst-Case Scenarios:
lukeprog (7y): More (#5) from Worst-Case Scenarios:
lukeprog (7y): More (#4) from Worst-Case Scenarios:
lukeprog (7y): More (#3) from Worst-Case Scenarios: Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: "A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented." More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses - and fear is not good for your health.
lukeprog (7y): More (#1) from Worst-Case Scenarios: But at least so far in the book, Sunstein doesn't mention the obvious rejoinder about investing now to prevent existential catastrophe. Anyway, another quote:
lukeprog (7y): From Gleick's Chaos [http://www.amazon.com/Chaos-Making-Science-James-Gleick/dp/0143113453/]:
lukeprog (7y): More (#3) from Chaos:
lukeprog (7y): More (#2) from Chaos:
lukeprog (7y): More (#1) from Chaos:
lukeprog (7y): From Lewis' The Big Short [http://www.amazon.com/Big-Short-Inside-Doomsday-Machine-ebook/dp/B003LSTK8G/]:
lukeprog (7y): More (#4) from The Big Short:
lukeprog (7y): More (#3) from The Big Short:
lukeprog (7y): More (#2) from The Big Short:
lukeprog (7y): More (#1) from The Big Short:
lukeprog (7y): From Gleick's The Information [http://www.amazon.com/Information-History-Theory-Flood-ebook/dp/B004DEPHUC/]:
lukeprog (7y): More (#1) from The Information: And, an amusing quote:
lukeprog (7y): From Acemoglu & Robinson's Why Nations Fail [http://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/]:
lukeprog (7y): More (#2) from Why Nations Fail:
lukeprog (7y): More (#1) from Why Nations Fail:
lukeprog (7y): From Greenblatt's The Swerve: How the World Became Modern [http://www.amazon.com/The-Swerve-World-Became-Modern/dp/0393343405/]:
lukeprog (7y): More (#1) from The Swerve:
lukeprog (7y): From Aid's The Secret Sentry [http://www.amazon.com/Secret-Sentry-Matthew-M-Aid-ebook/dp/B002WOD8X8/]:
lukeprog (7y): More (#6) from The Secret Sentry:
lukeprog (7y): More (#5) from The Secret Sentry:
lukeprog (7y): More (#4) from The Secret Sentry:
lukeprog (7y): More (#3) from The Secret Sentry: Even when enemy troops and tanks overran the major South Vietnamese military base at Bien Hoa, outside Saigon, on April 26, Martin still refused to accept that Saigon was doomed. On April 28, Glenn met with the ambassador carrying a message from Allen ordering Glenn to pack up his equipment and evacuate his remaining staff immediately. Martin refused to allow this. The following morning, the military airfield at Tan Son Nhut fell, cutting off the last air link to the outside.
lukeprog (7y): More (#2) from The Secret Sentry:
lukeprog (7y): More (#1) from The Secret Sentry:
lukeprog (7y): From Mazzetti's The Way of the Knife:
lukeprog (7y): More (#5) from The Way of the Knife:
lukeprog (7y): More (#4) from The Way of the Knife:
lukeprog (7y): More (#3) from The Way of the Knife:
lukeprog (7y): More (#2) from The Way of the Knife:
lukeprog (7y): More (#1) from The Way of the Knife:
lukeprog (7y): From Freese's Coal: A Human History [http://www.amazon.com/Coal-Human-History-Barbara-Freese-ebook/dp/B001GXQN5Q/]:
lukeprog (7y): More (#2) from Coal: A Human History:
lukeprog (7y): More (#1) from Coal: A Human History:
lukeprog (7y): Passages from The Many Worlds of Hugh Everett III [http://www.amazon.com/Many-Worlds-Hugh-Everett-III-ebook/dp/B00BEW2QO6/]: (It wasn't until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
lukeprog (7y): A passage from Tim Weiner's Legacy of Ashes: The History of the CIA [http://www.amazon.com/Legacy-Ashes-History-CIA-ebook/dp/B0010SIPZ8/]:
lukeprog (7y): More (#1) from Legacy of Ashes:
lukeprog (7y): I shared one quote here [http://lesswrong.com/lw/iwf/open_thread_october_27_31_2013/9yxl]. More from Life at the Speed of Light [http://www.amazon.com/Life-Speed-Light-Double-Digital-ebook/dp/B00C1N5WRK/]:
lukeprog (7y): Also from Life at the Speed of Light:

Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.

This seems obviously false. Local expenditures - of money, pride, possibility of not being the first to publish, etc. - are still local, global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.

RSI capabilities could be charted, and are likely to be AI-complete.

This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "... (read more)

6Benya8yClimate change doesn't have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it". (Agree with the rest of the comment.)
2Eliezer Yudkowsky8yMany people believe that about climate change (due to global political disruption, economic collapse etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.
0Benya8yHm! I cannot recall a single instance of this. (Hm, well; I can recall one instance of a TV interview with a politician from a non-first-world island nation taking projections seriously which would put his nation under water, so it would not be much of a stretch to think that he's taking seriously the possibility that people close to him may die from this.) If you have, probably this is because I haven't read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update? Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being similarly well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today's situation with climate change if that happened... Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than (climate change as it has looked to me so far).
7Eliezer Yudkowsky8yWill keep an eye out for the next citation. This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles. People only avoid certain sorts of death risks under certain circumstances.
4Benya8yThanks! Point. Need to think.
3Eugine_Nier8yBeing told something is dangerous =/= believing it is =/= alieving it is.
2lukeprog8yRight. I'll clarify in the OP.
1[anonymous]8yThis seems implied by X-complete. X-complete generally means "given a solution to an X-complete problem, we have a solution for X". eg. NP complete: given a polynomial solution to any NP-complete problem, any problem in NP can be solved in polynomial time. (Of course the technical nuance of the strength of the statement X-complete is such that I expect most people to imagine the wrong thing, like you say.)
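To make the reduction idea in the comment above concrete, here is a toy sketch (my illustration, not from the thread): given a solver for one NP-complete problem — SAT, brute-forced here as a stand-in for a hypothetical efficient oracle — any problem in NP becomes solvable by translating it into that problem. The classic example is reducing graph 3-coloring to SAT.

```python
# Toy illustration of "X-complete": a solver for one NP-complete problem
# (SAT) yields a solver for any problem in NP, via reduction.
from itertools import product

def color_var(v, c):
    """1-indexed SAT variable meaning 'vertex v has color c' (c in 0..2)."""
    return 3 * v + c + 1

def three_coloring_to_sat(edges, n):
    """Encode 3-colorability of an n-vertex graph as CNF clauses.
    A clause is a list of nonzero ints: positive = var true, negative = negated."""
    # Each vertex must receive at least one color.
    clauses = [[color_var(v, c) for c in range(3)] for v in range(n)]
    # Adjacent vertices may not share a color.
    for u, v in edges:
        for c in range(3):
            clauses.append([-color_var(u, c), -color_var(v, c)])
    return clauses, 3 * n

def sat_solver(clauses, n_vars):
    """Brute-force stand-in for a SAT oracle; returns a satisfying assignment or None."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment
    return None
```

A single efficient `sat_solver` would make every such reduction efficient too — which is the correct technical content behind "X-complete", even if "AI-complete" in the original post is a looser analogy rather than a formal reduction.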

(I don't have answers to your specific questions, but here are some thoughts about the general problem.)

I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (but haven't thought about this enough to put numbers on this), though I too am not comforted, because I also assign a non-small chance to these parts going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".

That said, there are some steps in... (read more)

I personally am optimistic about the world's elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:

  1. I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.

  2. AI risk is a Global Catastrophic Risk i

... (read more)

The people with the most power tend to be the most rational people


7JonahS8yRationality is systematized winning [http://wiki.lesswrong.com/wiki/Rationality_is_systematized_winning]. Chance plays a role, but over time it's playing less and less of a role, because of more efficient markets.
9Decius8yThere is lots of evidence that people in power are the most rational, but there is a huger prior to overcome. Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power- but I don't think that very rational people are common and I think that they are less likely to want more power than they have. Particularly since the previous generation of power-holders used different factors when they selected their successors.
2JonahS8yI agree with all of this. I think that "people in power are the most rational" was much less true in 1950 than it is today, and that it will be much more true in 2050.
6elharo8yActually that's a badly titled article. At best "Rationality is systematized winning" applies to instrumental, not epistemic, rationality. And even for that you can't make rationality into systematized winning by defining it so. Either that's a tautology (whatever systematized winning is, we define that as "rationality") or it's an empirical question. I.e. does rationality lead to winning? Looking around the world at "winners", that seems like a very open question. And now that I think about it, it's also an empirical question whether there even is a system for winning. I suspect there is--that is, I suspect that there are certain instrumental practices one can adopt that are generically useful for achieving a broad variety of life goals--but this too is an empirical question we should not simply assume the answer to.
1JonahS8yI agree that my claim isn't obvious. I'll try to get back to you with detailed evidence and arguments.
5ChrisHallquist7yThe problem is that politicians have a lot to gain from really believing the stupid things they have to say to gain and hold power. To quote an old thread [http://lesswrong.com/lw/31i/have_no_heroes_and_no_villains/]: Cf. Steven Pinker: historians who've studied Hitler tend to come away convinced he really believed he was a good guy. To get the fancy explanation of why this is the case, see "Trivers' Theory of Self-Deception." [http://lesswrong.com/lw/6mj/trivers_on_selfdeception/]
6lukeprog8yIt's not much evidence, but the two earliest scientific investigations of existential risk I know of, LA-602 [http://www.sciencemadness.org/lanl1_a/lib-www/la-pubs/00329010.pdf] and the RHIC Review [http://www.bnl.gov/RHIC/docs/rhicreport.pdf], seem to show movement in the opposite direction [http://lesswrong.com/lw/rg/la602_vs_rhic_review/]: "LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations." Perhaps the trend you describe is accurate, but I also wouldn't be surprised to find out (after further investigation) that scientists are now increasingly likely to avoid serious analysis of real risks posed by their research, since they're more worried than ever before about funding for their field (or, for some other reason). The AAAI Presidential Panel on Long-Term AI Futures [http://research.microsoft.com/en-us/um/people/horvitz/panel_chairs_ovw.pdf] was pretty disappointing, and like the RHIC Review seems like pure public relations, with a pre-determined conclusion and no serious risk analysis.
3ryjm8yWhy would a good AI policy be one which takes as a model a universe where world destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh. This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial). I think you're just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say), and especially of the kind needed to properly handle AI - and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don't give two shits about AI risk - if they don't think it worthy of attention, why would someone who has no experience with these kind of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren't thinking about it now - why are you confident this won't be the case in the future? 
Thinking about AI requires a rather large conceptual leap - "rationality" is necessary but not sufficient, so even if all powerful people were "rational" it doesn't follow that they can deal with these issues properly or even single them out as something
1JonahS8yThanks for engaging. The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well. I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand. Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue. Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power. Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long. I agree that AI safety requires a substantial shift in perspective — what I'm claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent. You don't need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn't the most prestigious field. If political leaders are sufficiently rational (as I expect them to be), they'll give research grants and prestige to people who work on AI safety.
3wubbles8yThings were a lot worse than everyone knew: Russia almost invaded Yugoslavia in the 1950s, which, according to newly declassified NSA journals, would have triggered a war. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
0JonahS8yI agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
3Baughn8yWorld war three seems certain to significantly decrease human population. From my point of view, I can't eliminate anthropic reasoning for why there wasn't such a war before I was born.
2Desrtopa8yWe still get people occasionally who argue the point while reading through the Sequences, and that's a heavily filtered audience to begin with.
3JonahS8yThere's a difference between "sufficiently difficult so that a few readers of one person's exposition can't follow it" and "sufficiently difficult so that after being in the public domain for 30 years, the arguments won't have been distilled so as to be accessible to policy makers." I don't think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I'd concede that this is not immediately obvious.
1hairyfigment8yAnd I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Petrov chose not to report a malfunction of the early warning system until he could prove it was a malfunction. People during the Korean war [http://en.wikipedia.org/wiki/Dogfight#Korean_War] and possibly Vietnam seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them). This in fact is part of why I don't think we 'survived' through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival. And rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information. This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
0JonahS8yAs I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchange by now than actually happened, and in view of this, I updated in the direction of things being more likely to go well than I would have thought. I'm not saying "the fact that there haven't been nuclear exchanges means that destructive things can't happen." I was using the nuclear war thing as one of many [http://lesswrong.com/lw/hmb/many_weak_arguments_vs_one_relatively_strong/] outside views, not as direct analogy. The AI situation needs to be analyzed separately — this is only one input.
-3FeepingCreature8yIt may be challenging to estimate the "actual, at the time" probability of a past event that would quite possibly have resulted in you not existing. Survivor bias may play a role here.
1JonahS8yNuclear war would have to be really, really big to kill a majority of the population, and probably even if all weapons were used the fatality rate would be under 50% (with the uncertainty coming from nuclear winter [http://en.wikipedia.org/wiki/Nuclear_winter]). Note that most residents of Hiroshima and Nagasaki survived the 1945 bombings [http://www.aasc.ucla.edu/cab/200708230009.html], and that fewer than 60% of people live in cities [http://www.who.int/gho/urban_health/situation_trends/urban_population_growth_text/en/] .
0elharo8yIt depends on the nuclear war. An exchange of bombs between India and Pakistan probably wouldn't end human life on the planet. However an all-out war between the U.S. and the U.S.S.R in the 1980s most certainly could have. Fortunately that doesn't seem to be a big risk right now. 30 years ago it was. I don't feel confident in any predictions one way or the other about whether this might be a threat again 30 years from now.
7JonahS8yWhy do you think this?
0elharo8yBecause all the evidence I've read or heard (most of it back in the 1980s) agreed on this. Specifically, in a likely exchange between the U.S. and the USSR, the northern hemisphere would have been rendered completely uninhabitable within days. Humanity in the southern hemisphere would probably have lasted somewhat longer, but still would have been destroyed by nuclear winter and radiation. Details depend on the exact distribution of targets. Remember Hiroshima and Nagasaki were two relatively small fission weapons. By the 1980s the USSR and the US each had enough much bigger fusion bombs to individually destroy the planet. The only question was how many each would use in an exchange and where they would be targeted.
4JonahS8yThis is mostly out of line with what I've read. Do you have references?
0FeepingCreature8yI'm not sure what the correct way to approach this would be. I think it may be something like comparing the number of people in your immediate reference class - depending on preference, this could be "yourself precisely" or "everybody who would make or have made the same observation as you" - and then ask "how would nuclear war affect the distribution of such people in that alternate outcome". But that's only if you give each person uniform weighting of course, which has problems of its own.
1JonahS8ySure, these things are subtle — my point was that the numbers who would have perished isn't very large in this case, so that under a broad class of assumptions, one shouldn't take the observed absence of nuclear conflict to be a result of survivorship bias.
[-][anonymous]8y 6

The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.

Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.


-2Eugine_Nier8yI agree. Of course the article you linked to ultimately attempts to argue for trusting semi-competent world leaders.
0[anonymous]8yIt alludes to such an argument and sympathizes with it. Note I also "made the argument" that civilization should be dismantled. Personally I favor the FAI solution, but I tried to make the post solution-agnostic and mostly demonstrate where those arguments are coming from, rather than argue any particular one. I could have made that clearer, I guess. Thanks for the feedback.

I think there's a >15% chance AI will not be preceded by visible signals.

Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.

2jsalvatier8yI interpreted that as 'visible signals of danger', but I could be wrong.

Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.

Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.

Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.

These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.

One question is whether AI is like CFCs, or like CO2, or like hacking.

With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.

With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because ... (read more)

Here are my reasons for pessimism:

  1. There are likely to be effective methods of controlling AIs that are of subhuman or even roughly human-level intelligence which do not scale up to superhuman intelligence. These include for example reinforcement by reward/punishment, mutually beneficial trading, legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately the existence of effective but unscalable methods of AI control will probably lull el

... (read more)

Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.

0wedrifid8yPerhaps someone could convince congress that "Terrorists" had developed "geomagnetic weaponry" and new "geomagnetic defence systems" need to be implemented urgently. (Being seen to be) taking action to defend against the hated enemy tends to be more motivating than worrying about actual significant risks.

Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?

Unlike nuclear weapons, a single AI might have world ending consequences, and an AI requires no special resources. Theoretically a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.

8Manfred8yIf the friendly AI comes first, the goal is for it to always have enough resources to be able to stop unsafe AIs from being a big risk.
2Benya8yUpvoted, but "always" is a big word. I think the hope is more for "as long as it takes until humanity starts being capable of handling its shit itself"...
3Benya8yWhy the downvotes? Do people feel that "the FAI should at some point fold up and vanish out of existence" is so obvious that it's not worth pointing out? Or disagree that the FAI should in fact do that? Or feel that it's wrong to point this out in the context of Manfred's comment? (I didn't mean to suggest that Manfred disagrees with this, but felt that his comment was giving the wrong impression.)
5Pentashagon8yWill sentient, self-interested agents ever be free from the existential risks of UFAI/intelligence amplification without some form of oversight? It's nice to think that humanity will grow up and learn how to get along, but even if that's true for 99.9999999% of humans that leaves 7 people from today's population who would probably have the power to trigger their own UFAI hard takeoff after a FAI fixes the world and then disappears. Even if such a disaster could be stopped it is a risk probably worth the cost of keeping some form of FAI around indefinitely. What FAI becomes is anyone's guess but the need for what FAI does will probably not go away. If we can't trust humans to do FAI's job now, I don't think we can trust humanity's descendants to do FAI's job either, just from Löb's theorem. I think it is unlikely that humans will become enough like FAI to properly do FAI's job. They would essentially give up their humanity in the process.
3Eliezer Yudkowsky8yA secure operating system for governed matter doesn't need to take the form of a powerful optimization process, nor does verification of transparent agents trusted to run at root level. Benja's hope seems reasonable to me.
6Wei_Dai8yThis seems non-obvious. (So I'm surprised to see you state it as if it was obvious. Unless you already wrote about the idea somewhere else and are expecting people to pick up the reference?) If we want the "secure OS" to stop posthumans from running private hell simulations, it has to determine what constitutes a hell simulation and successfully detect all such attempts despite superintelligent efforts at obscuration. How does it do that without being superintelligent itself? This sounds interesting but I'm not sure what it means. Can you elaborate?
4Eliezer Yudkowsky8yHm, that's true. Okay, you do need enough intelligence in the OS to detect certain types of simulations / and/or the intention to build such simulations, however obscured. If you can verify an agent's goals (and competence at self-modification), you might be able to trust zillions of different such agents to all run at root level, depending on what the tiny failure probability worked out to quantitatively.
0Pentashagon8yThat means each non-trivial agent would become the FAI for its own resources. To see the necessity of this imagine what initial verification would be required to allow an agent to simulate its own agents. Restricted agents may not need a full FAI if they are proven to avoid simulating non-restricted agents, but any agent approaching the complexity of humans would need the full FAI "conscience" running to evaluate its actions and interfere if necessary. EDIT: "interfere" is probably the wrong word. From the inside the agent would want to satisfy the FAI goals in addition to its own. I'm confused about how to talk about the difference between what an agent would want and what an FAI would want for all agents, and how it would feel from the inside to have both sets of goals.
1Benya8yI'd hope so, since I think I got the idea from you :-) This is tangential to what this thread is about, but I'd add that I think it's reasonable to have hope that humanity will grow up enough that we can collectively make reasonable decisions about things affecting our then-still-far-distant future. To put it bluntly, if we had an FAI right now I don't think it should be putting a question like "how high is the priority of sending out seed ships to other galaxies ASAP" to a popular vote, but I do think there's reasonable hope that humanity will be able to make that sort of decision for itself eventually. I suppose this is down to definitions, but I tend to visualize FAI as something that is trying to steer the future of humanity; if humanity eventually takes on the responsibility for this itself, then even if for whatever reason it decides to use a powerful optimization process for the special purpose of preventing people from building uFAI, it seems unhelpful to me to gloss this without more qualification as "the friendly AI [... will always ...] stop unsafe AIs from being a big risk", because the latter just sounds to me like we're keeping around the part where it steers the fate of humanity as well.
0Benya8yThanks for explaning the reasoning! I do agree that it seems quite likely that even in the long run, we may not want to modify ourselves so that we are perfectly dependable, because it seems like that would mean getting rid of traits we want to keep around. That said, I agree with Eliezer's reply [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/94e1] about why this doesn't mean we need to keep an FAI around forever; see also my comment here [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/94e9] . I don't think Löb's theorem enters into it. For example, though I agree that it's unlikely that we'd want to do so, I don't believe Löb's theorem would be an obstacle to modifying humans in a way making them super-dependable.

The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."

What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?

If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?

0timtyler8yWe see pretty big boosts already, IMO - largely by facilitating networking effects. Idea recombination and testing happen faster on the internet.
[-][anonymous]5y 0

@Lukeprog, can you

(1) update us, in brief, on your working answers to the posed questions? (2) share your current confidence (and, if you would like to, by proxy, MIRI's confidence as an organisation) in each of the following three claims:

Elites often fail to take effective action despite plenty of warning.

I think there's a >10% chance AI will not be preceded by visible signals.

I think the elites' safety measures will likely be insufficient.

Thank you for your diligence.

There's another reason for hope here that global warming lacked: the idea of a dangerous AI is already established in the public eye as something we need to be careful about. A big problem the global warming movement had, and is still having, is convincing the public that it's a threat in the first place.

Whom do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the kind of social change that shakes up the social hierarchy.

[-][anonymous]8y -3

Combining the beginning and the end of your questions reveals an answer.

Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?

Answer how "just fine" any of these turned out, and you have analogous answers.

You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.