1.) Educated newspaper readers,

2.) high school graduates,

3.) journalists (writing for the politics section of the newspaper),

4.) your parents,

5.) politicians?


  1. Gell-Mann Amnesia, Hanson on news, Elephant in the Brain, Credibility of the CDC on Covid, maybe some of Zvi's covid stuff, and eventually the Sequences. Forget AI Risk; if their epistemics are so spectacularly bad that they're reading the newspaper then we have other foundations to fix before we get to AI.
  2. Intro to AI class, and the math necessary for it. Also some intro physics, maybe an operations research class on optimization, at least an intro class on probability, and enough programming to get comfortable with it. They're not going to be able to do anything useful about AI risk if they don't know at least some of the basics.
  3. Utterly beyond hope. You'd be better off trying to explain AI risk to new-agey homeopaths; at least they are simply wrong, rather than outsourcing their epistemics to click metrics.
  4. The Soviet nail factory story about Goodhart's Law, then just mention that AI is designed like that and we don't actually have a better solution. They're not going to do anything useful about it anyway, so a very minimal 60-second explanation is fine, and they're both already the sort of people who will immediately recognize Goodhart's Law as a phenomenon.
  5. Politicians... mostly don't really perceive the parts of the world in which AI risk lives. AI risk is a physical-world phenomenon, not a status or signalling mechanism, except insofar as it's associated with the fairly niche rationalist/EA group. You're not really going to be able to communicate to a politician about AI risk itself, any more than you can communicate to a cow about radio signals. At best, you can maybe communicate to them that there is a Rich Interest Group which cares about something to do with the letter-string "AI risk", and that interest group may sometimes donate money if the politician sometimes says the words "AI risk" and promises to vote for some vague AI regulation which nobody has figured out yet.