AI Research Considerations for Human Existential Safety (ARCHES)

by habryka · 9th Jul 2020 · AI Alignment Forum · 1 min read

This is a linkpost for https://arxiv.org/pdf/2006.04948.pdf

Andrew Critch's (Academian) and David Krueger's review of 29 AI (existential) safety research directions, each with an illustrative analogy, examples of current work and potential synergies between research directions, and discussion of ways the research approach might lower (or raise) existential risk.

7 comments, sorted by top scoring

Ben Pace · 5y

Wow, this is long, and seems pretty detailed and interesting. I'd love to see someone write a selection of key quotes or a summary.

Rohin Shah · 5y

Highlighted in AN #103 with a summary, though it didn't go into the research directions (because it would have become too long, and I thought the intro + categorization was more important on average).

Ben Pace · 5y

Thank you!

David Scott Krueger (formerly: capybaralet) · 5y

There is now also an interview with Critch here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/

Ben Pace · 5y

I listened to this yesterday! It was quite interesting; I'm glad I did.

MaxRa · 5y

Really enjoyed reading this. The section on "AI pollution" leading to a loss of control over the development of prepotent AI really interested me.

Avoiding [the risk of uncoordinated development of Misaligned Prepotent AI] calls for well-deliberated and respected assessments of the capabilities of publicly available algorithms and hardware, accounting for whether those capabilities have the potential to be combined to yield MPAI technology. Otherwise, the world could essentially accrue “AI-pollution” that might eventually precipitate or constitute MPAI.

  • I wonder how realistic it is to predict this, e.g. would you basically need the knowledge to build such a system to have a good sense of that potential?
  • I also thought the idea of AI orgs dropping all their work once the potential for this concentrates in another org is relevant here - are there concrete plans for when this happens?
  • Are there discussions about when AI orgs might want to stop publishing things? I only know of MIRI, but would they advise others like OpenAI or DeepMind to follow their example?

FactorialCode · 5y

Nitpick: is there a reason why the margins are so large?


Mentioned in
High Reliability Orgs, and AI Companies