«Boundaries» Sequence (Index Post)

by Andrew_Critch
26th Jul 2022

In an upcoming short sequence of posts, I'm going to try to describe a causal pathway from 

  1. a key missing idea in the utility-theoretic foundations of game theory, leading to
  2. some problems I think I see in effective altruism discourse, leading further to
  3. gaps in some approaches to AI alignment, and finally, 
  4. implications for existential risk.

By default, I'll write one post for each of the above points, since they have different epistemic statuses and can be debated separately.  Posts 1 and 3 will be somewhat technical and research-oriented, and cross-posted to the Alignment Forum, whereas 2 and 4 will be non-technical and community-oriented, and cross-posted to the EA Forum.  After that there might be more posts in the sequence, depending on the ensuing conversation.  In any case, I'll try to keep this index post updated with the full sequence.

Here goes!

ETA Oct 31, 2022: There is now a proper sequence index for this sequence :)

→ «Boundaries» Sequence

Comments

Alex_Altair:

In case you missed it, LW 2.0 has feature support for creating sequences. If you hover over your username, the menu has a link to https://www.lesswrong.com/sequencesnew