Human Alignment

Edited by Jordan Arel last updated 6th Dec 2022

Human alignment is a state of humanity in which most or all of humanity systematically cooperates to achieve positive-sum outcomes for everyone (or, at a minimum, is prevented from pursuing negative-sum outcomes), in a way that is perpetually sustainable into the future. Such a state of human alignment may be necessary to prevent an existential catastrophe if the "Vulnerable World Hypothesis" is correct.

Posts tagged Human Alignment
13. Uploading (RogerDearnaley, 2y)
What's the best way to streamline two-party sale negotiations between real humans? (Isaac King, 3y)
Notes on Righteousness and Megalopsychia (David Gross, 4mo)
Paradigm-building from first principles: Effective altruism, AGI, and alignment (Cameron Berg, 4y)
How to Promote More Productive Dialogue Outside of LessWrong (sweenesm, 2y)
What can we learn from parent-child-alignment for AI? (Karl von Wendt, 20d)
Great Empathy and Great Response Ability (positivesum, 2y)
How "Pinky Promise" diplomacy once stopped a war in the Middle East (positivesum, 2y)
Arusha Perpetual Chicken—an unlikely iterated game (James Stephen Brown, 7mo)
Can you care without feeling? (Priyanka Bharadwaj, 6mo)
Love, Lies and Misalignment (Priyanka Bharadwaj, 3mo)
Tetherware #1: The case for humanlike AI with free will (Jáchym Fibír, 10mo)
AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts (Jordan Arel, 3y)
How I think about alignment and ethics as a cooperation protocol software (Burny, 2mo)
Open-ended/Phenomenal Ethics (TLDR) (Ryo, 2y)
(Showing 15 of 20 tagged posts.)