Human Alignment

Edited by Jordan Arel, last updated 6th Dec 2022

Human alignment is a state in which most or all of humanity systematically cooperates to achieve positive-sum outcomes for everyone (or, at a minimum, is prevented from pursuing negative-sum outcomes), in a way that can be sustained indefinitely into the future. Such a state may be necessary to prevent an existential catastrophe if the "Vulnerable World Hypothesis" is correct.
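The distinction between positive-sum and negative-sum outcomes can be made concrete with a toy payoff example. The sketch below is illustrative only; the function name and the payoff numbers are invented for this example and are not drawn from the tag text or the tagged posts. It classifies a joint outcome by whether the parties' total payoff exceeds, matches, or falls short of a status-quo baseline:

```python
# Illustrative sketch only: the function and payoff numbers below are invented
# for this example and do not come from the tag text or the tagged posts.

def classify_outcome(payoffs, baseline=0.0):
    """Label a joint outcome by comparing the parties' total payoff to a baseline."""
    total = sum(payoffs)
    if total > baseline:
        return "positive-sum"
    if total < baseline:
        return "negative-sum"
    return "zero-sum"

# Hypothetical payoffs (party A, party B) under different joint choices:
outcomes = {
    "mutual cooperation": (3, 3),    # both gain -> positive-sum
    "pure redistribution": (5, -5),  # one side's gain is the other's loss -> zero-sum
    "mutual conflict": (-2, -4),     # both lose -> negative-sum
}

for name, payoffs in outcomes.items():
    print(f"{name}: {classify_outcome(payoffs)}")
```

In these terms, human alignment is the project of steering most interactions into the first category, and reliably preventing the third, in a way that can be sustained over time.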

Posts tagged Human Alignment
- 3. Uploading (Ω) by RogerDearnaley, 2y · 21 karma · 5 comments
- What's the best way to streamline two-party sale negotiations between real humans? (Q) by Isaac King, 2y · 15 karma · 21 comments
- Notes on Righteousness and Megalopsychia by David Gross, 2mo · 12 karma · 0 comments
- Paradigm-building from first principles: Effective altruism, AGI, and alignment by Cameron Berg, 4y · 29 karma · 5 comments
- How to Promote More Productive Dialogue Outside of LessWrong by sweenesm, 2y · 19 karma · 4 comments
- Great Empathy and Great Response Ability by positivesum, 2y · 16 karma · 0 comments
- How "Pinky Promise" diplomacy once stopped a war in the Middle East by positivesum, 2y · 15 karma · 9 comments
- Arusha Perpetual Chicken—an unlikely iterated game by James Stephen Brown, 5mo · 15 karma · 1 comment
- Can you care without feeling? by Priyanka Bharadwaj, 4mo · 13 karma · 2 comments
- Love, Lies and Misalignment by Priyanka Bharadwaj, 1mo · 6 karma · 1 comment
- Tetherware #1: The case for humanlike AI with free will by Jáchym Fibír, 8mo · 5 karma · 14 comments
- AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts by Jordan Arel, 3y · 4 karma · 2 comments
- Open-ended/Phenomenal Ethics (TLDR) by Ryo, 2y · 3 karma · 0 comments
- How Microsoft's ruthless employee evaluation system annihilated team collaboration. by positivesum, 2y · 3 karma · 2 comments
- The case for "Generous Tit for Tat" as the ultimate game theory strategy by positivesum, 2y · 2 karma · 3 comments