Human Alignment

Edited by Jordan Arel, last updated 6th Dec 2022

Human alignment is a state of humanity in which most or all of humanity systematically cooperates to achieve positive-sum outcomes for everyone (or, at a minimum, is prevented from pursuing negative-sum outcomes), in a way that is perpetually sustainable into the future. Such a state of human alignment may be necessary to prevent an existential catastrophe if the "Vulnerable World Hypothesis" is correct.

Posts tagged Human Alignment
3. Uploading (RogerDearnaley, 2y)
What's the best way to streamline two-party sale negotiations between real humans? (Isaac King, 2y)
Notes on Righteousness and Megalopsychia (David Gross, 4mo)
Paradigm-building from first principles: Effective altruism, AGI, and alignment (Cameron Berg, 4y)
How to Promote More Productive Dialogue Outside of LessWrong (sweenesm, 2y)
What can we learn from parent-child-alignment for AI? (Karl von Wendt, 2d)
Great Empathy and Great Response Ability (positivesum, 2y)
How "Pinky Promise" diplomacy once stopped a war in the Middle East (positivesum, 2y)
Arusha Perpetual Chicken—an unlikely iterated game (James Stephen Brown, 7mo)
Can you care without feeling? (Priyanka Bharadwaj, 5mo)
Love, Lies and Misalignment (Priyanka Bharadwaj, 3mo)
Tetherware #1: The case for humanlike AI with free will (Jáchym Fibír, 9mo)
AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts (Jordan Arel, 3y)
How I think about alignment and ethics as a cooperation protocol software (Burny, 1mo)
Open-ended/Phenomenal Ethics (TLDR) (Ryo, 2y)