LESSWRONG
Top Questions
49 · Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment? · P., TekhneMakre · 23d · 46 comments
61 · What’s the contingency plan if we get AGI tomorrow? · Yitz, Quintin Pope · 8d · 24 comments
130 · Forecasting Thread: AI Timelines · Amandango, Daniel Kokotajlo, Ben Pace, datscilly · 2y · 94 comments
91 · why assume AGIs will optimize for fixed goals? · nostalgebraist, Rob Bensinger · 21d · 52 comments
60 · Convince me that humanity *isn’t* doomed by AGI · Yitz, mukashi · 3mo · 53 comments
Recent Activity
1 · AGI alignment with what? · AlignmentMirror · 36m · 0 comments
2 · What is the contrast to counterfactual reasoning? · Dominic Roser · 3h · 1 comment
2 · Cryonics-adjacent question · Flaglandbase · 12h · 1 comment
11 · How to Navigate Evaluating Politicized Research? · Davis_Kingsley · 5h · 1 comment
4 · What's the goal in life? · Konstantin Weitz · 13d · 6 comments
13 · How would public media outlets need to be governed to cover all political views? · ChristianKl · 2mo · 14 comments
5 · How should I talk about optimal but not subgame-optimal play? · JamesFaville · 21h · 1 comment
33 · Are long-form dating profiles productive? · AABoyles, Lukas_Gloor · 4d · 29 comments
8 · What is the LessWrong Logo(?) Supposed to Represent? · DragonGod · 3d · 6 comments
6 · Correcting human error vs doing exactly what you're told - is there literature on this in context of general system design? · Jan Czechowski · 2d · 0 comments
-15 · Should any human enslave an AGI system? · AlignmentMirror · 6d · 44 comments
36 · What is the typical course of COVID-19? What are the variants? · Elizabeth · 2y · 29 comments