LESSWRONG

Top Questions

133 · Forecasting Thread: AI Timelines [Q, Ω] · Amandango, Daniel Kokotajlo, Ben Pace, datscilly · 3y · 96 comments
46 · Challenge: Does ChatGPT ever claim that a bad outcome for humanity is actually good? [Q] · Yair Halberstadt · 4d · 27 comments
149 · Where do (did?) stable, cooperative institutions come from? [Q] · AnnaSalamon, Liron · 2y · 76 comments
65 · What happened to the OpenPhil OpenAI board seat? [Q] · ChristianKl · 11d · 2 comments
60 · What's in your list of unsolved problems in AI alignment? [Q] · jacquesthibs · 19d · 6 comments

Recent Activity

133 · Forecasting Thread: AI Timelines [Q, Ω] · Amandango, Daniel Kokotajlo, Ben Pace, datscilly · 3y · 96 comments
3 · How Politics interacts with AI? [Q] · qbolec · 6h · 1 comment
10 · How to model uncertainty about preferences? [Q] · quetzal_rainbow · 2d · 1 comment
2 · Seeking Advice on Raising AI X-Risk Awareness on Social Media [Q] · ViktorThink · 2d · 1 comment
26 · Alignment-related jobs outside of London/SF [Q] · Ariel Kwiatkowski · 3d · 13 comments
9 · What does the economy do? [Q] · tailcalled · 2d · 16 comments
-3 · Why Carl Jung is not popular in AI Alignment Research? [Q] · whitehatStoic · 9d · 13 comments
20 · Are we too confident about unaligned AGI killing off humanity? [Q] · RomanS · 20d · 63 comments
13 · Are robotics bottlenecked on hardware or software? [Q] · tailcalled · 5d · 12 comments
13 · Why not constrain wetlabs instead of AI? [Q] · Lone Pine · 5d · 10 comments
18 · Can independent researchers get a sponsored visa for the US or UK? [Q] · jacquesthibs · 2d · 0 comments
-41 · Genuine question: If Eliezer is so rational why is he fat? [Q] · DirichletConvolution, Raemon · 4d · 6 comments