LESSWRONG
Top Questions
Forecasting Thread: AI Timelines [Q][Ω] · Amandango, Daniel Kokotajlo, Ben Pace, datscilly · 3y · 133 karma · 96 comments
Challenge: Does ChatGPT ever claim that a bad outcome for humanity is actually good? [Q] · Yair Halberstadt · 4d · 46 karma · 27 comments
Where do (did?) stable, cooperative institutions come from? [Q] · AnnaSalamon, Liron · 2y · 149 karma · 76 comments
What happened to the OpenPhil OpenAI board seat? [Q] · ChristianKl · 11d · 65 karma · 2 comments
What's in your list of unsolved problems in AI alignment? [Q] · jacquesthibs · 19d · 60 karma · 6 comments
Recent Activity
Forecasting Thread: AI Timelines [Q][Ω] · Amandango, Daniel Kokotajlo, Ben Pace, datscilly · 3y · 133 karma · 96 comments
How Politics interacts with AI? [Q] · qbolec · 6h · 3 karma · 1 comment
How to model uncertainty about preferences? [Q] · quetzal_rainbow · 2d · 10 karma · 1 comment
Seeking Advice on Raising AI X-Risk Awareness on Social Media [Q] · ViktorThink · 2d · 2 karma · 1 comment
Alignment-related jobs outside of London/SF [Q] · Ariel Kwiatkowski · 3d · 26 karma · 13 comments
What does the economy do? [Q] · tailcalled · 2d · 9 karma · 16 comments
Why Carl Jung is not popular in AI Alignment Research? [Q] · whitehatStoic · 9d · -3 karma · 13 comments
Are we too confident about unaligned AGI killing off humanity? [Q] · RomanS · 20d · 20 karma · 63 comments
Are robotics bottlenecked on hardware or software? [Q] · tailcalled · 5d · 13 karma · 12 comments
Why not constrain wetlabs instead of AI? [Q] · Lone Pine · 5d · 13 karma · 10 comments
Can independent researchers get a sponsored visa for the US or UK? [Q] · jacquesthibs · 2d · 18 karma · 0 comments
Genuine question: If Eliezer is so rational why is he fat? [Q] · DirichletConvolution, Raemon · 4d · -41 karma · 6 comments