rohinmshah's Shortform
by Rohin Shah · 18th Jan 2020 · AI Alignment Forum · 1 min read
This is a special post for quick takes by Rohin Shah. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Mentioned in:
- [AN #98]: Understanding neural net training by seeing which gradients were helpful
- [AN #159]: Building agents that know how to experiment, by training on procedurally generated games
- [AN #96]: Buck and I discuss/argue about AI Alignment
- [Crosspost] A recent write-up of the case for AI (existential) risk