Remmelt

Research coordinator of the Stop/Pause area at AI Safety Camp.

See my explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable


Posts

Evolution is dumb and slow, right? (12 karma · 1d · 0 comments)
MAGA speakers at NatCon were mostly against AI (152 karma · 10d · 71 comments)
Hawley: AI Threatens the Working Man (3 karma · 10d · 1 comment)
Invitation to lead a project at AI Safety Camp (Virtual Edition, 2026) (7 karma · 11d · 0 comments)
Hunger strike #2, this time in front of DeepMind (25 karma · 12d · 0 comments)
AI Safety Camp 10 Outputs (18 karma · 12d · 0 comments)
Hunger strike in front of Anthropic by one guy concerned about AI risk (11 karma · 13d · 4 comments)
Anthropic's leading researchers acted as moderate accelerationists (115 karma · 16d · 69 comments)
Some mistakes in thinking about AGI evolution and control (7 karma · 2mo · 0 comments)
Deconfusing ‘AI’ and ‘evolution’ (12 karma · 2mo · 9 comments)