
Remmelt

Research coordinator of the Stop/Pause area at AI Safety Camp.

See the explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

 

Posts (sorted by newest)
- Our bet on whether the AI market will crash (22 karma, 2mo, 2 comments)
- List of petitions against OpenAI's for-profit move (5 karma, 2mo, 1 comment)
- Crash scenario 1: Rapidly mobilise for a 2025 AI crash (12 karma, 3mo, 4 comments)
- Who wants to bet me $25k at 1:7 odds that there won't be an AI market crash in the next year? (32 karma, 3mo, 19 comments)
- We’re not prepared for an AI market crash (51 karma, 3mo, 12 comments)
- OpenAI lost $5 billion in 2024 (and its losses are increasing) (26 karma, 3mo, 15 comments)
- CoreWeave Is A Time Bomb (5 karma, 3mo, 0 comments)
- Map of all 40 copyright suits v. AI in U.S. (40 karma, 3mo, 3 comments)
- We don't want to post again "This might be the last AI Safety Camp" (36 karma, 5mo, 17 comments)
- What do you mean with ‘alignment is solvable in principle’? [Question] (3 karma, 6mo, 9 comments)