idk_______
Comments

Sorted by
Newest
No posts to display.
No wikitag contributions to display.
Existential AI Safety is NOT separate from near-term applications
idk_______ · 3y · 21

Without enough focus on, or at least collaboration with, short-term concerns, there will be an existential risk. Not from AI, from humanity. I would not be surprised if shit popped off long before AI even got a chance to kill us.
