mmKALLL
Comments
The Intelligence Symbiosis Manifesto - Toward a Future of Living with AI
mmKALLL · 2mo · 10

I think a few of the assumptions here don't quite match what's being proposed. As I understood it, the manifesto implies that we should spend more resources on solving alignment/control. It also doesn't imply that the coexistence would necessarily be equal; Yamakawa-sensei acknowledges that there is limited value humanity could provide to an almighty superintelligence. However, the manifesto encourages exploring avenues in which humanity could maintain some aspects of symbiosis or flourishing, even in the case that control fails.

One way to think about it is as finding alternatives to S-risks in worlds where ASI happens and is neither entirely aligned nor entirely misaligned, but somewhere in between. Depending on one's priors, the probability of that happening is not negligible.

Feedback wanted: Shortlist of AI safety ideas
mmKALLL · 2mo* · 10

Thank you for the suggestion. This is something I've both considered and done in practice, having received coaching from some of the people/orgs you mentioned. I should get in touch with OP to see if they'd like to fund any of this work, though, as that could be a useful feedback signal.

Thank you for sharing your knowledge of existing work in the comment below as well — it's super helpful!
