Biomedical Engineering > Philosophy of AI student trying to figure out how I can robustly set myself up to contribute meaningfully to making transformative AI go well, likely through AI governance and/or field building.
43. This situation you see when you look around you is not what a surviving world looks like.
A similar argument could have been made during the Cold War to claim that nuclear war was inevitable, yet here we are.
Is it confirmed that the meetup will be tomorrow? There seemed to be some uncertainty at the Eindhoven meetup...
Thanks for this post! I've been thinking a lot about AI governance strategies and their robustness/tractability lately, much of which feels like a close match to what you've written here.
For many AI governance strategies, I think we are more clueless than many seem to assume about whether a given strategy ends up positively shaping the development of AI or backfiring in some foreseen or unforeseen way. There are many crucial considerations for AI governance strategies; miss one or get one wrong, and the whole strategy can fall apart or even become actively counterproductive. What I've been trying to do is:
I'm just winging it without much background in how such foresight-related work is normally done, so any thoughts or feedback on how to approach this kind of investigation, or on which existing foresight frameworks you think would be particularly helpful here, would be very much appreciated!