I've been a lurker here and have learned a lot from this community. Today I would like to share one (slightly crazy) idea for increasing the probability of good outcomes from ASI. This post will likely be controversial, but I believe the idea deserves serious consideration given the stakes involved.
ASI will probably kill us all. It is often said that AI differs from other destructive technologies in that we get a single shot at building an aligned ASI. Humans mostly learn by first making mistakes; with AI, however, the first catastrophe seems likely to also be the last, since it would annihilate all of humanity. But what if it did not?
Our best move to survive as a species might then be the following: to engineer an AI catastrophe ourselves. In other words, we should vaccinate humanity against ASI. This could involve taking a powerful AI (but not too powerful) and releasing it into the wild with an explicitly evil purpose. This is of course extremely risky, but at least the risks would be controlled, since we are the ones engineering it. More concretely, one could take a future agentic AI with a long task horizon (but not too long) and give it instructions such as "kill as many humans as possible".
This sounds crazy, but thought through rationally, it may actually be humanity's best move for survival. Once such an AI is released into the wild and begins destroying society, humanity would have to unite against it, much as it united against Nazi Germany. Crucially, an engineered catastrophe differs from a natural one: by controlling the initial conditions, we dramatically increase the probability of eventual human victory. The catastrophe should be calibrated so that humanity eventually prevails after a long and grueling fight. Enormous destruction would result, comparable, say, to World War II, but as long as it is not total, humanity would rebuild stronger.

After such a traumatizing event, every human and every government would be acutely aware of the dangers of AI. The world would then have a much better chance of uniting and being far more careful about building safe ASI (or deciding never to build it at all). After all, World War II arguably underwrites much of the peace we have enjoyed in recent decades, so one may argue that the destruction it caused was worth it. If we believe that everyone dying under the current trajectory is a near certainty, then vaccination, crazy as it seems, becomes the best strategy.