If you write edgy emails, and they are discovered by HR during the correct moral panic, this could end your career - it would leave you blackpilled af on corporate/academia. Of course, most edgy emails never encounter the correct moral panic, or escape the eyes of whomever wants to cancel you. But an aggressor and the correct moral panic are not sufficient for you to get canceled - you need to be in a social position that is cancellable by your aggressor (or by the PR company of whomever your aggressor is sleeping with). You can’t cancel people that have all the power, or people that are powerless; you can only kill them or be killed by them, which is quite different. See Ho Chi Minh or McAfee.
Now replace sending an edgy email with developing a new technology and we have a metaphor going. I’m not saying it’s a great metaphor, but I am only aiming to provide you with something better than the best metaphor currently available - in part with the benefit of hindsight. I am going to use this metaphor to talk about ASI, herein meaning a non-human system that is more intelligent than any human for the purposes of achieving any measurable goal. (And assume that such a system could trivially become agentic and embodied.) Unlike most articles that I write, this one is aimed specifically at the EA/AI safety/LW/SSC crowd.
I - Nukes are not necessary
To many people it seems apparent that nukes are inevitable, but this is simply false. Nukes came about from a particularly nasty set of circumstances:
The world decided to ostracize the Germans, and the Germans went into a psychotic rage, murdering everyone
Part of the group they tried hardest to murder were autistic mathematician-physicists working on this interesting thing called fission (interesting because it could help us create novel materials and generate energy)
The US was led by a group of people just-ever-so-slightly-less-insane than the Germans
The autistic math guys convinced the psychos that the Germans were about to develop a fission weapon
The implications of how bad a fission weapon would be were obvious to everyone - but because we are dealing with a combination of politically-blind nerds (who were almost murdered) and sociopaths (who are at war with the wannabe murderers) - both under the impression that their mortal enemy is about to develop this technology - they spent 5 years and a significant % of GDP developing fission weapons
The weapons are used (we blame the sociopaths for this one) - the recipes for making them are handed out to everybody (we blame the nerds for this one)
The entity being handed these recipes is led by the world’s greatest serial killer - and a paranoid schizophrenic on top - who decides to spend double digits of his nation’s GDP on manufacturing these weapons and delivery methods for them
The Americans do the same
You have the current ICBM + fission weapon stockpiles

Also, I should note, this is not nearly as bad as things could have gotten. I.e. there is a hypothetical world with fission weapons that have 20-30x the yield of current ones, a world where there are millions not thousands, and a world where they are installed in satellites optimally placed to carpet bomb Earth. Humans could have gone as far as ensuring the eradication of essentially all land-based multicellular life with fission weapons, but … we sorta didn’t. And developing these weapons took insane amounts of effort, insane amounts of money, and a fair amount of luck. It happened during the world’s most destructive event, after 20 years of increasingly violent people killing their way into positions of power throughout the world. The vast majority of “configurations” of humanity end up ignoring nuclear weapons the same way we are presently ignoring the vast majority of “this could kill all humans” technologies.
Now, I’m not saying nukes wouldn’t have been developed e.g. in the 60s, but by the time the 60s roll around you have a few things going for you:
World leaders almost unanimously want to avoid murdering people - and are surrounded by other people that will keep their murdering in check
Many countries have developed technologies for precision strikes and assassination to halt any scale-up of nuclear development
Most scientists opposed the development of nuclear weapons
There are joint projects (e.g. eradicating hunger) that smart people prefer to coordinate around, making them uninterested in nukes
Supply chains are more globalized and better monitored, making it easy for a multinational agreement to make nuke or ICBM development very costly
There are frameworks for enforceable international agreements that will support efforts to stop any given party from developing nukes
This is hardly a hypothetical, i.e. this exact configuration has helped us not develop biological weapons - which are about as banal as a modified version of smallpox that’s been iteratively made vaccine-resistant in cell cultures, with variants then distributed to global caches (I am glossing over details here, but this project is hardly one that requires more than 100 smart people and a few billion dollars). There could be viruses engineered to kill ~80% of humans ready to deploy en masse by the US, Russia, and China right now. Yet they don’t exist, and their development is borderline impossible - not because they would cost a lot of money to develop, but because whistleblowers would surface the issue and both intra- and international pressure would severely harm the regimes developing them (+ everyone knows the other side could develop them in a pinch). Nukes exist primarily as an accident of fate that ended up normalizing them and making people afraid to disarm (and we still managed to convince people to destroy around 90% of them). Indeed, there is a world where Medvedev managed to wrestle power from Putin (note, Medvedev is not actually insane; the consensus seems to be he is forced to adopt his current media persona to avoid being killed for his coup-lite) and where Democrats don’t run an unpopular candidate in 2016 - and in this world it is conceivable that by now we might be on our way to fusion arsenals in the dozens.
I don’t know the probabilities for a world without nukes - indeed, I think reasoning about probabilities at this level makes little sense - I just know that it would have been possible, that you don’t need that much imagination to see how, and that there are likely dozens of paths we could have taken to achieve this. I am stressing this idea to counter an accidental psyop certain subcultures (e.g. EAs) have pulled on their members: the belief that, of course, the world was basically guaranteed to develop to a point where nuclear MAD is a thing. Nukes are an edgy email that we could have thought about 10 years later, and at that point it would have been harmless.
II - AI Narratives, Attention and Money
Nukes happened due to a (false) narrative that managed to direct the attention and money of people into a stupid project. A similar thing seems to have happened with “AI”. A few small communities became obsessed with the idea of human-like AI, primarily noting the danger such a technology would pose. Their reaction was a collective neurosis that added money and attention to this technology. Sufficient money and attention to build “proofs of concept” for this, in the form of modern multimodal language-centric models. It should be noted that the way we got to these models is very path-dependent. I.e. they came about as a result of massive investment in machine learning from investors and researchers that, instead of wanting to solve any particular problem, desired to create human-like intelligence. Indeed, the current research approaches are very inefficient for solving any particular problem, hence why they are leaking massive amounts of money and require double-digit percentages of GDP to be invested in infrastructure to support their scaling up to 2028 and beyond.
If we look at supervised learning techniques, wherein ML models had explicit “goals” mapping directly onto real-world tasks, these techniques lead to much less scary models (i.e. small and problem-specific) - which can still achieve most of the functions we need LLMs for, including:
Generating functional code from a very high-level DSL, enough to allow someone that “can’t code” to produce a large swath of applications within a few days of learning the tools
Translating between all languages of the world
Searching the internet and compiling results to almost any question
“Understanding” multimedia well enough to translate it into interpretable parameters for a variety of other models

Heck, even “language” can be solved by models that are much less scary - in that they have far fewer potential latent capabilities. A good example of this would be the RWKV architecture, which “flops” hard in a few areas that make it clearly not “generally intelligent” but can solve the vast majority of issues LLMs can - while being much closer to a complex probabilistic dictionary lookup than to a general intelligence. Indeed, it seems that anyone who wanted (or wants) to make money off “AI” would be better served investing their money into applying supervised learning techniques to business problems. The current investment in AI-as-in-general-and-superintelligent comes from a narrative of future value-capture by whomever is first to develop this technology - and looks awfully uneconomical from any other point of view. I.e. the current trend of AI investment is premised on the idea of AI taking over the world.
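To make the contrast concrete, here is a deliberately minimal sketch of what a “problem-specific” supervised model looks like - a bag-of-words logistic regression trained for exactly one explicit goal, with no latent capabilities to speak of. The data and features are invented purely for illustration:

```python
import math

# Toy hand-labeled dataset (hypothetical): 1 = positive, 0 = negative.
DATA = [
    ("great product works well", 1),
    ("terrible broken waste", 0),
    ("love it great value", 1),
    ("awful terrible refund", 0),
    ("works great love it", 1),
    ("broken waste refund", 0),
]

# Bag-of-words vocabulary built from the training data.
VOCAB = sorted({w for text, _ in DATA for w in text.split()})

def featurize(text):
    # One binary feature per vocabulary word.
    words = set(text.split())
    return [1.0 if w in words else 0.0 for w in VOCAB]

def train(data, epochs=200, lr=0.5):
    # Plain logistic regression fit by stochastic gradient descent.
    weights = [0.0] * len(VOCAB)
    bias = 0.0
    for _ in range(epochs):
        for text, label in data:
            x = featurize(text)
            z = bias + sum(w * xi for w, xi in zip(weights, x))
            pred = 1.0 / (1.0 + math.exp(-z))
            err = pred - label
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def predict(weights, bias, text):
    x = featurize(text)
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if z > 0 else 0

weights, bias = train(DATA)
print(predict(weights, bias, "great value love it"))   # 1
print(predict(weights, bias, "broken terrible waste")) # 0
```

The point of the sketch: the entire “capability surface” of such a model is its explicit goal. It cannot write code, plan, or persuade - which is exactly the property that made the supervised-learning era so much less scary.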
All of these uneconomical AI companies built on the premise of taking over the world have been started by nerds from loosely the same community, after being convinced that AI poses an existential threat to humans. This is quite similar to the Hungarian physicists creating the fission bomb under the premise that fission bombs pose an existential threat to humanity (note that back then it was hard to predict how dangerous these weapons were, and they ended up falling on the dangerous end of the spectrum). Indeed, the stance of the Hungarian physicists was much more sane than that of the AI company founders. They assumed (partially correctly) that past a certain critical mass of weapons, any rogue actors could be punished for developing them further, and the threat of mutual assured harm would stop anyone from creating more of them. It should be noted that none of the current AI company founders have such a narrative; as far as I can tell, their (public) positions are:
Elon: Human race might be bad and can certainly be afforded as casualty in the process of spreading silicon-based intelligence into space
Sam: The human race is not a salient concept when thinking about my P/E ratio which should be good because I need to IPO
Dario: The human race should ideally be kept in a nice set of zoos, and I will be sad if they all die instead

This should be contrasted with the way a more capable and mature actor, Google, has acted. Google seems to try to create profitable tool-like AI for solving real problems, did not race towards ASI creation (in spite of having the most advantageous position to do so), and is begrudgingly following trends. It could be argued that Google is a fast follower that is not aiming at developing ASI, but just catching up with useful capabilities because they lack vision … or it could be argued that a mature capitalist institution like Google is hard-pressed into making investments on a premise that sounds like:
“Let’s waste a bunch of money in order to kill all humans”

Whether the approach Google is taking is incidental or not, I think that the communities focused on “the problem of ASI killing all humans” should be worried that every single person currently trying to create ASI-that-kills-all-humans stems directly from those communities.
III - Alternative Focuses for Attention
The reason ASI-obsessed nerds would provide for why developing an existential-risk technology is entirely ethical is its imminence (and, implicitly, that if it’s imminent, we might as well do it first). There is some hand-waving from Dario/Anthropic around this being necessary for the sake of “the other bad person not developing their own ASI which is more likely to kill all humans”, but this is perfunctory at best and can be easily disproven given their current approach. While this reasoning is a lot more suicidal than the reasoning for developing nukes, I hope the parallels are obvious. The reasons we cannot imagine ASI being prevented mirror the reasons we can’t imagine a nukeless world - in spite of the fact that we live in a bioweapon-free world, achieved by methods which could much more easily have prevented nukes had they been delayed by a few years. A large group of people simply fail to imagine the mere possibility of a world where a destructive technology, once envisioned, can be averted.
There are various blindspots necessary for one to further this narrative:
Everyone else is trying to create ASI (no, it’s still only ASI-obsessed nerds; essentially every other human, including your investors, would prefer it if you focused on something else - ideally something that produces wealth)
Our control over the technology means that we can steer it (in spite of the fact that precursors to said technology routinely leak every few months)
There is nothing we can do to stop it
I believe that, with a bit of imagination, there are a slew of things that could in-principle avert dangerous ASI. I am going to briefly list a few that are not only possible but could be spun into trillion-dollar companies, i.e. with a good enough narrative, money and effort could be directed toward them:
A for-profit diplomatic venture that always furthers peace, cooperation and trade between countries and groups. It rakes in money by making trades beforehand and charging fees to third parties benefiting from the agreements. It doesn’t currently exist due to corrupt incentives, and all for-profits that execute on related goals are too small-minded to aim for the position of global peacemaker. Its hypothetical market cap is easily in the trillions and its moat much more defensible than that of any ASI company.
A company that takes a serious stab at BCIs, aiming for GBs of bandwidth per second and not side-tracking itself with medical side projects - instead focusing solely on problems that look like “can we build a BCI that allows a mathematician to interface with external compute near-instantaneously as part of their problem-solving flow, e.g. allowing external compute to create complex mental images in their mind that aid in finding solutions”. This becomes insanely profitable by selling to traders and research companies. Its hypothetical market cap is easily in the trillions and its moat much more defensible than that of any ASI company.
A company aimed at simplifying software and increasing security by creating open standards and open source software for everything - tackling domains ranging from email, to spreadsheets, to accounting. It makes money by selling the hosted version of the best open source software in each category. It uses a model similar to Google/Meta/Spotify that allows it to foster in-house talent, making it capable of competing with hundreds of thousands of SaaS companies. Its hypothetical market cap is in the trillions, but it’s not easily defensible, mirroring issues that ASI companies have.

These ideas are not only harmless moonshots that advance humanity, but are directly capable of averting a lot of the dangers from ASI. Number 1 allows for the kind of coordination needed to stop an emerging ASI during a hypothetical “takeover” phase. Number 2 pushes the boundaries of human intelligence and allows it to partially scale with compute, giving us more time or hypothetically negating the problem entirely (assuming it leads to a world that is resource-constrained as opposed to intelligence-constrained, i.e. all actions we take are borderline optimal given known information). Number 3 simplifies software and makes it more secure, cutting off the most obvious ways by which any ASI-building company can prosper and by which a rogue ASI can accumulate power. I am giving these examples specifically because they are examples I have pitched to builders and investors that believe ASI is a real concern. The specific reasons ASI-obsessed people seem to have for dismissing such projects boil down to the same circular doom logic that makes nukes inevitable.
IV - My hope on how readers will update
I cannot predict the future, and while I am certain to buy some options when the AGI-IPO-jubilee arrives, I will venture no prediction as to when the bubble will burst and how self-destructive things will have gotten by that point. I can see a world where a failure to automate white-collar jobs, combined with OS models catching up, completely tanks AGI companies, forcing acqui-hires. I can see a world where we waste dozens of trillions of dollars in the lead-up to massive wars fought with algorithmically controlled drone swarms and LLM-enabled hacking, with rogue actors developing state-of-the-art terror weapons. And I can see a world where we automate a bunch of boring office labor, ride around in self-driving cars, further equalize access to knowledge, and progress our understanding of the world by outsourcing a lot of thinking humans are suboptimal at. It’s very hard for me to see a world where a superintelligence eradicates humanity, since we seem to simply be too far off from that world - humans are still able to agentically coordinate to stop threats, and a hypothetical fully-automated society/corporation/system displacing humans would be met with resistance by biological life, which is surprisingly adept at finding exploits in brittle systems and has had billions of years to evolve elasticity. I think we are failing to see this sort of massive coordination right now because ASI is, as of yet, failing to be a threat - but in a world where ASI threatened to take away too much agency from humans, healthy societies would react, and unhealthy ones would be automated away too quickly and collapse under the weight of dysfunction due to this automation happening “too soon”. There are few patterns that repeat throughout history, and even fewer that repeat with absolute certainty.
The single one I can name is that myopic materialist hyperoptimization gives way to multipolar humanism - and this happens constantly, in spite of every system we build being a materialistic hyperoptimizer, in spite of people finding no solid argument for why multipolar humanism should work, plenty of arguments for why it should fail, and even more arguments for why the current flavor of myopic materialism will take over the world. But I think this is an “insane” position, and I understand how, from this point in history, existentially threatening ASI seems inevitable to many people. If so, I hope I can convey two frames:

a) Even if the thing you fear seems inevitable, you should assume that you are not seeing the full picture (since we never do), and neurotically circling the fear will not help. Instead, you should expand your picture of the world by working on efforts to bring about capabilities that can help stop the thing you fear - they might initially seem doomed to fail, but once you start, a new worldview will emerge.

b) When the thing you fear doesn’t come to pass, if you failed to engage in (a) and instead spent your time obsessing around the fear, you should use this as a signal to shift your focus towards productive efforts, not simply update towards a new way of being neurotic about your fear.

Out of all the edgy emails we could have sent in the 21st century, almost none would have gotten us fired from our academic job. Indeed, a lot of edgy emails would have sent the sort of signal that might have made us harder to fire - while still eliciting the desired laugh. But we chose to obsess over the one edgy email we assumed could get us fired, and after spending all of our effort on crafting this email, we are investing all of our savings into making sure every person in the world knows about it. That is not a sign of honest risk modeling; it is a sign of insanity.