You Can't Really Bet on Doom

by Jack_S
21st Sep 2025
Linkpost from torchestogether.substack.com
8 min read

Top comment, from Karl Krueger:

You can play to your outs. If there's a 95% chance the universe disappears overnight, then you plan for the 5% chance that it doesn't. In that 5% case, you do want the retirement savings. Meanwhile you can still consistently advocate that people not build universe-vanishing machines, to try to increase the 5% to maybe 6% or more.

There’s a lot of merit to the claim that we should all be trying to “bet on our beliefs”, or “put our money where our mouth is”. To convince people that we believe what we claim to believe, we should attempt to demonstrate sensible economic behaviour given our stated beliefs about the world.

If we insist that Bitcoin is a bubble, we shouldn’t be HODLing. If we proclaim belief that Elon Musk is a time-bomb waiting to go off (again?), we shouldn’t be all-in on Tesla or xAI. If we think that a benevolent yet mercurial God is ready to send us to a heaven of infinite joy or a hell of endless torture, we should be less scared of social non-conformity and more scared of misunderstanding obscure passages in Romans 3.

And if we are convinced we’re all going to die by 2030, we shouldn’t be planning for our children’s careers, or saving for retirement.

Although I’m not fully in the latter category, I am getting worried that AI could trigger catastrophic outcomes—biological or nuclear warfare, misaligned AI destroying all human life and value, or weird AI multi-agent dynamics generating astronomical suffering. Even those of us with less extreme doomer views might now be faced with the awkward question of how to act, given that our beliefs seem less and less compatible with socially accepted human norms and normal economic behaviour.

Paul Bloom recently wrote about a convinced doomer academic:

I recently attended a festival that featured a panel discussion on Artificial Intelligence. One of the speakers was a computer science professor who told us that there was a 99% chance that AIs would render humanity extinct in ten years or so. He was an enthusiastic speaker with great comic energy, and he cracked us up as he explained how an AI could use bitcoin to bribe people online to create a supervirus that kills us all.

I talked to him later and…asked him how his doomer stance affected his own life. Does he save for his children’s education; does he put aside money for retirement? He ducked these questions. Which is fair enough—none of my business. But if I had to bet, part of the reason he ducked them is that he knew where I was going with this and didn’t want to admit that his doomer views had no effect at all on how he lived his life.

I don’t know if Paul Bloom’s skepticism is warranted here, but he’s not alone in his mistrust. Bryan Caplan and Tyler Cowen have frequently challenged “AI doomers” to put their money where their mouths are, often using their unwillingness to do so to dismiss their ideas. The idea seems to be that, if you haven’t even got the conviction to shift your own behaviour based on your predictions, how do you expect to convince skeptics?

I find these arguments frustrating, but they have some merit: there are sensible behavioural choices you might make from within the confines of the AI doomer camp.

Plausible Economic Behaviour, Given Doom

First, if you think that AI will gradually lead us into significant catastrophe, that catastrophe will probably harm the AI labs themselves. As Tyler Cowen argues, people who understand the markets well can “short some markets, or go long volatility”. You can make money betting against Nvidia, DeepMind, or Microsoft, or buy typical hedges like gold or commodities. You can enjoy a few months of watching your short positions explode in value before…watching everything else explode.

Second, you may believe that there’s a high chance of non-existential collapse, such as rogue AI triggering a nuclear exchange, or catastrophic misuse of bioweapons. In this world, you might expect financial systems to collapse while much of the physical world remains intact. Unlikely as this scenario seems, planning for it is relatively fashionable in some circles, probably because prepping for the apocalypse feels like enormous fun to a certain personality type. It’s increasingly in vogue to buy up land in New Zealand, install a well-equipped post-apocalypse bunker, or invest in resources that would hold value in such a world, such as weapons or tinned goods.

[Image: a luxury bunker in South Dakota]

Third, and most compatible with true doomerism, you can stop worrying and love hedonism. If you think that doom is literally inevitable, saving resources is futile, so you may as well take all your money out of your pension, remortgage your house, and just dedicate the next five years to really enjoying yourself. You can actually make this “hedonism” into a concrete bet—as many on LessWrong will know, Bryan Caplan has an ongoing bet with Eliezer Yudkowsky, where Eliezer gets 13 years to have as much fun as humanly possible with $100 before paying back $200 (inflation-adjusted) in 2030, in the case of human survival. Some of us either have very high discount rates (the present is worth much more than the future), or are pretty bad at saving for the future anyway, so this doesn’t function as a great signal; but at least you’ll have successfully addressed the “if you think we’re doomed, why are you saving for your retirement?” argument.
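
To make the arithmetic of that bet concrete, here is a rough sketch of the break-even calculation (the 2% real interest rate below is my assumed figure, not part of the bet’s terms): taking the $100 beats simply investing it only if your p(doom by 2030) clears a threshold.

```python
# Break-even arithmetic for the Caplan-Yudkowsky bet (a rough sketch,
# not the parties' own reasoning). Yudkowsky receives $100 now; if
# humanity survives to 2030 he repays $200 in inflation-adjusted
# dollars. Ignoring utility-of-money subtleties, taking the bet beats
# investing the same $100 at a real rate r for y years iff:
#     100 * (1 + r)**y  >  (1 - p_doom) * 200

def break_even_p_doom(real_rate: float, years: int) -> float:
    """Smallest p(doom) at which the bet beats just investing the $100."""
    return 1 - (100 * (1 + real_rate) ** years) / 200

for r in (0.00, 0.02):  # the 2% real rate is an assumed figure
    print(f"real rate {r:.0%}: bet pays off if p(doom) > "
          f"{break_even_p_doom(r, years=13):.0%}")
# At a 0% real rate you need p(doom) > 50%; a positive real rate lowers
# the threshold, since the $200 repayment is fixed in real terms.
```

On this crude accounting, and ignoring how much fun $100 can buy, the bet only makes financial sense for someone whose p(doom by 2030) is already substantial.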

Finally, you should be going all in on your chosen survival strategy. If you think that all hope of human survival may rest on stopping AI, and that there is at least some chance of this process working, then you should be dedicating a lot of your time and money to this endeavour. On this, at least, many of the prominent doomers (at least the visible ones—selection effect acknowledged) seem to be acting rationally.

Why Not Bet on Doom?

There are a few reasons why our doomer brethren will not be tempted by these options.

1. You might be “directionally correct”, but fatally wrong on timing

Suppose you predict collapse in 2028–2030 and you bet heavily against those who think it will never come. Instead, 2028–2030 brings a period of unprecedented economic growth before a rogue AI escapes and causes collapse in 2032. You tragically lose your bet and, the real kick in the teeth, the world ends anyway.

Both betting markets and stock markets hate bad timing. Many doom strategies (e.g. shorting the market on the assumption that catastrophe will arrive) bleed money long before they pay off, if they ever do; and by then the world may have changed so irrevocably that the terms of the bet no longer look like a good deal.
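
A toy numerical illustration of the problem (all prices, dates, and the margin rule below are invented for the example): a short that is directionally correct can still be margin-called out of existence during the boom that precedes the crash.

```python
# Toy illustration of "directionally correct, fatally wrong on timing".
# All prices, dates, and the margin rule are invented for the example.

capital = 10_000.0            # margin posted with the broker
entry_price = 100.0           # index level at which we sell short
shares = capital / entry_price

# The doomer is right: the index collapses in 2032.
# But the 2028-2030 boom comes first.
price_path = {2028: 140.0, 2029: 170.0, 2030: 195.0, 2031: 210.0, 2032: 20.0}

for year, price in price_path.items():
    pnl = shares * (entry_price - price)   # shorts profit when price falls
    equity = capital + pnl
    print(f"{year}: index {price:6.1f}, account equity {equity:8.1f}")
    if equity <= 0:
        print(f"{year}: wiped out and forced to close")
        break
```

Had the position survived to 2032 it would have been worth $18,000; instead it is liquidated in 2031, one year short of vindication.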

Your children and dependents might also resent your poorly timed bets. Fortunately, I’m British, and would never consider anything so crass as the “college fund” mentioned in the Bloom article. But I can already imagine my newborn baby’s disappointment at his 18th birthday card: “Sorry, no present. Frankly, I’m astonished we lasted this long.”


2. The value of most things collapses with the system

If your scenario involves total civilisational collapse or extinction, any profits you made with ingenious doom hedges may suddenly be denominated in slightly radioactive dust. And good luck tracking down your debtors and convincing them to return your investments in a world where the banking and communication systems have evaporated.

3. Too many uncertain pathways

Doom, yes, but what kind of doom? Nanorobots? Nuclear war? Disempowerment? A corporate power lock-in? Paperclips?

The whole premise behind doomers declining to state their expected scenario is that we’re too stupid to know how and why an inscrutable superintelligence would destroy us. Humans can no more predict the means of our destruction than a paleolithic wolf could have predicted the embarrassing domestication of his poodle descendants.

Each version of “doom” suggests radically different, even contradictory, economic responses. Some scenarios (gradual disempowerment) might look like an explosion of economic value alongside extreme human disvalue, where the best strategy is “save now, hedonism during the explosion of value”; others might look like gradual decline. It’s just too uncertain to yield any clear prescriptions.

4. Imbalanced social norms

This is the big one, and I think it explains most of why we see normal economic behaviour from people with beliefs that are anything but normal.

In the “no doom” world, there’s a cluster of clearly established plans that society expects of you: go to university, build interpersonal skills, buy a suit, get a typical high-paying job, invest in tech, buy a suburban house, save for retirement, buy life insurance, etc. In most “doom-ish” worlds, we have no such clear blueprint. We have little but prepper videos, a handy printout of The Knowledge or the ALLFED research portfolio, and a few vague heuristics in LessWrong posts.

This is practically awkward. You have to do your own research under significant uncertainty, and you can’t just default to “whatever everyone else in my social class does”.

But it’s also socially awkward. Conformity is an important status marker for most of the world, even for weird people who believe in doom! I’m lucky enough to have friends and family who would forgive me for adopting a bizarre lifestyle if I so chose. But for people with conventional professional roles and social circles, making unconventional economic choices can reduce people’s trust in you and damage your relationships.

This is especially important for those actively trying to persuade people to take AI seriously—being openly conformist is a symbol of the very seriousness you’re trying to convey. There’s a heavy tax on any nonconformist choices you make when signalling your views through your behaviour!

So a conversation between the skeptic (let’s say you’re talking to a funder or policymaker) and the doomer may look like this:

Skeptic: Do you save for your retirement? If so, why would I take your views seriously?

Doomer: Aha! Well, more fool you, I’ve actually taken all my retirement savings out and split them between a debauched farewell tour of all the pleasures of civilisation, a factory for converting petroleum into glucose, and a bunker filled with weapons, tinned goods and oxygen. Oh yes, and I donate half my earnings to Pause AI!

Skeptic: I’m terribly sorry. You’re clearly a very serious person. (Walks away slowly without making eye contact)

The Bottom Line

It’s almost tautological that people tend to conform to social norms, even when their beliefs differ from those that underpin these norms. In the case of a more personal doom, there’s a large literature on spending prior to death, and people are surprisingly conservative when they know death is near. Besides some moderate effects on donations, investment, and spending down after a fatal diagnosis (and, obviously, increased medical spending), knowing you’re going to die soon doesn’t affect your non-medical economic behaviour very much at all. Although they’re not “behavioural” enough for my liking, denial-of-death models even formalise why people don’t act as if “the end is nigh”: they resist updating fully on mortality and therefore stick to standard patterns.

But Paul Bloom’s interlocutor at the start of the article does seem to have failed at communicating his credibility here. People who state high p(doom) estimates can be more convincing if they show that they’ve at least thought about the implications for their own behaviour. For someone (like myself) who thinks that various permutations of doom are possible, but ultimately unpredictable in what form they may take, it makes most sense to change your behaviour in a few ways:

  • Signal where necessary, through economic behaviour and planning, that you’re not expecting an “economically normal” future
  • Place intermediate, rather than doom-based, bets and predictions; you will be far more convincing if your world model is correct about a bunch of other things!
  • Spend a lot of time trying to save the world (in whatever ambitious or modest ways you can)

But, unless you’re in a position where it’s socially fine to do so, don’t burn through too many of your weirdness points on visible and bizarre economic behaviour.