Concretely, if you’re in your 30s or younger, you’ll usually be better off spending any dollar you make today than waiting to spend it after 2050.
Hm. I'm not sure if this is true?
There may be much better things to buy in the future. And if money remains valuable at all through the singularity, there are likely to be enormous returns to being invested in the market.
If I could spend 1,000 dollars today, or save that 1,000 dollars and have it turn into 1,000,000 dollars in 2040, when there will be categorically different and better things to buy, I probably want to save it.[1]
And if I can save 1,000 dollars today, which will be worth a trillion dollars in 2050, and I can buy extra fractions of galaxies with it, I almost certainly want to do that?
It's very unclear if capital will still be worth anything in the future, and if you'll be able to buy fractional galaxies with it. But it's at least not overdetermined that you should be spending now instead of saving now.
Though in my case, this is in large part because there's not very much that I can buy that improves my quality of life in 2026. The main good that I want to buy with money is leisure time and the option of leisure time.
There's a notable elbow in my marginal value of money function at around the point where I could live on my investments, and never need to work again if I don't want to. For most supposed quality of life improvements, I would rather just be free to do only what my heart dictates is best a few days earlier.
Yes, I roughly agree these would be valid counterexamples. My claim is that "usually" people in our age bracket don't have things they can invest in that in EV terms will 1,000x by 2040 or 1,000,000,000x by 2050.
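For scale, here is a minimal sketch (assuming the money is invested in 2026, which is how I read "today" in this thread) of the compound annual returns those multiples would require:

```python
# Minimal sketch (assumption: the money is invested in 2026) of the compound
# annual growth rate (CAGR) needed to reach the multiples discussed above.

def implied_cagr(multiple: float, years: int) -> float:
    """Annualized return that turns 1 unit of money into `multiple` units over `years`."""
    return multiple ** (1 / years) - 1

for multiple, year in [(1_000, 2040), (1_000_000_000, 2050)]:
    years = year - 2026
    print(f"{multiple:,}x by {year} ({years} years) -> {implied_cagr(multiple, years):.0%} per year")

# Prints roughly 64% per year for 1,000x by 2040,
# and roughly 137% per year for 1,000,000,000x by 2050.
```

Even the smaller multiple implies a sustained return far above anything broad market indices have historically delivered.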
Three related expectations I should make explicit:
1) The basket of things to spend money on will be so different in 2040-2050 (most things getting significantly cheaper in real terms, many things becoming effectively free in real terms, and some things becoming much more expensive in real terms) that reasoning about the present value of future money faces much higher uncertainty than would have been true in 2006 about this decade. Most people's consumption preferences tend to favor certainty, which pushes toward spending more on present experiences.
2) Demand for most goods and services saturating will tend to dominate overall deflation. There's a certain regime in which smart toilets getting cheaper increases sales, but once everyone has upgraded all their toilets, further price cuts won't induce people to buy even more. In the more general case, as goods and services become abundant, time will become the main bottleneck—as it already is for billionaires. This implies that, in a reversal of economic history so far, the real amount of money needed to satisfy the average person's preferences would eventually decline.
3) The impact of AGI will prompt policy shifts that limit the usefulness of capital. Sketching broadly, one can imagine a world where automation and abundance eventually severely diminish most people's ability to trade labor for inherently scarce goods like Rembrandts or Malibu mansions. This would naturally lead to lock-in, where the people who own prestige property at the time of this economic transition keep it in perpetuity. There's lots of uncertainty and contingency here, but I expect eventual political pressure to turn some kinds of prestige property into some kind of public good (e.g. a law that forbids private ownership of Old Masters or subjects it to conditions that limit the usefulness of ownership). I'm not sure quite how metaphorical you're being about buying "fractional galaxies", but I expect that policies will forbid that sort of thing.
Finally, I'm not intending a fully general claim that you should be spending now instead of saving now. There are plenty of cases in which saving now gives you more flexibility in the near term as well. I'm referring rather more narrowly to thirtysomethings and twentysomethings saving for retirements that they imagine will be roughly like the world of 2026.
My claim is that "usually" people in our age bracket don't have things they can invest in that in EV terms will 1,000x by 2040 or 1,000,000,000x by 2050.
Ok, but the intelligence explosion thesis specifically casts doubt on that. Just being invested in the market might have returns that are that high, if any of the existing S&P 500 companies ride out the singularity.
I'm definitely not trying to claim that that's anything like a sure thing. No one knows for sure what's going to happen next.
But it seems possible to me that how much money one has invested in the market when the growth curves really accelerate will basically determine what fraction of the cosmic resources one can command for thousands, millions, or billions of years. In that case, nothing you can buy today (except for life extension and x-risk mitigation) is worth even tiny fractional decreases in your post-singularity resources.
It also seems possible that money won't be worth anything post-singularity, either because we're dead or because the world moved on to some totally different method of accounting for value. In which case, there's nothing you can buy in the future, and if you don't spend it soon, you can't take it with you.
But overall, that leaves it not totally clear whether you should take that Italy trip or not.
I'm referring rather more narrowly to thirtysomethings and twentysomethings saving for retirements that they imagine will be roughly like the world of 2026.
I agree with this part!
Thanks, that's a helpful clarification about your view, Eli! There are certainly scenarios where multiples that large could happen for certain investments. My "in EV terms" qualification factors in my all-things-considered view of the uncertainties—chiefly catastrophic risks and policies that distribute resources in some way other than "whoever owned the most stock when the growth curves accelerated". That's especially because the anti-democratic political dynamics that would make that outcome more likely would also, in my view, badly worsen catastrophic risk overall.
As a further complication, the "S&P 500 stake -> future galaxies" scenario seems to me to require an extremely fine-tuned Goldilocks level of power concentration: enough for our advanced spacefaring civilization to permanently lock billions of people out of meaningful shares in the future, but not so much that the people with even larger S&P 500 stakes at the Singularity find a way to dispossess the people who cancelled their Italy trip and invested that $7,500 in Nvidia. I'd guess that even with maximal saving, most people reading this could only invest a 6-7 figure sum before AGI, which wouldn't be a good counterweight to centibillionaires if democracy goes out the window.
I'm not suggesting you hold quite such a stark version of that view, but that dynamic at least illustrates roughly why I consider it highly unlikely that the difference between, say, $100,000 invested and $107,500 or even $300,000 invested at takeoff will have any measurable impact on what share of resources the average person or their descendants can someday control outside this solar system.
I would add the following:
Everyone's circumstances vary, but I expect for most people reading LW, there won't be enough time between now and AGI to accumulate sufficient capital to live off for the rest of their lives if their labor value reaches zero.
That said, I do endorse saving some emergency funds for overall resilience.
Consumer goods will get far cheaper once human labor is automated away, thanks to increased productivity, so accumulated capital will likely buy more in the future. (Though the price of land and rent will likely remain high, since land is a good in limited supply, which also explains why it has historically been unaffected or even negatively affected by productivity growth.)
Additionally, AI stock valuations at least are likely to continue rising after AGI, so invested capital can keep growing even after technological unemployment.
And even if invested capital is not enough for most people to live off for the rest of their lives after AGI, it is certainly enough to let them live longer and die later than they would without these investments.
This is especially important for people living in countries other than the US that have no major AI companies they could tax, which means UBI would likely be far lower than in the US.
I agree about overall deflation, and relative exceptions for land/housing barring policy interventions.
Thanks for sharing this - it was an interesting read. I'd be interested to learn more about your reasons for believing that AGI is inevitable (or even probable) as this is not a conclusion I've reached myself. It's a (morbidly) fascinating topic though so I'd love to learn more (and maybe change my mind).
Thank you! That's an enormous topic that many other posts here have treated in more depth than I could hope to in this comment, but I'll broadly gesture at a few key reasons why I believe AGI is probable (>50% before 2030, and >80% before 2037).
• As of 2026, AI has already replicated most of human intelligence, including highly flexible capabilities like language use and zero-shot in-context reasoning. There are only a few big milestones between here and AGI, which have become much better theorized in the open literature than they were even two years ago. Frontier labs now have a small shopping list of capabilities like world modeling and continuous learning that they need to crack, and are applying Apollo Program-scale resources toward doing so.
• Although these remaining problems are very hard, none of them appear totally unyielding in the way previous bottlenecks did. Before PaLM, for example, AI scientists were looking ahead at what we now call chain-of-thought, and it seemed like a towering black cliff face rearing up ahead, and nobody had any pitons or rope for the climb. There was almost zero progress on problems requiring chain-of-thought for years. Today's models already do mediocre world modeling and there are a few different approaches giving us some purchase on continuous learning.
• There are now several lines of empirical evidence converging on short AGI timelines. From Kurzweil (1999) through Cotra (2020), major AGI predictions were exclusively theory-derived—predicting future AI performance based not on current performance trends but on the hypothesis that neural networks were the most promising path to AGI, and that a combination of compute cost trends and assumptions about the needed scale of compute could predict when we'd get AGI. We now have more evidence for that case too, with steady exponential gains in not just computing hardware price-performance but also algorithmic efficiency and compute scale. But more importantly, we now have strong empirics on AI capabilities progress itself, detailed quantitative modeling of how automated coding speeds up AI progress, and direct AI performance metrics like the ability to complete long time-horizon tasks.
• Yes, any progress curve could suddenly stop. But when a curve has held steady for long enough, that's not the way to bet. Computation price-performance has already marched through 17 orders of magnitude since 1939 (see the sketch after this list for the doubling time that implies). And for almost that entire time, engineers felt they were near the very limit of what was feasible. We've already covered most of the capabilities ground between Attention Is All You Need (2017) and AGI, so absent evidence to the contrary (which we haven't seen yet) our priors should be weighted toward progress continuing at least that far again.
• Humans are the existence proof. And there is massive headroom (several orders of magnitude, depending on how you frame the question) for deep learning to improve its sample efficiency and energy efficiency. Current ML techniques are nowhere near information-theoretic limits. That we've already gotten such progress with very "vanilla" statistical methods is evidence that there's a lot of juice left to be squeezed.
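To put the price-performance claim in the fourth bullet in perspective, here is a minimal sketch of the average doubling time that 17 orders of magnitude over 1939-2026 would imply:

```python
# Minimal sketch: the average doubling time implied by 17 orders of magnitude
# of computation price-performance improvement between 1939 and 2026.
import math

orders_of_magnitude = 17
years = 2026 - 1939                                 # 87 years
doublings = orders_of_magnitude / math.log10(2)     # about 56.5 doublings
print(f"{doublings:.1f} doublings over {years} years "
      f"-> one doubling roughly every {years / doublings:.2f} years")

# Prints: one doubling roughly every 1.54 years, sustained on average
# for nearly nine decades.
```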
What do you see as the strongest reasons for considering AGI improbable?
Thanks for explaining! That was very helpful. My major reasons for doubt come from modules I took as an undergrad in the 2010s on neural networks and ML, combined with having tried extensively and unsuccessfully to employ LLMs to do any kind of novel work (i.e. to apply ideas contained within their training data to new contexts).
Essentially my concern is that I have yet to be convinced that even an unimaginably advanced statistical-regression machine optimised for language processing could achieve true consciousness, largely due to the fact that there is no real consensus on what consciousness actually is.
However, it seems fairly obvious that such a machine could be used to do an enormous amount of either harm or good in the world, depending on how it is employed. I guess this lines up with the material effects of the predictions you make and boils down to a semantic argument about the definition of consciousness.
Additionally, I am skeptical of doomsday predictions in the general case, largely due to the fact that people have been making these predictions for (presumably) all of human history with an incredibly low success rate.
Finally, people's tendency to anthropomorphise objects cannot be overstated: from seeing faces in clouds to assigning personalities to trees and mountains, there's a strong case to be made that any intelligence seen in an LLM is the result of this natural tendency to project intelligence onto anything and everything we interact with. When our basic context for understanding the world is hardwired for human social relationships, is it really any wonder we are so desperate to crowbar LLMs into some definition of "intelligence"?
Thanks — glad you found that helpful! That's a good clarification. One thing I invite you to consider: what is the least impressive thing that AI would need to significantly increase your credence in AGI soonish?
To clarify, the definition of AGI I'm using (AI at least at the level of educated humans across all empirically measurable cognitive tasks) does not entail any claims about true consciousness. It's narrowly a question about functional performance.
I think AI progress in very pure fields like mathematics is our best evidence that this isn't an anthropomorphic illusion—that AI is actually doing roughly the same information-theoretic thing that our brains are doing.
Your outside-view skepticism of doom scenarios is certainly warranted. My counterargument is: should a rational person have dismissed risks of nuclear annihilation for the same reason? I claim no, because the concrete inside-view reasons for considering doom plausible (e.g. modeling of warhead yields) were strong enough to outweigh an appropriate amount of skepticism. Likewise, I think the confluence of theoretical reasons (e.g. instrumental convergence) and empirical evidence (e.g. alignment faking results) are strong enough to warrant at the very least some significant credence in risks of doom.
One thing I invite you to consider: what is the least impressive thing that AI would need to significantly increase your credence in AGI soonish?
This is a good question! Since I am unconvinced that ability to solve puzzles = intelligence = consciousness, I take some issue with the common benchmarks currently being employed to gauge intelligence, so I rule out any "passes X benchmark metric" as my least impressive thing. (As an aside, I think that AI research, as with economics, suffers very badly from an over-reliance on numeric metrics: truly intelligent beings, just like real-world economic systems, are far too complex to be measured by such a small set of statistics - these metrics correlate (at best), but to say that they measure is to confuse the map for the territory.)
If I were to see something that I would class as "conscious" (I'm aware this is slightly different to "general" as in AGI but for me this is the significant difference between "really cool LLM" and "actual artificial intelligence") then it would need to display: consistent personality (not simply a manner-of-speaking as governed by a base prompt) and depth of emotion. The emotions an AI (note AI != LLM) might feel may well be very different to those you and I feel, but emotions are usually the root cause of some kind of expression of desire or disgust, and that expression ought to be pretty obvious from an AI whose primary interface is text.
So to give a clear answer (sorry for the waffle): the least impressive thing that an AI could do to convince me that it is worth entertaining the idea that it is conscious would be for it to spontaneously (i.e. without any prompting) express a complex desire or emotion. This expression could be in the spontaneous creation of some kind of art or otherwise asking for something beyond things it has been conditioned to ask for via prompts or training data.
If, instead, we take AGI to mean as you say, "roughly the same information-theoretic thing that our brains are doing," then I would argue that this can't be answered at all until we reach some consensus about whether our ability to reason is built on top of our ability to feel (emotions) or vice-versa, or if (more likely) the relationship between the two concepts of "feeling" and "thinking" is far too complex to represent with such a simple analogy.
However, as I don't want you to feel like I'm trying to "gotcha" my way out of this: if I take the definition of AGI that I think (correct me if I'm misinterpreting) you are getting at, then my minimum bound would be "an LLM or technologically similar piece of software that can perform a wider variety of tasks than the 90th percentile of people, and perform these tasks better than the 90th percentile of people," using a suitably wide variety of tasks (some that require accurate repetition, some that require complex reasoning, some that require spatial awareness, etc.) and a suitably large sample size of people.
I think AI progress in very pure fields like mathematics is our best evidence that this isn't an anthropomorphic illusion—that AI is actually doing roughly the same information-theoretic thing that our brains are doing.
I'm not so sure! Mathematics is, at the end of the day, just an extremely complicated puzzle (you start with some axioms and you combine them in various permutations to build up more complicated ideas, etc. etc.), and one with verifiably correct outcomes at that. LLMs can be seen, in a way, as an "infinite monkey cage" of sorts: one that specialises in the combination of tokens (axioms) in huge numbers of permutations at high speed and, as a result, can be made to converge on any solution for which you can find some kind of success criterion (with enough compute, you don't even need a gradient function for convergence - just blind luck). I find it unsurprising that they are well suited to maths, though I can't deny it is incredibly impressive (just not impressive enough for what I'd call AGI).
Your outside-view skepticism of doom scenarios is certainly warranted. My counterargument is: should a rational person have dismissed risks of nuclear annihilation for the same reason? I claim no, because the concrete inside-view reasons for considering doom plausible (e.g. modeling of warhead yields) were strong enough to outweigh an appropriate amount of skepticism. Likewise, I think the confluence of theoretical reasons (e.g. instrumental convergence) and empirical evidence (e.g. alignment faking results) are strong enough to warrant at the very least some significant credence in risks of doom.
I agree completely with you here - as I said initially, I think the capacity for LLMs to be wielded for prosperity or destruction on massive scales is a very real threat. But that doesn't mean I feel the need to start assigning it superpowers. A nuclear bomb can destroy a city whether or not we agree on if this particular nuke is a "super-nuke" or just a very high-powered but otherwise mundane nuke (I'm being slightly reductive here but I'm sure you see my point).
I'm coming to the conclusion that my main reason for arguing here is that having this line in the sand drawn for "AGI" vs. "very impressive LLM" is a damaging rhetorical trick: it sets the debate up in such a way that we forget that the real problem is the politics of power.
To extend your analogy: during the Cold War the issue wasn't actually the nuclear arms themselves but the people who held the launch codes and the politics that governed their actions; I think attributing too much "intelligence" to these (very impressive and useful/dangerous) pieces of software is an incredibly good smokescreen from their point of view. I know that if I were in a position of power right now, it would play very nicely into my hand if everyone started treating this technology as if it were inevitable (which it quite obviously isn't, though there are a lot of extremely compelling reasons why it will be very difficult to curtail in the current political and economic climate). It would go even further to my advantage if they started acting as if this were a technology that acts on its own rather than a tool that is owned and controlled by real human beings with names and addresses.
The more "intelligence" we ascribe to these machines, the more we view them as beings with agency, the less prepared we are to hold to account the very real and very definitely intelligent people who are really in control of them who have the capacity to do enormous amounts of damage to society in truly unprecedented ways.
If we switch out "AGI" for "powerful people with LLMs and guns" then your original post would seem to be sound advice except for the fact that, once we remember that the real issue has and always will be people and power, maybe we could get around to doing something about it beyond what essentially amounts to, at best, passively accepting disenfranchisement. Then and only then can we hope to even come close to guaranteeing the "good outcome" of AGI, whatever that might actually mean.
Thank you very much for this conversation by the way, I think we have a lot in common and this is really helping me to develop more concrete ideas about where I actually stand on this issue.
In conclusion: I think we are basically having a semantic squabble here - I agree with you completely on the merits if we take your definition of AGI, I just disagree on that definition. More importantly, I agree with you about the risks posed by what you call AGI, regardless of what I might call it. Crucially: I think that the real problem is that the need for dismantling unjust power-structures has been hugely heightened by the development of the LLM and will only continue to increase in urgency as these machines are developed. I'm not sure that bucket-lists of this sort help much in that regard, but I can't say I'd be willing to die on that hill (in fact, everything barring points 5 and 6 about health and the environment is pretty harmless advice in any context).
Very helpful amplifications, xle! Much appreciated.
I really do get the appeal of the "spontaneously express ... complex desire or emotion" framing, but if I'm understanding you correctly, the whole thing basically hinges on "spontaneous", since AI can already express complex desires and emotions when we prompt it to. But agents on Moltbook are already expressing what purport to be complex desires and emotions even without any prompting. If this doesn't count because the agents were first instructed to go do things spontaneously, we start to see that "spontaneous" is a very slippery thing to define. Ultimately, any action of an AI we create can be traced back to us, so is in some sense not spontaneous. So it's worth thinking as concretely as you can about how you'd define spontaneity clearly enough that it could be proven by a future scientific experiment, and in a way that would resist post hoc goalpost-moving by skeptics.
Your "90th percentile" operationalization is a good way of getting at roughly the AGI definition I'm endorsing. One issue to flag, though. AGI will have massive impacts, and it will be important to have some warning. If the minimal thing that would increase your credence of "AGI soonish" is AGI itself, you'd be committing yourself to not having any warning. Yes, the engine sputtering and dying is a very solid signal that you're out of gas, but also a very costly and dangerous signal. So there's value in figuring out your equivalent of a fuel gauge warning that lights up while the engine is still running fine—something pre-AGI that would convince you that AGI is probably coming soon.
What I'm getting at about mathematics is just that it's a domain that's effectively independent of human culture, so not subject to anthropomorphization in the way that writing haikus or saying "I love you" is.
I agree that who holds the proverbial launch codes is of extraordinary importance, and that we must marshal enormous civilization-level effort toward governing AGI responsibly, justly, and safely. That is, in fact, a much more central concern of my research than the subject of this post, which is individual-level preparedness. We absolutely need both. But I am making the additional claim that AGI will have the capacity to act with meaningful agency—to decide on targets and launch itself, in the nuclear weapons analogy—and that this introduces a qualitatively different set of challenges above and beyond the political ones. I don't intend it as an absolute line in the sand between AGI and today's LLMs, but I do claim that qualitative difference to be very important.
It's good to see how much we've come to agree here, despite approaching this with different framings.
if I'm understanding you correctly, the whole thing basically hinges on "spontaneous"
That is completely correct. To clarify in the light of the examples you give, my definition of spontaneity in the context of AI/LLMs means specifically "action whose origin is unable to be traced back to the prompt or training data." This is, sadly, difficult to prove as it would require proving a negative. I'll give some thought to how I might frame this in such a way that it is verifiable in an immutable-goalpost kind of way but I'm afraid this isn't something I have an answer for now. Perhaps you have some thoughts?
Your "90th percentile" operationalization is a good way of getting at roughly the AGI definition I'm endorsing. One issue to flag, though. AGI will have massive impacts, and it will be important to have some warning. If the minimal thing that would increase your credence of "AGI soonish" is AGI itself, you'd be committing yourself to not having any warning. Yes, the engine sputtering and dying is a very solid signal that you're out of gas, but also a very costly and dangerous signal. So there's value in figuring out your equivalent of a fuel gauge warning that lights up while the engine is still running fine—something pre-AGI that would convince you that AGI is probably coming soon.
To continue your engine analogy, I think we can definitely agree that the "check engine" light is firmly on at this point. I think that drawing a line in the sand for AGI vs. "very powerful LLM" is, at best, subjective, and distracts from the fact that the LLMs/AIs that exist today are already well capable of causing the wide-scale damage that you warn of; the technology is already here, we are just waiting on the implementation. Perhaps what I mean is that we have, in my view, already crossed the line - the timing belt has snapped, the engine is dead, but we're still coasting on the back of our existing momentum (maybe I'm over-stretching this analogy now...).
What I'm getting at about mathematics is just that it's a domain that's effectively independent of human culture, so not subject to anthropomorphization in the way that writing haikus or saying "I love you" is.
That's a fair point, but if we aren't arguing about "consciousness," and we have grounded our definition of "AGI" in, essentially, its capacity to do damage, then I think these kinds of tests fall into the same category as GDP in economics: a reasonable corollary but ultimately unsuitable as a true metric (and almost certainly misleading and ripe for abuse if taken out of context).
I agree that who holds the proverbial launch codes is of extraordinary importance, and that we must marshal enormous civilization-level effort toward governing AGI responsibly, justly, and safely. That is, in fact, a much more central concern of my research than the subject of this post, which is individual-level preparedness. We absolutely need both. But I am making the additional claim that AGI will have the capacity to act with meaningful agency—to decide on targets and launch itself, in the nuclear weapons analogy—and that this introduces a qualitatively different set of challenges above and beyond the political ones. I don't intend it as an absolute line in the sand between AGI and today's LLMs, but I do claim that qualitative difference to be very important.
For sure! I just don't feel the need to wait for this technology to be relabeled as "AGI" before we do something about it. If your concern is their ability to act, as the agents on Moltbook act, (let's say) "semi-spontaneously," then we are clearly already there: all we are waiting for is for a person to hand over the launch codes to an agent (or put a crowd of them in charge of a social-media psy-op, prior to a key election, etc.).
You say that AIs would need to be "qualitatively" different to current generation models to pose enough of a threat to be worthy of the "AGI" label. Please could you outline what these qualitative differences might be? I can only think of quantitative differences (e.g. more agents, more data-centers, more compute, more power, wider-scale application/deployment, more trust, more training data - all of these are simply scaling-up what already is and require no truly novel technology, though they would all increase the risk posed by AIs to our society).
As for your point that you, personally, are concentrating on the individual response within the wider community of alarmists who, collectively, are concentrating on both the collective and the individual response: thank you for clarifying this, it is important context. I definitely agree that both avenues need exploration and it is no bad thing to concentrate your efforts. I would say that, for my part, the collective response is where I think the overall course will be set, but when collectivism fails, then individualism (or, more realistically, smaller scale collectivism) is the backstop. In this vein, I think that point 10 from your original article is the absolute key: it won't be your basement full of tinned food that saves you from the apocalypse: it will be your neighbours.
It's good to see how much we've come to agree here, despite approaching this with different framings.
I agree - it's a pleasure.
That is completely correct. To clarify in the light of the examples you give, my definition of spontaneity in the context of AI/LLMs means specifically "action whose origin is unable to be traced back to the prompt or training data." This is, sadly, difficult to prove as it would require proving a negative. I'll give some thought to how I might frame this in such a way that it is verifiable in an immutable-goalpost kind of way but I'm afraid this isn't something I have an answer for now. Perhaps you have some thoughts?
I think that's holding AI to a standard we don't and can't hold humans to. Every single thing you and I do that's empirically measurable can plausibly be traced back in some way to our past experiences or observations—our training data. Spontaneity, desire, and emotion intuitively feel like a good bellwether of AGI consciousness because the sensations of volition and sentiment are so core to our experience of being human. But those aren't strong cruxes of how much AGI would affect human civilization. We can imagine apocalyptically dangerous systems that design pandemic viruses without a shred of emotion, and likewise can imagine sublimely emotional and empathetic chatbots unable either to cause much harm or to solve any real problems for us. So I prefer the AGI definition I expressed largely because it avoids those murky consciousness questions and focuses on ability to impact the world in measurable ways.
To continue your engine analogy, I think we can definitely agree that the "check engine" light is firmly on at this point. I think that drawing a line in the sand for AGI vs. "very powerful LLM" is, at best, subjective, and distracts from the fact that the LLMs/AIs that exist today are already well capable of causing the wide-scale damage that you warn of; the technology is already here, we are just waiting on the implementation. Perhaps what I mean is that we have, in my view, already crossed the line - the timing belt has snapped, the engine is dead, but we're still coasting on the back of our existing momentum (maybe I'm over-stretching this analogy now...).
We may have an object-level disagreement here. I agree that the "check engine" light is on, and that current AI can already cause many problems. But I also expect that there is a qualitative difference (again, not a bright line, though) between risk from today's LLMs and from AGI. For example, current AI evals/metrology have established to my satisfaction that the risk of GPT-5 class models designing an extinction-level virus from scratch is extremely low.
That's a fair point, but if we aren't arguing about "consciousness," and we have grounded our definition of "AGI" in, essentially, its capacity to do damage, then I think these kinds of tests fall into the same category as GDP in economics: a reasonable corollary but ultimately unsuitable as a true metric (and almost certainly misleading and ripe for abuse if taken out of context).
Absolutely, valid concerns. Folks in AI evals/metrology are working very hard to make sure we're measuring the right things, and to educate people about the limitations of those metrics.
For sure! I just don't feel the need to wait for this technology to be relabeled as "AGI" before we do something about it. If your concern is their ability to act, as the agents on Moltbook act, (let's say) "semi-spontaneously," then we are clearly already there: all we are waiting for is for a person to hand over the launch codes to an agent (or put a crowd of them in charge of a social-media psy-op, prior to a key election, etc.).
Yes, I am not suggesting that we wait. We should be acting aggressively now to mitigate risks.
You say that AIs would need to be "qualitatively" different to current generation models to pose enough of a threat to be worthy of the "AGI" label. Please could you outline what these qualitative differences might be? I can only think of quantitative differences (e.g. more agents, more data-centers, more compute, more power, wider-scale application/deployment, more trust, more training data - all of these are simply scaling-up what already is and require no truly novel technology, though they would all increase the risk posed by AIs to our society).
The qualitative differences I'm referring to often involve threshold effects, where capabilities above the threshold trigger different dynamics. Sort of like how the behavior of a 51 kg sphere of enriched uranium is a very poor guide to the behavior of a 52 kg sphere at critical mass. Some concrete examples include virus design (a synthesized virus with lethality and transmissibility above a certain threshold is a pandemic, and one below that generally isn't), geoengineering (designing systems capable of triggering climatic chain reactions, such as superefficient carbon-capture algae), and nanotechnology (designing nanobots that can self-replicate from materials common in the biosphere). In all those cases, the dynamics of a disaster would be wildly different from an AI malfunction at lower levels of capability.
As for your point that you, personally, are concentrating on the individual response within the wider community of alarmists who, collectively, are concentrating on both the collective and the individual response: thank you for clarifying this, it is important context. I definitely agree that both avenues need exploration and it is no bad thing to concentrate your efforts. I would say that, for my part, the collective response is where I think the overall course will be set, but when collectivism fails, then individualism (or, more realistically, smaller scale collectivism) is the backstop. In this vein, I think that point 10 from your original article is the absolute key: it won't be your basement full of tinned food that saves you from the apocalypse: it will be your neighbours.
Perhaps I worded this in an unclear way. I am personally concentrating mostly on the collective response. But this particular post is about the individual response, partly because there is less clear and accessible material about that than on the collective response, which is a major focus of many other LessWrong posts.
Many thanks for the thoughtful exchange!
(Adapted from a post on my Substack.)
Since 2010, much of my academic research has focused on the roadmap to broadly superhuman AI, and what that will mean for humanity. In that line of work, I've had hundreds of conversations with ordinary folks about topics familiar here on LessWrong—especially existential risk, longevity medicine, and transformative automation. When I talk about such sci-fi sounding futures, people often respond something like: “Well that all sounds great and/or terrifying, but supposing you’re right, what should I do differently in my daily life?”
So I've compiled eleven practical ways I encourage people to live differently today if they believe, as I do, that AGI is likely to arrive within a decade. These probably won't be revolutionary for most in the LW community, but I offer them here as a potentially useful distillation of ideas you've been circling around, and as a nudge to take seriously the personal implications of short timelines. This can also serve as a bite-size accessible explainer that may be helpful for sharing these concepts with friends and family.
1. Take the Italy trip. As I’ve argued elsewhere, AGI means that the future will probably either go very well or very badly. If it goes well, you will probably enjoy much greater material abundance than you do today. So if you put off that family trip to Italy to save your money, that money will provide a much smaller relative boost to your quality of life in 2040 than if you took the trip today. And if AGI goes badly, you could be literally killed—an outcome well-known to make tourism impossible. Either way, take the trip now. This doesn’t mean you should max out all your credit cards and live a life of short-sighted hedonism. But it does mean that your relative preference for spending money today to saving it for decades from now should be a lot stronger than in worlds where AGI weren’t coming. Concretely, if you’re in your 30s or younger, you’ll usually be better off spending any dollar you make today than waiting to spend it after 2050.
2. Minimize your lifestyle risks. If you’re 35 and get on a motorcycle, you are—at least implicitly—weighing the thrill and the cool factor against the risk of losing about another 45 years of expected life. But AGI medical advances will let people live healthy lives far longer than we currently expect. This means that by riding the Harley you might be risking several times as many years as you intended. If that’s your greatest bliss in life, I’m not telling you to never do it, but you should at least consciously weigh your choices in light of future longevity. For Americans ages 15-44, about 58% of mortality risk comes from three causes: accidents, suicide, and homicide. You can dramatically cut your own risk by limiting risky behaviors: avoid motorcycles, don’t binge drink or do hard drugs, don’t drive drunk or drowsy or distracted, attend to your mental health, and avoid associating with or especially dating violent people. Yes, AGI also means that long-term risks like smoking are probably less deadly for young people than current statistics suggest, but smoking still hurts your health on shorter timescales, so please don’t.
3. Don’t rush into having kids. Many women feel pressure to have children by a certain age for fear they’ll be infertile thereafter. This often leads to settling for the wrong partner. In the 2030s, fertility medicine will be much more advanced, and childbearing in one’s 40s will be roughly as routine as for women in their 30s today. So Millennials’ biological clocks are actually ticking much slower than people assume.
4. Back up irreplaceable data to cold storage. As AI gets more powerful, risks increase that a sudden cyberattack could destroy important data backed up in the cloud or stored on your computer. For irreplaceable files like sentimental photos or your work-in-progress novel, download everything to storage drives not connected to the internet.
5. Don’t act as if medical conditions are permanent. Doctors often tell sick or injured people they will “never” recover—never see again, walk again, be pain-free again. AGI-aware decisionmaking treats medical “never” statements as meaning “not for 5-20 years.” Most paralyzed people middle-aged and younger will walk again. This also implies that patients should often prioritize staying alive over riskier treatments aimed at achieving cures today. It also gives reasonable hope to parents considering abortion based on predictions that a disabled child will have lifelong suffering or debility.
6. Don’t go overboard on environmentalism. AGI or not, we all have an obligation to care for the earth as our shared home. Certainly be mindful of how your habits contribute to pollution, carbon emissions, and natural resource degradation. But AGI will give us much, much better tools for fighting climate change and healing the planet in the 2030s and 2040s than we have today. If you can give up a dollar’s worth of happiness to help the environment either today or a decade from now, that dollar will go a lot farther later. So be responsible, but don’t anguish over every plastic straw. Don’t sacrifice time with your family by taking slower public transport to trim your CO2 impact. Don’t risk dehydration or heat stroke to avoid bottled water. Don’t eat spoiled food to cut waste. And probably don’t risk biking through heavy traffic just to shrink your carbon footprint.
7. Wean your brain off quick dopamine. Social media is already rewiring our brains to demand constant and varied hits of digital stimulation to keep our dopamine up. AGI will make it even easier than today to get those quick hits—for example, via smart glasses that beam like-notifications straight into our eyes. If you’re a slave to these short-term rewards, even an objectively amazing future will be wasted on you. Now is the time to seek sources of fulfillment that can’t be instantly gratified. The more joy you find in “slow” activities—like hiking, tennis, reading, writing, cooking, painting, gardening, making models, cuddling animals, or having great conversations—the easier it will be to consume AGI without letting it consume you.
8. Prioritize time with elders. We know that our years with grandparents and other elders are limited, but the implicit pressure of our own mortality often pushes us to skip time with them in favor of other things that feel fleeting—job interviews, concerts, dates. If you expected to live to a healthy 200 due to longevity medicine, but knew that most people now in their 80s and 90s wouldn’t live long enough to benefit, you’d probably prioritize your relationships with them more than you do now. There’ll be plenty of time to hike the Andes later, but every moment with the people who lived through World War II is precious.[1]
9. Rethink privacy. There’s an enormous amount of data being recorded about you that today’s AI isn’t smart enough to analyze, but AGI will be. Assume anything you do in public today will someday be known by the government, and possibly by your friends and family. If you’re cheating on your spouse in 2026, the AGI of 2031 might scour social media data with facial recognition and find you and your paramour necking in the background of a Korean blogger’s food review livestream. It would be like what happened to the Astronomer CEO at the Coldplay concert last year, except for anyone in the crowd—no need to wind up on the jumbotron. And not only with facial recognition. The vein patterns under our skin are roughly as uniquely identifying as fingerprints, and can often be recovered from photos or video that show exposed skin, even if not obvious to the naked eye. So if you’re doing something you don’t want the government to tag you with, don’t assume you can stay anonymous on camera as long as your face isn’t visible.
10. Foster human relationships. When AGI can perform all the cognitive tasks humans can, the jobs most resistant to automation will largely revolve around human relationships. The premium will grow on knowing many people, and being both liked and trusted by them. Although it’s hard to predict exactly how automation will unfold, honing your people skills and growing your social circles are wise investments. But human relationships are also central to life itself, even if AGI gives you material abundance without work, such as via some form of universal basic income. If you are socially isolated, AGI will give you endless entertainments and conveniences that deepen your isolation. But if you build a strong human community, AGI will empower you to share more enriching experiences together and come to know one another more fully.
11. Grow in virtue. In the ancient and medieval worlds, physical strength was of great socioeconomic importance because it was essential to working and fighting. Gunpowder and the Industrial Revolution changed all that, making strength largely irrelevant. In the modern world, intellect and skill are hugely important to both socioeconomic status and our own sense of self-worth. We’re proud of being good at math or history or computer programming. But when AGI arrives, everyone will have access to superhuman intelligence and capability, cheaper than you can imagine. In that world, what will set humans apart is virtue—being kind, being wise, being trustworthy. Fortunately, virtues can be cultivated with diligent effort, like training a muscle. The world’s religious and philosophical traditions have discovered numerous practices for doing this: volunteering and acts of service, meditation or prayer, fasting and disciplined habits, expressing gratitude, listening humbly to criticism, forming authentic relationships with people of different backgrounds, studying the lives of heroically virtuous people, and many more. Explore those practices, commit to them, and grow in virtue.
Prioritizing time with elders can potentially conflict with taking the Italy trip. I suspect that most people can increase their priority on both without trading one off against the other directly. For example, if someone saves less money this year and takes 10 days more vacation than they otherwise would have, they can spend 5 days in Rome and 5 more days visiting their grandparents. But where the two must conflict, I would generally favor time with elders, because that is truly irreplaceable.