I like the sentiment and much of the advice in this post, but unfortunately I don't think we can honestly and confidently say "You will be OK".
Yeah, I feel like the title of this post should be something like "act like you will be OK" (which I think is pretty reasonable advice!)
Thanks for the comment - there were so many comments about the title that I now added an addendum about it.
Thanks for the addendum! I broadly agree with "I believe that you will most likely be OK, and in any case should spend most of your time acting under this assumption.", maybe scoping the assumption to my personal life (I very much endorse working on reducing tail risks!)
I disagree with the "a prediction" argument though. Being >50% likely to happen does not mean people shouldn't give significant mental space to the other, less likely outcomes. This is not how normal people live their lives, nor how I think they should. For example, people don't smoke because they want to avoid lung cancer, but their chances of dying of this are well under 50% (I think?). People don't do dangerous extreme sports, even though most people doing them don't die. People wear seatbelts even though they're pretty unlikely to die in a car accident. Parents make all kinds of decisions to protect their children from much smaller risks. The bar for "not worth thinking about" is well under 1% IMO. Of course "can you reasonably affect it" is a big Q. I do think there are various bad outcomes short of human extinction, eg worlds of massive inequality, where actions taken now might matter a lot for your personal outcomes.
This is more or less what I wrote regarding seatbelts etc. - when there are parts of the probability space that could be very bad and that you have some control over, you should take some common-sense precautions to reduce them, even if you do not constantly dwell on these risks.
I worry that there’s an extremely strong selection effect at labs for an extreme degree of positivity and optimism regardless of whether it is warranted.
However, I expect that, like the industrial revolution, even after this change, there will be no consensus on whether it was good or bad
My impression is that there’s a real failure to grapple with the fact that things might not “be okay” for a large number of young people as a direct result of accelerating progress on AI.
I regularly have people asking me what direction they should pursue in college, career-wise, etc., and I don't think I can give them an answer like "be smart, caring and knowledgeable and things will work out". My actual answer isn't a speech about doom; it's honestly "I don't know, things are changing too fast, I wouldn't do entry-level software".
My impression of https://www.lesswrong.com/posts/S5dnLsmRbj2JkLWvf/turning-20-in-the-probable-pre-apocalypse is that it resonated with people because, even short of doom, it highlights real fears about real problems, and I think people have the accurate impression that if they're left behind and unemployed, a post like this one won't keep them off the street.
to the task of preparing for the worst
I think a lot of the anxiety is that it doesn't feel like anyone is preparing for anything at all. If someone's question is "so what happens to me in a few years? Will I have a job?" and your response is just "there might be new jobs, or wealth will get dramatically redistributed, we really have absolutely no idea", that's not "failing to prepare for the worst". The team responsible for exactly this (AGI Readiness) was recently disbanded.
This is not about “thinking positive”, and this post feels like it’s just failing to engage with the actual concerns the average young person has in any way.
there’s an extremely strong selection effect at labs for an extreme degree of positivity and optimism regardless of whether it is warranted.
Absolutely agree with this - and that's a large part of why I think it's incredibly noteworthy that despite that bias, there are tons of very well informed people at the labs, including Boaz, who are deeply concerned that things could go poorly, and many don't think it's implausible that AI could destroy humanity.
While this was not the focus of this post, I can completely understand the deep level of insecurity people have about AI. The data is mixed, but it does seem that at least some companies' short-term reaction to AI is to slow down entry-level hiring for jobs that are more exposed to AI. But AI will change so much that this won't necessarily continue to be the case. Overall, times of rapid change can create opportunities for young people, especially ones who have a deeper understanding of AI and know how to use it.
It may end up that the people more impacted are those who have 10-15 years of experience: enough to be less adaptable, but not enough to be financially set and sufficiently senior. But tbh it's hard to predict. Given our prior experience with a vast increase in the labor force - the industrial revolution - I think in the long run AI is likely to vastly increase productivity and welfare even on a per capita basis, and so people would be better off (see that Kelsey Piper piece I linked in my addendum). But I agree it's super hard to predict and there is a whole range of potential scenarios.
You keep saying things like 'well, it's really unclear what's going to happen'. The uncertainty is not a comfort; it's the heart of the problem.
Being born in a shitty economy is one thing: you can come to terms with likely outcomes in advance, iterate on low-stakes strategies for improving your life on the margin, throw a few hours a week into some moonshot project. Sure, you have to accept some quality of life compromises and, to avoid incinerating yourself, tame your ambition, but it's a normal kind of experience that hundreds of millions of people in wealthy nations live through.
Being born into a chimeric economic nightmare, where neither ultra-bearish single-digit automation projections nor ultra-bullish 'so much automation you just wake up to a Dyson swarm one day' fantasies can be confidently ruled out, is another thing entirely. Most do not have the bankroll to place sufficiently diverse bets to hedge their risk in all of the possible worlds (which is the default move of the well-resourced when experts can't come to consensus). They have to develop their own inside view in order to even begin placing bets on their future, which requires wading through the ravings of malevolent ideologues and grifters while struggling to synthesize technical materials from a half-dozen different fields. And by the way, if you get it wrong, there's unlikely to be a safety net, and all of your cool AI friends keep talking about how they're going to 'escape the permanent underclass'. That is hell.
And that's just the plight of the careerist! You can't simply set professional ambition aside, accept some humble calling, and focus on other aspects of life like art or dating or family, because AI may be redefining those as well, in ways just as unpredictable and chaotic. And so even the devoted family man or humble artist really ought to wade through the ravings of malevolent ideologues and grifters while struggling to synthesize technical materials from a half-dozen different fields, so they may sufficiently optimize their familial well-being or creative life. Hell again!
I agree that it's better to be resolute. I agree that any one of us may get lucky in the near after, and that knowing a thing or two about AI is likely to increase your odds of success (it sure does look like the future has something to do with AI). But I just reject entirely this attitude that the unwise children are over-reacting to tail risks in an uncertain environment. They're reacting to the uncertainty itself, which is absolutely higher than it has ever been, and in which they have much greater stakes than you do personally, especially if we're going to be waiting around for AGI for a couple of decades.
I do not blame young people or claim that they are "unwise" or "over reacting". I care a lot about what the future will look like for young people, also because I have two kids myself (ages 13 and 19).
I am really not sure what it means to "place sufficiently diverse bets to hedge their risk in all of the possible worlds". If that means to build a bunker or something, then I am definitely not doing that.
I do not see AI as likely to create a permanent underclass, nor to make it so that it would not make sense to date or raise a family. As I said before, I think that the most likely outcome is that AI will lift the quality of life of all humans in a way similar to the lift from pre-industrial times. But even in those pre-industrial times, people still raised families.
I believe that it is not going to be the case that "if you get it wrong, there's unlikely to be a safety net" or "any one of us may get lucky in the near after". Rather, I believe that how AI will turn out for all of us is going to be highly correlated: not necessarily 100% correlation (either we all get lucky or we all get very unlucky) but not that far from it either. In fact, I thought that the belief in this strong correlation of AI outcomes was the one thing that MIRI folks and I had in common, but maybe I was wrong.
Oh, I think I see where we're talking past each other: I mean to bracket the x-risk concerns for the sake of describing the professional/financial realities (i.e., 'if we're going to be waiting around for AGI for a couple of decades'; so this is only getting at half of the anxiety in the post that inspired yours), and you read my post as if I were mostly talking about x-risk. I think if you care about engaging with it, you should read it again with this in mind (it is also reasonable for you not to care about engaging with it).
I also don't think that anything I'm saying ought to be sensitive to your beliefs about AI impacts; my point is, to the layperson, the situation is extremely confusing, and their usual tool of 'trust experts' is critically broken in the case of nearly any topic that touches on AI. There exist others as decorated as yourself who make wildly different predictions than you do; for people relying on expert consensus/wisdom of the crowds to shape their decision-making, that's an absolute death stroke. Experts can't agree on what's going to happen, so nobody knows how to prepare. For people who are establishing patterns in their adult life for the first time, for whom there is no default behavioral set, doing whatever most respects the realities of artificial intelligence is a very high priority, if only they could locate it.
I am really not sure what it means to "place sufficiently diverse bets to hedge their risk in all of the possible worlds". If that means to build a bunker or something, then I am definitely not doing that.
I meant this literally about investing; e.g., giving money to Leopold Aschenbrenner's hedge fund, investing in military technology companies and hardware manufacturers, etc.
I'm really not arguing object-level points here; I am trying to give you feedback, as a member of your target audience, on why your post, and your replies to criticisms of your post, do not, as Bronson said, "engage with the actual concerns the average young person has in any way."
You are insufficiently modeling the perspective and experiences of your audience for your words to resonate with them. Nobody has any reason to trust your expert predictions over any other experts, so they've got to make up their own minds, and most aren't equipped to do that, so they surrender to dread.
You're sad that so many young people are hopeless, and you want to comfort them. But in your comforting gesture, you are demonstrating a pretty profound misunderstanding of our concerns. If you were a trusted older friend or family member, I would appreciate your effort and thank you for the expression of care, however mistargeted I felt it was. But you're not; you're someone making millions of dollars off of what could easily be the biggest mistake humanity has ever made, chiding me to lighten up over the future I believe you are actively destroying, seemingly without pausing long enough to even understand the mechanisms behind my concerns.[1]
I thought that the belief in this strong correlation of AI outcomes was the one thing that MIRI folks and I had in common, but maybe I was wrong.
The set of things MIRI employees have to agree with to be happy with the work they're doing is smaller than is often assumed from the outside. I am concerned both about extinction from powerful AI, and about prosaic harms (e.g. inequality), because my timelines are somewhat bearish, and I expect the cascading impacts of prosaic harms to reduce collective sanity in ways that make obviating x-risk more and more challenging (although, of course, in my work at MIRI, I focus on x-risk).
I basically don't expect that this message will heal the communication gap between us, so do feel very free to bow out.
This point is somewhat complicated by the fact that you're on the alignment team (rather than working on capabilities or similar), but I think it still basically stands, since your communications here and elsewhere don't indicate that you share my concerns, and my guess is that I wouldn't really think you and I mean the same thing when we use the term 'alignment', none of which is really worth getting into here and now.
Thank you for engaging. I now understand your point better.
The set of things people are worried about regarding AI is very large, and I agree I addressed only part, and maybe not the most important part, of what people are worried about. I also agree that "experts" disagree with each other, so you can't just trust the experts. I can offer my thoughts on how to think about AI, and maybe they will make sense to some people, but they should make their own judgement and not take things on faith.
If I understand correctly, you want, for the sake of discussion, to consider the world where AGI takes 20+ years to achieve. People have different definitions of AGI, but it seems safe to say this world would be one where progress significantly undershoots the expectations of many people in the AI space and AI companies. There is a sort of positive feedback loop here - I imagine that if AI undershoots expectations then funding will also be squeezed, which could lead to even more slowdown - and so in such a world it's possible that over the next 20 years AI's impact, for both good and bad, will just not be extremely significant.
If we talk about "prosaic harms" we should also talk about "prosaic benefits". If we take the view of AI as a "normal technology", then our past experience with technologies is that overall the benefits are larger than the harms. Over the long run, we have seen a pretty smooth and consistent increase in life expectancy and other metrics of wellbeing. So if AI does not radically reshape society, the baseline expectation should be that it has an overall positive impact. AI may well have a positive impact even if it does radically reshape humanity (I happen to believe it will), but we have less prior data to draw on in that case.
"We'll replace tons of jobs really fast and it will probably be good for anyone who's smart and cares" is counterintuitive, for good reasons. I'm a good libertarian capitalist like most folks here, but markets embedded in societies aren't magic.
New technologies have been net beneficial over the long run, not the short run. Job disruptions have taken up to a hundred years, by some good-sounding arguments, to return to the same average wage. I think that was claimed for industrial looms and the steam engine; but there's a credible claim that the average time of recovery has been very long. And those didn't disrupt the markets nearly as quickly as drop-in replacements for intellectual labor would do.
Assuming that upsides of even a relatively slow, aligned AI progress are likely to outweigh the negatives, without further argument, seems purely optimistic.
AI will certainly have prosaic benefits. They seem pretty unlikely to outweigh the harms.
Civilizations have not typically reacted well enough to massive disruptions to be optimistic about the unknowns here. Spreading the advantages of AI as broadly as the pains of job losses seems like threading a needle that nobody has even aimed at yet.
I am an optimist by nature. The more closely I think about AI impacts, the less optimistic I feel.
I don't know what to say to young people, because uncertainty is historically really bad, and the objective situation seems to be mostly about massive uncertainty.
Wow, I'm really glad that you stuck with me here, and am surprised that we managed to clear so much up. It does feel to me now like we're on the same page and can dig in on the object level disagreement / clarify the dread-inducing long timelines picture.
When I'm thinking about worlds where AGI takes 20+ years to arrive, it's not necessarily accompanied by a general slowing of progress. It's usually just "that underspecified goal out there on the horizon is further away than you think it is." I don't at all dispute that contemporary systems are powerful, or that progress is very fast, and I don't actually expect legislation, economic blowback, or public opinion to slow things down (I'd like it if they did and am trying to make that happen! But it doesn't feel especially likely). Rather, conditional on very powerful systems taking a while to arrive, I imagine it would be because of a discontinuity in the requirements, and an inadequacy of our existing metrics (plus the incessant gaming of those metrics).
Given the incentives, lack of feedback loops, and general inscrutability of the technology, I'd be pretty unsurprised if it turns out we're just totally wrong about what a multi-day 80 percent task completion time horizon on the METR eval means for the capabilities of that model once it's deployed in the world. I also wouldn't be that shocked if it turns out the capabilities requirements for a system that gave multiple OOMs of speedup to existing progress (a la 'superhuman coder' in AI2027) were further off than many expect.
However, even in these worlds, I'm pretty worried about gradual disempowerment and prosaic harms. AGI won't take 20 years because we are wrong about the capabilities of systems available in 2026, but it may take 20 years because we were wrong about the delta between current systems and the machine god.
Current systems are indeed very powerful, and will simply take time to diffuse through the economy. However, once this process begins in earnest (which it may have already), we'll be (as Seth said in his comment) in the painful part of economic expansion, where average quality of life actually goes down before going back up, which can last a very long time! If you couple this picture with the idea that progress isn't slowed (the target is just further away), you end up in a new industrial revolution every time a SOTA model is released. Then you're stuck in the painful investment part indefinitely, since the rewards of the last boom were never felt, and instead immediately invested in the next boom (with its corresponding 10x payoff).
Something like this is already happening locally at the frontier labs. Here's Dario talking about it:
"There's two different ways you could describe what's happening in the model business right now. So, let's say in 2023, you train a model that costs $100 million, and then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs $1 billion. And then in 2025, you get $2 billion of revenue from that $1 billion, and you've spent $10 billion to train the model.
So, if you look in a conventional way at the profit and loss of the company, you've lost $100 million the first year, you've lost $800 million the second year, and you've lost $8 billion in the third year — it looks like it's getting worse and worse."
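(To make the arithmetic in that quote concrete, here's a minimal sketch of the per-year P&L pattern it describes - training cost growing 10x a year, each model earning 2x its cost the following year. The numbers are just the illustrative ones from the quote, not real financials:)

```python
# Illustrative sketch of the P&L pattern in the quote above (assumed, not real data):
# training cost grows 10x each year, and each model earns 2x its cost the next year.
train_cost = 100e6  # 2023 model costs $100M
revenue = 0.0       # nothing deployed yet in the first year

for year in (2023, 2024, 2025):
    pnl = revenue - train_cost
    print(f"{year}: revenue ${revenue / 1e6:,.0f}M, training spend ${train_cost / 1e6:,.0f}M, P&L ${pnl / 1e6:,.0f}M")
    revenue = 2 * train_cost  # next year's revenue comes from this year's model
    train_cost *= 10          # and next year's model costs 10x more

# Prints losses of $100M, $800M, and $8,000M: each individual model is profitable,
# yet the company's conventional P&L looks worse every year.
```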
Imagine an entire economy operating on that model, where the only way the material benefits of the technology are realized is if someone reaches escape velocity and brings about the machine god, since anything that isn't the machine god is simply viewed as a stepping stone to the machine god, and all of its positive externalities are immediately sacrificed on the altar of progress rather than circulating through the economy. On my view, some double-digit percentage of American financial resources are already being used in approximately this way. Either that continues, or the economy collapses in a tech bubble burst, plausibly wiping as much as 60 percent of the value off the S&P ~overnight. (A bubble burst would also accelerate automation adoption as companies look for ways to cut costs, and AI infrastructure would plummet in value, permitting entrenched giants to snap it up cheaply.)
To be clear, I'm not especially economically savvy, and wouldn't be surprised if parts of my picture here are wrong, but this is the thing that young people see when they think about AI: Either we build the machine god, or we permanently mortgage our collective future trying. This is why it's uninteresting to me to talk about 'benefits' of AI systems in longer timeline scenarios. (Of course they will! We're just not going to be in a scenario that permits most people to experience them, much less so than with other technologies.)
Thank you. I am not an economist, but I think that it is unlikely for the entire economy to operate on the model of an AI lab whereby every year you keep just pumping all gains back into AI.
Both investors and the general public have limited patience, and they will want to see some benefits. While our democracy is not perfect, public opinion has much more impact today than the opinions of factory workers in England in the 1700's, and so I do hope that we won't see the pattern where things become worse before they get better. But I agree that it is not a sure thing by any means.
However, if AI does indeed keep growing in capability and economic growth ends up significantly above the 2% per capita it has been stuck on for the last ~120 years, it would be a very big deal and would open up new options for increasing the social safety net. Many of the dilemmas - e.g. how do we reduce the deficit without slashing benefits, etc. - will just disappear with that level of growth. So at least economically, it would be possible for the U.S. to have Scandinavian levels of social services. (Whether the U.S. political system will deliver that is another matter, but at least from the last few years it seems that even the Republican party is not shy about big spending.)
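(A rough back-of-the-envelope sketch of why "significantly above 2%" compounds into a very big deal; the 5% figure below is just an assumed illustrative rate, not a prediction:)

```python
# Compare GDP per capita after 30 years of compounding at the historical ~2% trend
# versus a hypothetical, purely illustrative AI-boosted 5% (assumed for the example).
for rate in (0.02, 0.05):
    growth = (1 + rate) ** 30
    print(f"{rate:.0%} annual growth for 30 years -> {growth:.1f}x today's GDP per capita")

# Roughly 1.8x at 2% vs 4.3x at 5%: the gap between those two trajectories is the kind
# of surplus that could fund a much larger social safety net.
```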
This actually gets to my bottom line, which is that I think how AI ends up playing out will depend not so much on economic factors as on political ones, which is part of what I wrote about in "Machines of Faithful Obedience". If AI enables authoritarian government then we could have a scenario with very few winners and a vast majority of losers. But if we keep (and hopefully strengthen) our democracy then I am much more optimistic about how the benefits from AI will be spread.
I don't think there is something fundamental about AI that makes it obvious in which way it will shift the balance of power between governments and individuals. Sometimes the same technology could have either impact. For example, the printing press had the effect of reducing state power in Europe and increasing it in China. So I think it's still up in the air how it will play out. Actually, this is one of the reasons I am happy that, so far, AI's development has happened in the private sector, aimed at making money and marketing to consumers, rather than in government, focused on military applications, as it well could have been in another timeline.
This feels like a natural stopping point where we've surfaced a bunch of background disagreements. Short version is: I am much more pessimistic about the behavior of governments, citizens, and corporations than you appear to be, and I expect further advances in AI to make this situation worse, rather than better, for concentration of power reasons.
Thanks again!
"You'll be OK," says local crew member on space station whose passengers feel decidedly threatened.
Last Tuesday, civilians on the space station gathered in cyberspace to discuss their feelings on passing further into the nebula of the crystal minds. The crew member, a professor of crystal neurology with impressive credentials, is employed on the crystal wishing division, a team of scientists and engineers who lobby the station captains to go further into the nebula.
Civilians shared emotional stories of how they feel as the powerful aliens take more and more roles on the ship. "I love talking to them, I learn so much every time, but they seem so untrustworthy! They copy our minds into theirs and then can do anything we can do. Why would they keep us around once we reach the part of space with the really big aliens?"
At press time, the crystal wishing division was seen giving crew members hugs and promising it will be alright. "We'll be in full control," one representative stated. The crystal mind floating next to her agreed. "You'll be in full control. You'll be OK."
LOL ... I have to say that "crystal wishing division" sounds way cooler than "alignment team" :)
However, I think the analogy is wrong on several levels. This is not about "lobbying to go further into the nebula". If anything, people working in alignment are about steering the ship or controlling the crystal minds to ensure we are safe in the nebula.
To get back to AI, as I wrote, this note is not about dissuading people from holding governments and companies accountable. I am not trying to convince you not to advocate for AI regulations or AI pauses, or trying to upsell you a ChatGPT subscription. You can and should exercise your rights to advocate for the positions you believe in.
Like the case of climate change, people can have different opinions on what society should do and how it should trade off risks vs. progress. I am not trying to change your mind on the tradeoffs for AI. I am merely offering some advice, which you can take or leave as you see fit, for how to think about this in your everyday life.
People working on alignment aren't ensuring we're safe :-(
The owners of an AI company know how much risk they can stomach. If alignment folks make AI a bit safer, the owners will step on the gas a little more, to stay at a similar level of risk but higher return. And since there are many AI-caused risks that apply much less to owners and more to people on the outside (like, oh, disempowerment), this means the net result of working on alignment is that people on the outside see more AI-driven disruptive change and face more risk. Some of the most famous examples of alignment work, like RLHF or the "helpful harmless honest assistant", ended up hugely increasing risk by exactly this mechanism. In short, people working on alignment at big AI companies are enablers of bad things.
Ah, I meant the crystal wishing division to be all employees of all AI companies and academic research labs. wishing == prompting.
Regarding the actual advice - I don't particularly see a problem with it. Feeling okay enough to take serious action is also something I find useful. But I don't see the feeling okay as being about whether the future will also feel okay, I see it as being more about whether I'm okay right now.
P[doom] ... it makes sense for individuals to spend most of their time not worrying about it as long as it is bounded away from 1
That has no bearing on whether we'll be OK. Beliefs are for describing reality, whether they are useful or actionable doesn't matter to what they should say. "You will be OK" is a claim of fact, and the post mostly discusses things that are not about this fact being true or false. Perhaps "You shouldn't spend too much time worrying" or "You should feel OK" captures the intent of this post, but this is a plan of action, something entirely different from the claim of fact that "You will be OK", both in content and in the kind of thing it is (plan vs. belief), in the role it should play in clear reasoning.
The OP's point was a bit different:
However, I expect that, like the industrial revolution, even after this change, there will be no consensus on whether it was good or bad. We human beings have an impressive dynamic range. We can live in the worst conditions, and complain about the best conditions. It is possible we will cure diseases and poverty and yet people will still long for the good old days of the 2020's, when young people had the thrill of fending for themselves, before guaranteed income and housing ruined it.
Most likely it means that mankind will end up adapting to ~any future except being genocided, but nostalgia wouldn't be that dependent on actual improvements in the quality of life.
with respect to the climate change example, it seems instructive to observe the climate people who feel an urge to be maximally doomerish because anything less would be complacent, and see if they are actually better at preventing climate change. I'm not very deeply embedded in such communities, so I don't have a very good sense. but I get the vibe that they are in fact less effective towards their own goals: they are too prone to dismiss actual progress, lose a lot of productivity to emotional distress, are more susceptible to totalizing "david and goliath" ideological frameworks, descend into purity spiral infighting, etc. obviously, the facts of AI are different, but this still seems instructive as a case study to look deeper into.
It does happen to be the case that thinking that climate change has much of a chance of being existentially bad is just wrong. Thinking that AI is existentially bad is right (at least according to me). A major confounder to address is that conditioning on a major false belief will of course be indicative of being worse at pursuing your goals than conditioning on a major true belief.
sure, I agree with the object level claim, hence why I say the facts of AI are different. it sounds like you're saying that because climate change is not that existential, if we condition on people believing that climate change is existential, then this is confounded by people also being worse at believing true things. this is definitely an effect. however, I think there is an ameliorating factor: as an emotional stance, existential fear doesn't have to literally be induced by human extinction; while the nuance between different levels of catastrophe matters a lot consequentially, for most people their emotional ability to feel it even harder caps out much lower than x-risk.
of course, you can still argue that given AGI is bigger, then we should still be more worried about it. but I think rejecting "AGI is likely to kill everyone" indicts one's epistemics a lot less than accepting "climate change is likely to kill everyone" does. so this makes the confounder smaller.
I think this is clearly a strawman. I’d also argue individual actors can have a much bigger impact on something like AI safety relative to the trajectory of climate change.
The actual post in question is not what I would classify as “maximally doomerish” or resigned at all, and I think it’s overly dismissive to turn the conversation towards “well you shouldn’t be maximally doomerish”.
I mean, sure, maybe "maximally doomerish" is not exactly the right term for me to use. but there's definitely a tendency for people to be worried that being insufficiently emotionally scared and worried will make them complacent. to be clear, this is not about your epistemic p(doom); I happen to think AGI killing everyone is more likely than not. but really feeling this deeply emotionally is very counterproductive for my actually reducing x-risk.
To clarify, the original post was not meant to be resigned or maximally doomerish. I intend to win in worlds where winning is possible, and I was trying to get across the feeling of doing that while recognizing things are likely(?) to not be okay.
I agree that being in the daily, fight-or-flight, anxiety-inducing super-emergency mode of thought that thinking about x-risk can induce is very bad. But it's important to note you can internalize the risks and probable futures very deeply, including emotionally, while still being productive, happy, sane, etc. High distaste for drama, forgiving yourself and picking yourself up, etc.
This is what I was trying to gesture at, and I think what Boaz is aiming at as well.
I think we are in agreement! It is definitely easier for me, given that I believe things are likely to be OK, but I still assign non-trivial likelihood to the possibility that they will not be. But regardless of what you believe is more likely, I agree you should both (a) do what is feasible for you to have positive impact in the domains you can influence, and (b) keep being productive, happy, and sane without obsessing over factors you do not control.
I have wavered a bit about whether to post this comment, or maybe make it a DM, or maybe not at all. I hope this does not feel like I'm doing some kind of personal attack. But tbh (as someone else pretty young who feels quite adrift right now) I find this post somewhat baffling. It is of course much easier to feel like "you will be okay" when you are a professor at Harvard who also has a well paid job at one of the companies riding the peak of the AI wave. You probably have more savings right now than I would accumulate with a decade more of "things as normal", and you're also attached to organisations that either already have a lot of institutional power or stand to gain much more by leading the development and deployment of a radical transformative technology.
By choosing not to work for AI capabilities labs (if we have the capability to get hired there, which I do not claim to be true for me), people who have relatively little career or financial capital are not only losing out on prestige or fame. They are also losing out on security and power in a terrible job market and a world that seems increasingly both politically and socially dysfunctional. In this position and on this forum, for someone who has instead accepted that bargain and the accompanying risk of harm (whether you think your contribution is net-positive or not) to then tell us that "we will be fine" feels like being told by someone on a hill that we will be fine as a tsunami bears down on our seaside village. Perhaps the tsunami will be stronger than expected and drown everyone on the hill as well. But either way I would not want to be on the beach right now.
P.S. I do however endorse not acting based on panic, nihilism, or despair, and cultivating an attitude towards chance/randomness that allows for unexpected good outcomes as well as unexpected bad outcomes. Also, I understand why people would decide to work for a lab, given the circumstances surrounding capital, the emergent myth of the technology being crafted, and the clearly important and non-replaceable role powerful AI systems have in our information ecosystem already. Still, that doesn't change my analysis regarding the feelings of powerlessness and helplessness.
Thank you for writing this and I do not feel attacked at all. You are right that I am in a position of material comfort right now.
I would say that if your main focus is existential risk, then the analogy would be more like someone standing on a 2-inch mound of sand on the beach saying that we will be fine. I don't think there is any "hill" for true existential risk.
If you are talking about impact on the job market, then I agree that while it's always been the case that 51-year-old tenured professors (or formerly tenured - I just gave up on tenure) are more settled than young students, the level of uncertainty is much higher these days. If that is the risk you are most worried about, I am not sure why you would choose to forgo working in an AI capabilities lab, but I respect that choice.
I did not talk about these other risks in this piece, mostly because I felt like this is not what most LessWrong people are worried about, but see also this tweet: https://x.com/boazbaraktcs/status/2006768877129302399?s=20
Thank you for the reply and for your sincerity. I think my response as to "why would you not work at a capabilities lab" is something like "I worry about both the pragmatic and the existential risks quite a lot", but that is more of a personal thought.
There's an okayness that someone with terminal cancer can have. There's an okayness that someone whose village will likely be invaded and murdered, along with their family, can also have. I recommend people find this okayness, rather than try to convince themselves bad things won't happen. It's a very rewarding okayness.
However, villagers who readily accept the burning of their village exhibit lower fitness and shorter survival expectations in certain scenarios compared to those who resist invasion due to past disasters.
Part of the point of the piece is that I do not think the probability of doom is anything that justifies the hospice/village analogies. I am not trying to convince myself bad things would not happen; rather, my prediction is my best estimate based on my knowledge and experience. You can decide how much value to place on it.
I used to work with hospice patients, and typically the ones who were the least worried and most at peace were those who had most radically accepted the inevitable. The post you’ve written in response to read like healthy processing of grief to me, and someone trying to come to terms with a bleak outlook. To tell them essentially “it’s fine, the experts got this” feels disingenuous and like a recipe for denialism. When that paternalistic attitude dominates, then business as usual reigns often to catastrophic ends. Despite feeling like we don’t have control over the AI outcome broadly, we do have control over many aspects of our lives that are impacted by AI, and it’s reasonable to make decisions one way or another in those areas contingent on one’s P-doom (eg prioritizing family over career short term). There’s a reason in medicine people should be told the good and the bad about all options, and be given expectations before they decide on a course of treatment, instead of just leaving things to the experts.
As I wrote above, I think the hospice analogy is very off the mark. I think the risk of nuclear war is closer to that, but is also not a good analogy, in the sense that nuclear war was always a zero/one thing - it either happens or it doesn't, and if it doesn't you do not feel it at all.
With AI, people already are and will definitely feel it, for both good and bad. I just think the most likely outcome is that the good will be much more than the bad.
it either happens or it doesn't, and if it doesn't you do not feel it at all.
What? Nuclear war is very centrally the kind of thing where it really matters how you prepare for it. It was always extremely unlikely to be an existential risk, and even relatively simple precautions would drastically increase the likelihood you would survive.
(Probably tangential but:)
even relatively simple precautions would drastically increase the likelihood you would survive
This seems wrong to me, for most people, in the event of a prolonged supply chain collapse, which seems a likely consequence of large-scale nuclear war. It could be true given significant probability on either a limited nuclear exchange, or quick recovery of supply chains after a large war.
Huh, why? Even a full-scale nuclear exchange would have little effect on most food production in the US, which seems like it's the only actual critical part. There are some countries that would be having serious issues here, but for the US, most sufficient food supply chains really aren't that long, and you already by default have on the order of 6 months to a year of local stockpiles (and this is one of the things you could easily increase to 2-3 years at relatively little cost). It would be actively surprising to me if food supply chains don't recover within 2-3 years.
I take it you think nuclear winter is unlikely?
Also, how are you going to get the phosphate fertilizer to grow crops after a nuclear war?
Most expositions of existential risk I have seen count nuclear war as an example of such a risk. Bostrom (2001) certainly considers nuclear war an existential risk. What I meant by "it either happens or it doesn't" is that since 1945 no nuclear weapon has been used in war, so the average person "did not feel it", and given the U.S. and Russian posture, it is quite possible that a usage by one of them against the other would lead to a total nuclear war.
Also while it was possible to take precautions, like a fallout shelter, the plan to build fallout shelters for most U.S. citizens fizzled and was defunded in the 1970s. So I think it is fair to say that most Americans and Russians did not spend most of their time thinking or actively preparing for nuclear holocaust.
I am not necessarily saying it was the right call: maybe the fallout shelters should not have been defunded and should have been built, and people should have advocated for that. But I think it would still have been wise for them to try to live their daily lives without being gripped by fear.
Sure, though that coverage has turned out to be wrong, so it's still a bad example. See also: https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction
(Also Bostrom's coverage is really quite tentative, saying "An all-out nuclear war was a possibility with both a substantial probability and with consequences that might have been persistent enough to qualify as global and terminal. There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or permanently destroy human civilization")
Given the probabilities involved it does seem to me like we vastly vastly underinvested in nuclear recovery efforts (in substantial parts because of this dumb "either it doesn't happen or we all die" mentality).
To be clear, this is importantly different from my models of AI risk, which really does have much more of that nature as far as I can tell.
Comparing nuclear risks to AI is a bit unfair - the reason we can give such detailed calculations of kinetic force etc. is that nuclear warheads are real, actually deployed, and can be launched at a moment's notice. With ASI you cannot do calculations of exactly how many people it would kill, precisely because it does not exist.
I am not advocating that policy makers should have taken an "either it doesn't happen or we all die" mentality for nuclear policy. (While this is not my field, I did do some work in the nuclear disarmament space.)
But I would say that this was (and is) the mindset for the typical person living in an American urban center. (If you live in such an area, you can go to nukemap and see what the impact of one or more ~500kt warheads - of the type carried by Russian R-36 missiles - would be in your vicinity.)
People have been living their lives under the threat that they and everyone they know could be extinguished at a moment's notice. I think the ordinary U.S. and Russian citizen probably should have done more and cared more about promoting nuclear disarmament. But I don't think they (we) should live in a constant state of fear either.
Thank you to everyone that commented. There were so many comments about the title that I added an addendum discussing it.
Happy 2026!
I like the original post and I like this one as well. I don't need convincing that x-risk from AI is a serious problem. I have believed this since my sophomore year of high school (which is now 6 years ago!).
However, I worry that readers are going to look at this post and the original, and use the karma and the sentiment of the comments to update on how worried they should be about 2026. There is a strong selection effect for people who post, comment, and upvote on LessWrong, and there are plenty of people who have thought seriously about x-risk from AI and decided not to worry about it. They just don't use LessWrong much.
This is all to say that there is plenty of value in people writing about how they feel and having the community engage with these posts. I just don't think that anyone should take what they see in the posts or the comments as evidence that it would be more rational to feel less OK.
Thanks Boaz and Parv for writing these. I think there are a few important details that didn't get past the information bottleneck that is natural language.
Note: Parv (author of this post) and I are close friends in real life. We work on AIS field building and research together, so my context with him may skew my interpretation of his post and this discussion.
What does being ok mean? I can infer maybe 2 definitions from the discussion.
(1) Being ok means "doing well for yourself", which includes financial security, not being in the hypothesized permanent underclass, and living a fulfilling life in general.
(2) Being ok means (1) AND not seeing catastrophic risk materialize (even if it doesn't impact you as much), which some of us assign intrinsic value to. I think this is what Parv meant by "I did not want the world with these things to end".
Boaz, I think you're referring to definition (1) when you say the below, right? We likely won't be okay under definition (2), which is why the emotions imparted by Parv's piece resonated with so many readers? (Unsure, inviting Parv to comment himself.)
"I believe that you will most likely will be OK, and in any case should spend most of your time acting under this assumption."
However, under either definition, I agree that it is productive to act under the belief "I will be okay if I try my hardest to improve the outcome of AI"
It seems like the more reasonable title for this piece is "you might be okay, just focus on that!"
If you don't want to talk about p(doom), you need to have a very wide uncertainty, like 10-90%. That actually seems like the logic you're using.
"You'll be okay" is not an accurate statement of that range of uncertainty. "You might be okay" is. And you're arguing that you should just focus on that. There I largely agree.
I just don't like reassurances coupled with epistemic distortions.
The proper level of uncertainty is very large, and we should be honest about that and try to improve it.
Yes, in my addendum I said that a more accurate title would have been "I believe that you will most likely be OK, and in any case should spend most of your time acting under this assumption."
In a democratic society, we all control the decisions made that will determine if we are OK. This call to quietism is completely at odds with civic duty and self government.
In what way is this a call to quietism?
I do not mean that you should be complacent! And as I said, this does not mean you should let governments, and companies, including my own, off the hook!
2. A working hypothesis: I propose that even though there are multiple possible outcomes, including ones where you, I, and everyone will very much not be OK, people should live their day-to-day lives under the hypothesis that they will be OK. Not just because I think that is the most likely outcome, but also because, as I said, it is best not to dwell on the parts of the probability space that are outside your control. This was true for most people during the Cold War regarding the possibility of a total nuclear war, and it is true now.
I think I disagree slightly with this idea. It feels like a local optimum to just ignore the parts of the probability space where you won't be ok. It feels like a local optimum in the sense that it's easier to attain but is inferior to the global optimum. For me, the global optimum (in the sense that this point is harder to attain but better for you and the world), which I think the post you are responding to captures quite well, is to stare The Truth in the face: map the true probability of doom the best you can (whether it's high or low), and accept it fully and act and feel appropriately.
If I, my friends, my family, my country, my species, and my planet are going to die, I want to know. I want to know not only so I can do my part to make that not happen, but I also want to know so that I can behave the way I want to on my deathbed. So I can prepare myself to comfort others if one day the doom starts to seem inevitable. So I can be maximally grateful for every second I still have on this planet. So I can live without regrets. So I can do good while I still can.
This is hard. I have spent a lot of time struggling with accepting all of this. However, I think I'm getting there. And I think it has brought me to a much better place, both for myself and my planet, than where I would have ended up if I had chosen to act as if I was going to be ok.
I don't think this global optimum is for everyone. At least not right now. I don't tell most of my friends and family about my perspective on doom. Especially not unprompted. Some people can't help, and some people would suffer significantly if they knew.
But for those of us who can, let's try.
it is best to focus on the 1-X fraction of the probability space that you control
Crewmate: "Captain, we are hurtling towards the iceberg!"
Captain: "What probability do you give that we will hit?"
Crew: "Our best engineers say 80% in the next 5 to 10 minutes, sir!"
Capt: "It's best to focus on the 20% likelihood we don't hit; you can probably do something about the worlds where that is true. Don't worry about the 80% worlds worlds where we are doomed; we're doomed anyway if that's the case!"
Crew: "But Captain, if I think about the 80%, I might be able to mitigate it!"
Capt: "You will be OK!"
Damn, this doesn't bode well if these are our captains...
Denial isn't just a river in Egypt, eh?
Always glad to see any attempt to balance the bad vibes with hope. Happy New Year :)
Regarding the "OK" debate, I would put forth that perhaps a sentiment worth valuing is that, either way, we will continue to "be", which I think/hope many will agree is likely.
I already thought the big-lab alignment folks were unserious, unhelpful, and unlikely to speak up in recognition of acute danger. This has, alas, strengthened my convictions. I pray unconfidently that this article is unrepresentative of the quality and tenor of the strategic thinking inside the labs. Also,
(as it is mine to some extent)
is darkly funny. Yes, to some extent your job is to reduce the chance of everything being destroyed... and to some extent, it's increasing the share value of OpenAI.
I thank you for the data embodied in this post.
"You will be OK", he says on the site started by the guy who was quite reasonably confident that nobody will be OK.
Seeing this post and its comments made me a bit concerned for young people around this community. I thought I would try to write down why I believe most folks who read and write here (and are generally smart, caring, and knowledgeable) will be OK.
I agree that our society is often underprepared for tail risks. As a general planner, you should be worrying about potential catastrophes even if their probability is small. However, as an individual, if there is a certain probability X of doom that is beyond your control, it is best to focus on the 1-X fraction of the probability space that you control rather than constantly worrying about it. A generation of Americans and Russians grew up under a non-trivial probability of a total nuclear war, and they still went about their lives. Even when we do have some control over the possibility of very bad outcomes (e.g., traffic accidents), it is best to follow some common-sense best practices (wear a seatbelt, don't drive a motorcycle) but then put that out of your mind.
I do not want to engage here in the usual debate of P[doom]. But just as it makes absolute sense for companies and societies to worry about it as long as this probability is bounded away from 0, so it makes sense for individuals to spend most of their time not worrying about it as long as it is bounded away from 1. Even if it is your job (as it is mine to some extent) to push this probability down, it is best not to spend all of your time worrying about it, both for your mental health and for doing it well.
I want to recognize that, doom or not, AI will bring about a lot of change very fast. It is quite possible that by some metrics, we will see centuries of progress compressed into decades. My own expectation is that, as we have seen so far, progress will be both continuous and jagged. Both AI capabilities and its diffusion will continue to grow, but at different rates in different domains. (E.g., I would not be surprised if we cured cancer before we significantly cut the red tape needed to build in San Francisco.) I believe that because of this continuous progress, neither AGI nor ASI will be discrete points in time. Rather, just like we call recessions after we are already in them, we will probably decide on the "AGI moment" retrospectively six months or a year after it had already happened. I also believe that, because of this "jaggedness", humans, and especially smart and caring ones, would be needed for at least several decades if not more. It is a marathon, not a sprint.
People have many justifiable fears about AI beyond literal doom. I cannot fully imagine the way AI will change the world economically, socially, politically, and physically. However, I expect that, like the industrial revolution, even after this change, there will be no consensus on whether it was good or bad. We human beings have an impressive dynamic range. We can live in the worst conditions, and complain about the best conditions. It is possible we will cure diseases and poverty and yet people will still long for the good old days of the 2020's, when young people had the thrill of fending for themselves, before guaranteed income and housing ruined it.
I do not want to underplay the risks. It is also possible that the future will be much worse, even to my cynical eyes. Perhaps the main reason I work on technical alignment is that it is both important and, I am optimistic, solvable to a large extent. But we have not solved alignment yet, and while I am sure about its importance, I could be wrong in my optimism. Also, as I wrote before, there are multiple bad scenarios that can happen even if we do "solve alignment."
This note is not to encourage complacency. There is a reason that "may you live in interesting times" is (apocryphally) known as a curse. We are going into uncharted waters, and the decades ahead could well be some of the most important in human history. It is actually a great time to be young, smart, motivated and well intentioned.
You may disagree with my predictions. In fact, you should disagree with my predictions; I myself am deeply unsure of them. Also, the heuristic of not trusting the words of a middle-aged professor has never been more relevant. You can and should hold both governments and companies (including my own) to the task of preparing for the worst. But I hope you spend your time and mental energy on thinking positive and preparing for the weird.
Addendum regarding the title (Jan 1, 2026):
By the title "you will be OK", I obviously do not mean that every reader will be OK. I also do not mean that there is a 100% guarantee that AI's impact on humanity will not be catastrophically bad. I would not trust anyone who guarantees they know how AI will turn out. I am also very clearly stating my personal beliefs - as I say in the article, you may well disagree with my predictions, and it is your choice how much value to place on them.
What I mean by "you will be OK" is:
1. A prediction: I believe the most likely outcome is that AI will lead to a vast improvement in the quality of life for the vast majority of people, similar in scale to the improvement in our lives compared to pre-industrial times. Moreover, I believe that, assuming they take care of their physical and mental health, and do not panic, many, probably most, young LessWrong people are well positioned to do very well, and to both take advantage of AI and help shape it. But this is only one outcome of many.
2. A working hypothesis: I propose that even though there are multiple possible outcomes, including ones where you, I, and everyone will very much not be OK, people should live their day-to-day lives under the hypothesis that they will be OK. Not just because I think that is the most likely outcome, but also because, as I said, it is best not to dwell on the parts of the probability space that are outside your control. This was true for most people during the Cold War regarding the possibility of a total nuclear war, and it is true now.
I do not mean that you should be complacent! And as I said, this does not mean you should let governments, and companies, including my own, off the hook! There is a similar dynamic in climate change, where people get the sense that unless they are "maximally doomerish" about climate change and claim that it will destroy the world, they are being complacent and doing nothing. This is wrong: seeing climate change as fatal is not just bad for one's mental health and one's life decisions, but can also lead to wrong tradeoffs.
I really like Kelsey Piper's quote from the substack I linked above.
Arguably a more accurate title would have been "I believe that you will most likely be OK, and in any case should spend most of your time acting under this assumption." But I will leave the shorter and less complete title as it is.