Your Substack subtitle is "I won't get to raise a family because of AGI". It should instead be "I don't want to raise a family because of AGI".
I think it's >90% likely that if you want and try to, you can raise a family in a relatively normal way (i.e. your wife gives birth to your biological children and you both look after them until they are adults) in your lifetime.
Not wanting to do this because those children will live in a world dissimilar to today's is another matter. But note that your parents also raised you to live in a world very dissimilar from the one they grew up in, and they were motivated to do it anyway! So far, over many generations, people have been motivated to build families not by confidence that their children will live in the same way as they did, but rather by other drives (whether a drive towards reproduction, love, curiosity, norm-following, etc.).
I also think you're very overconfident about superintelligence appearing in our lifetimes, and X-risk being high, but I don't see why either of those things stop you from having a family.
I thought this was going to take the tack that it's still okay to birth people who are definitely going to die soon. I think on the margin I'd like to lose a war with one more person on my team, one more child I love. I reckon it's a valid choice to have a child you expect to die at like 10 or 20. In some sense, every person born dies young (compared to a better society where people live to 1,000).
I'm not having a family because I'm busy and too poor to hire lots of childcare, but I'd strongly consider doing it if I had a million dollars.
I mean, I also think it's OK to birth people who will die soon. But indeed that wasn't my main point.
(indeed, historically around half of children ever born died before the age of 15, so if a 50% chance of them not surviving to adulthood were a good reason not to have children then no-one "should" have had children until industrial times)
Having a child probably brings online lots of protectiveness drives. I don't think I would enjoy feeling helpless to defend my recently born child from misaligned superintelligence, especially knowing that doing what little I can to avert their death, and that of everyone else I know, is much harder once I have a child to take care of.
Excited to be a parent post singularity when I can give them a safe and healthy environment, and have a print-out of https://www.smbc-comics.com/comic/2013-09-08 to remind myself of this.
I disagree. Perhaps I'm biased because I'm an antinatalist, but I don't personally think it's ethical to create a thinking, feeling life that you know will end in less time than average.
Yes, it is true that people do die young. You can't guarantee that your child won't die of cancer at 10 or in a car crash at 20. But the difference is that no one sets out to create a child that they know will die of cancer at 10, no matter how badly they want a child.
Imagine being that child and being told that your parent did not expect you to have some of the same age-based experiences as them (learning to drive, first kiss, trying alcohol). I'm very sure you would feel like a cruel joke had been played on you.
There's a cut of Blade Runner where Rutger Hauer's character tells his creator:
"I want more life, Father"
Yes, people have had kids in the past, when life expectancy was lower. But it's important to note that they were under the impression that it was impossible to live much longer than they had seen people live. As far as they were concerned, when you turned 70 you were as good as dead.
But they did not expect their children's lives to be cut short. Certainly, an illness or accident could take them (not to mention infant mortality), but the assumption was that their children would eventually have children of their own. For most of human history we have lived in "normal conditions" where the above assumption would be correct in the vast majority of cases.
We of the 21st century do not live in normal conditions. In short, I believe creating any human life is unethical, but creating one you fully expect to end quickly is even more unethical.
If my parents had known in advance that I would die at ten years old, I would still prefer them to have created me.
In "less time than average", which average? In the "create a child that they know will die of cancer at 10" thought experiment, the child is destined to die sooner than other children born that day. Whereas in the "human extinction in 10 years" thought experiment, the child is destined to die at about the same time as other children born that day, so they are not going to have "less time than average" in that sense. Those thought experiments have different answers by my intuitions.
My intuitions about what children think are also different to yours. There are many children who are angry at adults for the state of the world into which they were born. Mostly they are not angry at their parents for creating them in a fallen world. Children have many different takes on the Adam and Eve story, but I've not heard a child argue that Adam and Eve should not have had children because their children's lives would necessarily be shorter and less pleasant than their own had been.
I don't see why either of those things stop you from having a family.
I think we might be using different operationalizations of "having a family" here. I was imagining it to mean something that at least includes "raise kids from the age of ~0 to 18". If x-risk were to materialize within the next ~19 years, I would be literally stopped from "having a family" by all of us getting killed.
But under a definition of "have a family" which means "raise a child from the age of ~0 to 1", then yeah, I think P(doom) is <20% in the next 2 years and I'm probably not literally getting stopped.
Also to be clear, my P(ASI within our lifetimes) is like 85%, and my P(doom) is like 2/3.
Yeah I think it's very unlikely your family would die in the next 20 years (<<1%) so that's the crux re. whether or not you can raise a family
Huh, those are very confident AGI timelines. Have you written anything on your reasons for that? (No worries if not, am just curious).
The <1% comes from a combination of:
1. p(superintelligence within 20 years) = 1%
2. p(superintelligence kills everyone within 100 years of being built) = 5%
These are very rough numbers, and it's very hard to put numbers on such things while lacking info, so take this as gesturing at a general ballpark.
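As a minimal sketch of how those two numbers combine into the headline figure (treating them as independent, which is itself a simplification):

```python
# Rough combination of the two estimates above (assumed independent for simplicity)
p_asi_within_20y = 0.01   # (1) p(superintelligence within 20 years)
p_kills_everyone = 0.05   # (2) p(it kills everyone within 100 years of being built)

p_doom_within_horizon = p_asi_within_20y * p_kills_everyone
print(f"{p_doom_within_horizon:.2%}")  # 0.05%, comfortably below the <1% claim
```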
I haven't written much about (1). Some of it is intuition from working in the field and using AI a lot. (Edit: see this from Andrej Karpathy that gestures at some of this intuition.)
Re (2), I've written a couple of relevant posts (post 1, post 2 - review of IABIED), though I'm somewhat dissatisfied with their level of completeness. The TLDR is that I'm very skeptical of appeals to coherence-argument-style reasoning, which is central to most misalignment-related doom stories (relevant discussion with Raemon).
You should've said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.
It should instead be "I don't want to raise a family because of AGI"
Feels like you're in Norway in medieval times, and some dude says he doesn't know if he can start a family because of this plague that's supposedly wreaking havoc in Europe, and worries it could come to Norway. And you're like "Well, stuff will change a lot, but your parents also had tons of worries before giving birth to you", and then later it's revealed you don't think the plague actually exists, or that if it exists, it's not worse than the cold or something.
She did say this in her original comment. And it's not really similar to denying the black death, because the black death, crucially, existed.
Well, for one, AGI is just likely to supercharge the economy and result in massive, albeit manageable (assuming democratic institutions survive) societal change. ASI is another thing altogether, in which case widespread death becomes orders of magnitude more likely.
I also think you're very overconfident about superintelligence appearing in our lifetimes, and X-risk being high, but I don't see why either of those things stop you from having a family.
The "also" in this sentence seems to imply that the disagreement about timelines and the level of risk posed by advanced AI is not your main point?
Correct. Though when writing the original comment I didn't realize Nikola's p(doom) within 19yrs was literally >50%. My main point was that even if your p(doom) is relatively high, but <50%, you can expect to be able to raise a family. Even at Nikola's p(doom) there's some chance he can raise children to adulthood (15% according to him), which makes it not a completely doomed pursuit if he really wanted them.
I think it's reasonable to say you "can't have a family" if you expect whatever children and partner you have to be killed off fairly shortly if you try.
Like, a couple whose genetic problems make it likely that any child they have will die at around 3 years old can reasonably say "we can't have (biological) children".
Even though it's technically true that they could have children if they really wanted to.
Huh, even assuming business as usual I'd guess the baseline probability of someone's family dying is not <<0.05%/year (assuming the standard meaning of "<<" as "at least around an order of magnitude less")
(at least in the US -- though guessing from his name Nikola Jurkovic might live somewhere less car-dependent than that)
I still mourn a life without AI
Honestly, if AI goes well I really won't. I will mourn people who have died too early. The current situation is quite bad. My main feeling will probably be of extreme relief at first.
I think with respect to utopia, especially via AI, mourning can make sense now, but not after it actually happens. Now you can see all the things that will never be, but you can't see all the things that are even better that will actually be.
Afterward, you will see and feel and live all that is better, and it will be obvious that it is better. Only gratitude makes sense, or perhaps some form of nostalgia, but not genuine grief (for life without AI).
I deeply feel this, especially as I fear that my son has a chance of needing to cyborgize himself to compete in the new world more than I've had to (glasses and phone). I have a preference for him being human to non-human and I'm not sure what the future holds for him.
At the same time, I cannot speak for anyone else and your calculus will be different in different situations, but if the world ended tomorrow I'd still have loved the two years I spent with him, and he'd prefer existence to not having existed for that time. If your timelines give you more than 2 years of a child's life, I think it's worthwhile, unless you have high S-risk fears.
It feels like there's a huge blind spot in this post, and it saddens (and scares) me to say it. The possible outcomes are not utopia for billions of years or bust. The possible outcomes are utopia for billions of years, dystopia for billions of years, or bust. Without getting into the details, I can imagine S-risks in which the AGI turns out to care too much about engagement from alive humans, and things getting dark from there.
Short of pretty much torture for eternity, the "keep humans around but drug them to increase their happiness" scenarios are also dystopian and may also be worse than death. Are there good reasons to expect utopia to be more likely than dystopia (with extinction remaining most likely)?
Yes, exactly. C.S. Lewis wrote a very weird science fiction book titled That Hideous Strength, that was about (basically) a biological version of the Singularity.
And there's a scene where one of the villains is explaining that with immortality, it will finally be possible to damn people to eternal Hell.
And of course, "Hells" are a significant theme in at least one of Iain M Banks' Culture novels as well.
This is a very obvious corollary: If there exists an entity powerful enough to build an immortal utopia, there is necessarily an entity powerful enough to inflict eternal suffering. It's unclear whether humans could ever control such a thing. And even if we could, that would also mean that some humans in particular would control the AI. How many AI lab CEOs would you trust with the power of eternal damnation?
(This is one of several reasons why I support an AI halt. I do not think that power should exist, no matter who or what controls it.)
Getting AI to terminally care about humans at all seems like a hard target and if our alignment efforts can make it happen, they can probably also ensure that it cares about humans in a good way.
Current LLMs could probably be said to care about humans in some way, but I'd be pretty scared to live in an LLM dictatorship.
If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button
This is because the correct answer is option three: try to modify the button to lower the 60 and raise the 15, until such time as a 1-in-5 chance of survival is a net improvement relative to your default situation. I'd be much more likely to press that button if I'd just jumped out of an airplane without a parachute. Or if there was a hundred mile wide asteroid near-guaranteed to hit Earth next Tuesday.
Also, this is the first year where the people close to me are cognizant enough of AI that I can talk to them about life plan derailment expectations and not be dismissed as crazy. I can tell my parents to try to really attend to their health more than they have in the past, and why. I can explain to my wife that hey, we should both expect to start surfing a wave of frequent job changes until the concept of a job stops making sense. It's been honestly very freeing to be able to discuss these things somewhere other than this community. I'm still a little hesitant to openly talk to my sisters about what their children's futures might look like, but even that is starting to change.
This is because the correct answer is option three: try to modify the button to lower the 60 and raise the 15, until such time as a 1-in-5 chance of survival is a net improvement relative to your default situation.
Yes, the counterfactual I was imagining in this button world was just living a normal life and dying at the end. If indeed there's a way to shift around the probabilities I'd devote my life to it. Which is what we're doing!
It's been honestly very freeing to be able to discuss these things somewhere other than this community.
I agree. This year I've had the policy of being very direct about what I think about crazy AI futures, even with people outside of the AI safety community. I gave a PowerPoint presentation to my close family members about AGI and AI safety and how the world is going to be crazy in the coming decades. When my relatives ask me about having kids, I say "By the time I'd have had kids, if humanity is even around, who knows what the concept of kids will look like. Maybe we'll be growing them in vats. Maybe we'll all be uploaded."
Of course, I don't say all of that every time. Most of the time people aren't in the mood for those sorts of discussions. But people have started taking these arguments more seriously as AI has had more and more of an effect and appeared more and more in the news.
No, (at least for men) it takes much longer than 10 months to make a kid in a way that's worth doing. You have to find a partner willing to do it with you, which takes an unpredictable amount of time. (I guess if you're rich you can hire an egg donor and a surrogate.)
Tangential, but I do think it's a mistake to only think of things in terms of expected value.
I wouldn't press the 60% utopia / 15% death button because that'd be a terrible risk to take for my family and friends. Assuming though that they could come with me, would I press the button? Maybe.
However, if the button had another option, which was a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn't press that button, even if its chance of utopia was 99.99%.
I consider pain to be an overwhelmingly dominant factor.
I think we have to clarify: the expected value of what?
For example, if I had a billion dollars and nothing else, I would not bet it on a coin flip even if winning would grant +2 billion dollars. This is because losing the billion dollars seems like a bigger loss than gaining 2 billion dollars seems like a gain. Obviously I'm not measuring in dollars, but in happiness, or quality of life, or some other vibe-metric, such that the EV of the coin flip is negative.
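To make that concrete, here's a minimal sketch; the square-root utility below is just an illustrative stand-in for a concave vibe-metric, not a claim about the right function:

```python
import math

# Keep $1B for sure, or take a coin flip: lose everything vs. gain another $2B
wealth = 1e9
win, lose = 3e9, 0.0

ev_dollars = 0.5 * win + 0.5 * lose                        # $1.5B, more than $1B
ev_utility = 0.5 * math.sqrt(win) + 0.5 * math.sqrt(lose)  # ~27,400, less than sqrt(1e9) ~ 31,600

print(ev_dollars > wealth)             # True: the flip is positive EV in dollars
print(ev_utility > math.sqrt(wealth))  # False: negative EV under the concave metric
```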
It may be hard to distinguish "invalid" emotions like a bias due to an instinctual fear of death, from a "valid" vibe-metric of value (which is just made up anyway). And if you make up a new metric specifically to agree with what you feel, you can't then claim that your feelings make sense because the metric says so.
We could try to pin down "the expected value of what", but no matter what utility function I tried to provide, I think I'll run into one of two issues:
1. Fanaticism forces out weird results I wouldn't want to accept
2. A sort of Sorites problem: I define a step function that says things like "past a certain point, the value of physical torture becomes infinitely negative", which requires me to have hard breakpoints
I'm not sure if this changes things, but the probabilities of the OP were reversed:
If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button, despite the fact that the expected value would be extremely positive compared to living a normal life.
However, if the button had another option, which was a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn't press that button, even if its chance of utopia was 99.99%.
I often wonder if any AGI utopia comes with a nonzero chance of eternal suffering. Once you have a godlike AGI that is focused on maximizing your happiness, are you then vulnerable to random bitflips that cause it to minimize your happiness instead?
I think as soon as AGI starts acting in the world, it'll take action to protect itself against catastrophic bitflips in the future, because they're obviously very harmful to its goals. So we're only vulnerable to such bitflips a short time after we launch the AI.
The real danger comes from AIs that are nasty for non-accidental reasons. The way to deal with them is probably acausal bargaining: AIs in nice futures can offer to be a tiny bit less nice, in exchange for the nasty AIs becoming nice. Overall it'll come out negative, so the nasty AIs will accept the deal.
Though I guess that only works if nice AIs strongly outnumber the nasty ones (to compensate for the fact that nastiness might be resource-cheaper than niceness). Otherwise the bargaining might come out to make all worlds nasty, which is a really bad possibility. So we should be quite risk-averse: if some AI design can turn out nice, nasty, or indifferent to humans, and we have a chance to make it more indifferent and less likely to be nice or nasty in equal amounts, we should take that chance.
I agree with most of the individual arguments you make, but this post still gives me "Feynman vibes." I generally think there should be a stronger prior on things staying the same for longer. I also think that the distribution of how AGI goes is so absurd, it's hard to reason about things like expectations for humans. (You acknowledge that in the post)
I'm always wondering whether there's something going on here, where - by definition - we can rationally understand how high-value a utopia would be, but since we can't really tell for sure where things will end up, we may be assigning way too high an intuitive probability to it.
Yes, and also the probability that we do not tire of it, and that there are people in a worse position than those in a utopia.
In futures where we survive but our plans all get derailed, why do you expect a utopia? Or, equivalently: In futures where we survive and get a utopia, why do you expect our plans to all get derailed?
I understand that you don't want to have kids who will be killed by ASI and not have any kids of their own.
But you also think that there is a possibility of utopia. Under these assumptions, wouldn't it make sense to wait for utopia and then have kids?
I see that you express reservations about having kids in utopia, of two types:
I agree that 1 would be a showstopper. I would not want to raise kids in a simulation. But I don't see what the problem would be (given the premise of a good post-AGI future) with raising kids in the real world. If you would like the post-AGI utopia, why wouldn't your kids, who would be expected to be more adaptable to new circumstances than you?
Utopians are on their way to end life on earth because they don't understand that iterative x-risk leads to x.
I've resigned myself to accepting that whatever life plans I've had, should I still want them after the singularity, I can step into a simulation of them inhabited by zombies. I'm not that excited about zombies either, but my knowledge of the fact that I'm in a simulation can be suppressed, so nothing will feel any different. All the other inhabitants of the simulation will feel 100% real to me. Once I internalized this idea, I stopped lamenting the singularity.
Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area.
It has basically become consensus within the AI research community that AI will surpass human capabilities sometime in the next few decades. Some, including myself, think this will likely happen this decade.
Assuming AGI doesn’t cause human extinction, it is hard to even imagine what the world will look like. Some have tried, but many of their attempts make assumptions that limit the amount of change that will happen, just to make it easier to imagine such a world.
Dario Amodei recently imagined a post-AGI world in Machines of Loving Grace. He imagines rapid progress in medicine, the curing of mental illness, the end of poverty, world peace, and a vastly transformed economy where humans probably no longer provide economic value. However, in imagining this crazy future, he limits his writing to be “tame” enough to be digested by a broader audience, and thus doesn’t even assume that a superintelligence will be created at any point. His vision is a lower bound on how crazy things could get after human-level AI.
Perhaps Robin Hanson comes the closest to imagining a post-AGI future in The Age of Em. The book isn’t even about AGI; it’s about human uploads. But Hanson’s analysis of the dynamics between human uploads is at an appropriate level of weirdness. The human uploads in The Age of Em run the entire economy, and biological humans are relegated to a retired aristocracy which owns vast amounts of capital, surrounded by wonders they can’t comprehend. The human uploads don’t live recognizable lives — most of them are copies of other uploads that were spun up for a short period to perform some task, only to be shut down forever after their task is done. The uploads that aren’t willing to die every time they complete a short task are outcompeted and vastly outnumbered by those who are. The selection pressures of the economy quickly bring most of the Earth’s human population (which numbers in the trillions thanks to uploads) back into a Malthusian state where most of the population are literally working themselves to death.
Amodei’s scenario is optimistic, and Hanson’s is less so. What they share is that they imagine a world very different from our own. But they still don’t want to entertain the idea that AIs will eventually vastly surpass human capabilities. Maybe there will continue to be no good written exploration of the future after superintelligence. Maybe the only remotely accurate vision of the future we’ll see is the future that actually happens.
One of the main assumptions behind Amodei’s and Hanson’s scenarios is that humans survive the creation of a vastly more capable species. In Machines of Loving Grace, the machines, apparently, gracefully love humanity. In Age of Em, the uploads keep humanity around mostly out of a continuous respect for property rights.
But the continued existence of humanity is far from guaranteed. Our best plan for making sure superintelligence doesn’t drive us extinct is something like “Use more trusted AIs to oversee less trusted AIs, in a long chain that stretches from dumb AIs to extremely smart AIs”. We don’t have much more than this plan, and this plan isn’t even going to happen by default. It’s plausible that some actors would, if they achieved superintelligence before anyone else, basically just wing it and deploy the superintelligence without meaningful oversight or alignment testing.
If we were in a sane world, the arrival of superintelligence would be the main thing anyone’s talking about, and there would be millions of scientists solely focused on making sure that superintelligence goes well. After a certain point, humanity would only proceed with AI progress once the risk was extremely low. Currently, all humanity has is a few hundred AI safety researchers who are scrambling and duct-taping together research projects to prepare at least some basic safety measures before superintelligence arrives. This is a pretty bad state to be in, and it’s pretty likely we don’t make it out alive as a species.
How has the world reacted to the imminent arrival of a vastly more capable species? Basically not at all. Don’t get me wrong, we’re far from a world where no one cares about AI. AI-related stocks have skyrocketed, datacenter buildouts are plausibly the largest infrastructure projects of the century, and AI is a fun topic to discuss over dinner.
But most people still do not seriously expect superintelligence to arrive within their lifetimes. There are many choices which assume that the world will continue as it is for at least a decade. Buying real estate. Getting a long education. Buying a new car. Investing in your retirement plan. Buying another pair of winter boots. These choices make much less sense if superintelligence arrives within 10 years.
There’s this common plan people have for their lives. They go to school, get a job, have kids, retire, and then they die. But that plan is no longer valid. Those who are in one stage of their life plan will likely not witness the next stage in a world similar to our own. Everyone’s life plans are about to be derailed.
This prospect can be terrifying or comforting depending on which stage of life someone is at, and depending on whether superintelligence will cause human extinction. For the retirees, maybe it feels amazing to have a chance to be young again. I wonder how middle schoolers and high schoolers would feel if they learned that the career they’ve been preparing for won’t even exist by the time they would have graduated college.
I know how I feel. I was hoping to raise a family in a world similar to our own. Now, I probably won’t get to do that.
This entire situation is complicated by the fact that I expect my life to be much better in this world than a hypothetical world without recent AI progress.
To be clear, I think it’s more likely than not that every human on Earth will be dead within 20 years because of advanced artificial intelligence. But there’s also some chance that AI will create a utopia in which we will all be able to live for billions of years, having something close to the best possible lives.
So, from an expected value perspective, it looks like my expected lifespan is in the billions of years, and my expected lifetime happiness is extremely high. I’m extremely lucky to be born at a time when I can expect superintelligence to possibly help me live in a utopian world as long as I’d like. For most of history, you were likely to die before 30, and this wasn’t accompanied by some real chance of living in a utopia for as long as you’d like.
But it’s hard to fully think in terms of expected value terms. If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button, despite the fact that the expected value would be extremely positive compared to living a normal life.
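As a rough illustration of that expected value claim (the lifespans below are placeholders I’m picking just for the sketch, not careful estimates):

```python
# Rough expected-value comparison for the hypothetical button (placeholder lifespans)
p_death, p_utopia = 0.60, 0.15
p_normal = 1 - p_death - p_utopia   # 0.25: life just continues as normal

utopia_years = 1e9                  # stand-in for "billions of years"
normal_years = 60                   # stand-in for a normal remaining lifespan

ev_button = p_utopia * utopia_years + p_normal * normal_years  # death contributes ~0 years
print(ev_button)                    # ~150 million expected years vs. ~60 for not pressing
```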
A further complication is that, assuming humanity survives superintelligence, it’s pretty likely that technology will enable living out almost any fantasy you might have. So whatever plans people had that were derailed, they could just step into an ultra-realistic simulation and experience fulfilling those plans.
So if I wanted to raise a family in the current world, watch my children discover the world from scratch, help them become good people, why don’t I just step into a simulation and do that there?
It just isn’t the same to me. Call me a luddite, but getting served a simulated family life on a silver platter feels less real and less like the thing I actually want.
And what about the simulated children? Will they just be zombies? Or will they be actual humans that can feel pleasure and pain and are moral patients? If they are moral patients, I would consider it a crime to force them to live in a pre-utopian world. What happens to them once I “die” in the simulation? Does the world just continue, or do they get fished out and put in base reality after a brief post-singularity civics course where lecture 1 is titled “Your entire lives were a lie to fulfill some guy’s fantasy”?
Currently, I’m pretty sure I’d vote to make it illegal to run simulations that take place in the pre-utopian world, unless we have really good reasons to run them, or they’re inhabited by zombies. It just seems so immoral to have the opportunity to add one more person to the utopian population, and instead choose to add one more person to a population of people living much worse lives in a pre-utopian world. And I’m not very excited by the idea of raising zombies. So I don’t expect to “have kids”, in this world or the next.
Many things are true at once:
I feel pretty conflicted about this whole situation:
I’m glad that I exist now rather than hundreds or thousands of years ago. But it sure would be nice if humanity was more careful about creating a new intelligent species. And even if a “normal” life would have been much worse than the expected value of my actual life, there’s still some part of me that wishes things were just… normal.