If you had asked me at any point in my adult life (until recently) whether I wanted to have children eventually, I would've said yes, without hesitation. In recent years I've been telling myself: I don't know how likely these AI doom predictions are, but I'm going to focus on optimizing the "long path" because that's where my decisions actually matter - and so I should still have children just in case.

But now, both as I'm nearing the family-forming stage in my life, and as the AI timeline seems to be coming into sharper focus, I'm finding it emotionally distressing to contemplate having children.

If AI kills us all, will my children suffer? Will it be my fault for having brought them into the world while knowing this would happen? Even if I think we'll all die painlessly, how can I look at my children and not already be mourning their death from day 1? If I were to die right now, I would at least have had a chance to live something like a fulfilling life - but the joy of childhood seems inextricable from a sense of hope for the future. Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?


9 Answers

cata

Dec 14, 2022


I have a toddler, a couple thoughts:

If I were to die right now, I would at least have had a chance to live something like a fulfilling life - but the joy of childhood seems inextricable from a sense of hope for the future.

I don't agree with this at all. I remember being a happy child and the joy was all about things that were happening in the moment, like reading books and playing games. I didn't think at all about the future.

Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?

I think having a happy childhood is just good and nothing about maybe dying later makes it bad.

But now, both as I'm nearing the family-forming stage in my life, and as the AI timeline seems to be coming into sharper focus, I'm finding it emotionally distressing to contemplate having children.

I'm not going to claim that I know what's in your mind, since I don't know anything about you. But from the outside, this looks exactly like the same emotional dynamic that seems to be causing a lot of people to say that they don't want to have kids because of climate change. I agree with you that AI risk is scarier than climate change. But is it more of a reason to not have kids? It seems like this "not having kids" conclusion is a kind of emotional response people have to living in a world that seems scary and out of control, but I don't think that it makes sense in either case in terms of the interest of the potential kids.

Finally, if you are just hanging out in community spaces online, the emotional sense of "everyone freaking out" is mostly just a feedback loop where everyone starts feeling how everyone else seems to be feeling, not about justified belief updates. Believe what you think is true about AI risk, but if you are just plugging your emotions into that feedback loop uncritically, I think that's a recipe for both unnecessary suffering and bad decisions. I recommend stepping back if you notice the emotional component influencing you a lot.

gjm

Dec 14, 2022


If AI kills us all, will my children suffer?

My feeling is that in most AI-kills-us-all scenarios, the AI kills us all quickly.

Will it be my fault for having brought them into the world while knowing this would happen?

You don't know that this will happen, so no. Arguably it will be your fault for having brought them into the world while knowing it might happen -- but we all already know that if we have children they are likely to die eventually, and that they might suffer any quantity of the slings and arrows of outrageous fortune. Those of us who have children generally either haven't tried to weigh the good against the bad or else have decided that the good outweighs the bad; it is not obvious to me that the risk of AI catastrophe makes a big difference to that calculation, but obviously what you think about that will depend on how likely you think the various possible kinds of catastrophe are.

Even if I think we'll all die painlessly, how can I look at my children and not already be mourning their death from day 1?

I'm sorry to be the bearer of bad news, but any children you have will most likely die anyway in the end, AI or no AI. When I look at my daughter, or at anyone else I care about, I am not "already mourning their death" (unless maybe they are terminally ill) because, well, why should I be? There's plenty else about them to celebrate, plenty of things to pay attention to in the moment; why should their death be what I focus on, if it isn't imminent and if I can't do much about it?

Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?

Any notion of "meaning" that tells you that no one's happiness should be celebrated needs throwing out and replacing with something better.

A happy child is a happy child. Their happiness makes the world brighter. If they are in fact inevitably going to be dead five years from now, that is a sad fact but it doesn't nullify the value of their happiness now.

A question you haven't (explicitly) asked: Suppose you refrain from having children now, out of fear of AI catastrophe, and suppose that it turns out that there is no AI catastrophe in the near future. How would you feel about that?

I don't want to claim that you should definitely have children. Maybe you shouldn't. That depends (among other things) on how likely you actually think AI catastrophe is, and how you expect it to unfold if it happens. But I do think that, AI or no AI, catastrophe or no catastrophe, children or no children, you will likely be both happier and more effective in whatever you do if you are able to get past that sense of doom and distress.

Daniel Kokotajlo

Dec 15, 2022


I don't think there's a clear answer here. That said, the reason I don't have a second kid is that my timelines dropped from 30% by 2040 to 70%+ by 2040, and are now shorter still.  :'(

I think a big part of it is whether you are doing stuff to reduce AI risk, and whether having a kid would substantially impede your ability to do so. For me I think the answer is yes.

Vaniver

Dec 15, 2022


but the joy of childhood seems inextricable from a sense of hope for the future.

People have disagreed about whether this is true of the childhood experience, but I suspect it is true of the parenthood experience. Like, the difficulty of parenting is frontloaded, and the rewards (while they do start off high!) grow with time. You should try to estimate where the crossover point is, and then decide from there whether it makes sense to try to have kids now.

[Also, my guess is if you're going to have kids, the sooner the better, in part because it's more likely to pay off / rapid AI changes might require lots of attention, which will be less scarce the older your children are.]

Signer

Dec 15, 2022


Personally, I don't think AI timelines change the calculus much, because I never understood how anyone could have such a low threshold for a fulfilling life that they are happy with less than 100 years - until everyone is at least having their own billion-year adventure, it's obviously better to get by with fewer dead people. But children themselves may have their own altruistic preferences and be mostly fine with being born for a chance at the long path.

Ulisse Mini

Dec 15, 2022


How can I look at my children and not already be mourning their death from day 1?

Suppose you lived in the dark times, when children had a <50% chance of living to adulthood. Wouldn't you still have kids? Even if, probabilistically, smallpox was likely to take them?

If AI kills us all, will my children suffer? Will it be my fault for having brought them into the world while knowing this would happen?

Even if they don't live to adulthood, I'd still view their childhoods as valuable. Arguably higher average utility than adulthood.

Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?

Our lifetimes are currently bounded, are they false and devoid of all meaning?

The negentropy in the universe is also bounded, is the universe false and devoid of all meaning?

Suppose you lived in the dark times, when children had a <50% chance of living to adulthood. Wouldn't you still have kids? Even if, probabilistically, smallpox was likely to take them?

Just wanna add that each of your children individually having a 50% chance of surviving smallpox is different from all of your children together having a 50% chance of surviving AI (i.e. uncorrelated vs. correlated risk), so some people might decide differently in these two cases.
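A quick worked version of that difference (assuming two children for concreteness; the 50% figure is taken from the comment above): with independent smallpox risk, the chance that at least one child survives is

$$1 - (1 - 0.5)^2 = 0.75,$$

whereas with a fully correlated risk like an AI catastrophe, both survive or neither does, so the chance that at least one survives stays at 0.5. The correlated case removes the "at least one makes it" hedge that parents in the dark times implicitly had.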

Dagon

Dec 15, 2022


For millennia, parents have had children, knowing they will suffer many things and then die. For decades, many have expected it to happen sooner rather than later. Neither immortality nor certainty of a long life free of suffering was ever on the table, for anyone.

You haven't decided that your future life is negative-expectation, right?  You're expecting some joy, satisfaction, and good work/effort, along with the pain and eventual end.  Why would you expect your counterfactual child to prefer not to exist, for whatever time and experiences they might have?

I am having a kid this year. I also have worked for MIRI for about four years, and CFAR before that; I'm in the cluster that is approximately the doomiest of doom-forecasters.

I think some people have already mostly said what I want to say, in their own answers, but I wanted to add the strength of my ... money-where-my-mouth-is example?

Separately, some people have already pushed back on this, but I want to push back much harder:

but the joy of childhood seems inextricable from a sense of hope for the future

This is a super false-to-the-experience-of-most-children sentiment; it might've been true for you and it's probably true for nonzero children, but it is extremely wrong as a statement about children in general. I don't know what perspective it emerged from, but I'm fighting back ... feeling pretty offended, on behalf of young children (and I dunno how much I'm succeeding). It is very very very wrong, and it has harmful social effects, up to (for example) possibly being the swing vote that causes an entire person not to exist, when otherwise it would've seemed good to you to allow them to exist, but also including smaller things like delegitimizing the everyday experience of a child as being somehow fundamentally impoverished, lesser or less important than that of an adult.

"NO," in other words, to your last question.

A valid concern about whether-or-not-to-have-kids that I do think is timeline-adjacent is something like "there will be a lot of time/energy/attention/happiness consumed prior to it 'paying off' in a sufficient sense for both you and the kid."

Like, parents tend to dip pretty hard into a place that's sustainable for a year or three, but would be unsustainable/bad if it were "this is just what life is like for me now, forever."

So I think "I have a lot of weight on us being two years away from disaster" is a pretty solid argument for "okay, well, let's not spend those two years pregnant or with an infant."

stavros

Dec 15, 2022


At the outset, I'll say that the answer to "should you have kids?", in general, is probably no. I'll also say that I've seen/had this discussion dozens of times now and the result is always the same: you're gonna do what you want to do and rationalize it however you need to. The genes win this fight 9 times out of 10.

If you're rich (if you reasonably expect to own multiple properties and afford all of life's luxuries for the rest of your life), it's probably okay - you won't face financial ruin, and your children will be insulated from the worst of what's to come, probably.

(It's still insanely bad for the environment though.)

AI is just one of a long long long long list of horrible things we have to look forward to in the coming decades. Not even mentioning all the horrible things that are already here - being a kid is no picnic nowadays.

One of the podcasts I listen to, The Great Simplification by Nate Hagens, has interviewed dozens of people whose focus is on understanding the world and anticipating the future, and toward the end of each episode he asks each of them the same few questions - one of which is something like "what would you say to children today? What advice would you give?" - and almost without exception the people he interviews express remorse about the legacy they've left for their children and grandchildren: we're sorry, we tried, we failed, we're so so sorry.

If you have kids, which I expect you will, the one piece of advice I'd give you is to whole-ass it, maximum effort. Keep them away from the internet, teach them yourself, give them every opportunity to discover and create the relationships and passions that they will need to flourish, nurture them at every opportunity.

Karma: strong upvote; agreement: downvote. A score of approximately 1 seems reasonable for this comment to me, though I expect you'll be karma-downvoted again if you don't rephrase to be a bit kinder in verbal impact. I don't actually think you're wrong about the trajectory of things if we don't pull up. I think we can get out of the hole, but dang, we sure seem to be in one. Star Trek is not out of the question, but my take is things might get pretty bad before they get amazing.

8 comments

I strongly upvoted this post, not because I agree with the premises or conclusions, but because I think there is a despair that comes with inhabiting a community with some very vocal short-timeliners, and if you feel that despair, these are the sort of questions that you ask, as an ethical and intelligent person. But you have to keep on gaming it all the way down; you can't let the despair stop you at the bird's-eye view, although I wouldn't blame a given person for letting it anyway.

There is some chance that your risk assessment is wrong, which your probabilistic world-model should include.  It's the same thing that should have occurred to the Harold Camping listeners.    

The average AI researcher here would probably assign a small slice of probability mass to the set of nested conditions that would actually implicate the duty not to bring children into the world (a toy illustration of the arithmetic follows the list):

  1. AGI is created during your kid's life.
  2. AGI isn't safe.
  3. AGI isn't friendly, either.
  4. AGI makes the world substantially worse for humans during your kid's life. (Only here, in my opinion, do we have to start meaningfully engaging with the probabilities.)
  5. AGI kills all humans during your kid's life. (The more-lucid thinkers here see the AGI's dominant strategy as killing all humans ASAP while spending as few resources as possible. This militates for quick and unexpected, certainly quicker and more unexpected than COVID.)
  6. AGI kills all humans violently during your kid's life. (There's no angle for the AGI here, so why would it do it? Do you meaningfully expect that this might happen?)
  7. AGI kills all humans violently after torturing them during your kid's life. (Same, barring a few infamous cognitohazards.)
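A toy illustration of how quickly nested conditions shrink (the 0.5s below are made up for the arithmetic, not estimates): if each of the seven conditions held with probability 0.5 conditional on all the previous ones, the full conjunction would come to

$$0.5^7 \approx 0.008,$$

i.e. under 1%. The real conditional probabilities are of course contested; the point is only that conjunctive chains multiply down fast.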

From the individual perspective, getting nonviolently and quickly killed by an AGI doesn't seem much worse or better to me than suddenly dying because e.g. a baseball hit you in the sternum (except for the fleeting moment of sadness that humanity failed because we were the first Earth organism to invent AGI but were exactly as smart as we needed to be to do that and not one iota smarter).  There are background risks to being alive.

but the joy of childhood seems inextricable from a sense of hope for the future

That doesn't match my recollection of what the joy of childhood was like. To the extent that I can recall, a lot of the joy came specifically from the fact that I didn't think of the future, and most of the things that I enjoyed were ones that felt intrinsically meaningful to do in that moment. As an adult, there are lots of considerations of "do I have the time for this", "is this actually useful to do", etc.; as a child, none of that mattered - something being fun to do was all the reason I needed for doing it. (And I had a lot more fun as a result.)

I did have an abstract understanding that one day I'll be an adult, but especially pre-puberty, there was no real expectation for that because it felt so remote. It was a thing that I understood intellectually, but which felt utterly unreal and impossible to imagine emotionally.

This seems still relevant: https://astralcodexten.substack.com/p/please-dont-give-up-on-having-kids

The type of risk is different, but the considerations are not, or not much.

Tenoke

It seems quite different. The main argument in that article is that climate change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.

I’ve been in the rationalist community since 2011. I too am focused on the “long path”. And I’d say my timelines are pretty short. But I have two young kids and I do not regret it. In fact we’ll probably have a third one.

Don't despair; we're going to solve this. AI safety has recently made a series of breakthroughs we expected not to have, and my intuitive sense of the safety trajectory now thoroughly matches my capabilities expectations. The big capabilities teams are starting to really come around on what safety is all about. It's understandable to be worried, but I think you should live your life as if the only challenge is climate change.

Don't underestimate climate change, though. I'd hold off on having kids until it's clear that the carbon transition will actually occur, which seems to me to depend on fusion. I'd give it another year or two, personally. Up to you, of course. But I think we'll get AI tech under control and use it to push past the biochemistry and diplomacy challenges we need to solve to get the carbon back into the ocean and ground.

edit: I've gotten some karma downvotes - I'm not surprised almost no one agrees that we're going to solve it; it wouldn't feel like that from the inside right now. It seems like the same kind of objection the researchers causing short capabilities timelines have to short capabilities timelines. But I'm curious whether the karma downvotes imply I should have phrased this better, and if so, whether anyone would be willing to comment on how to improve my phrasing.

AI safety has recently made a series of breakthroughs we expected not to have

Please say more!

  1. Discovering agents (LessWrong) made interesting progress on the fundamental definition of agency that seems very promising to me
  2. people seem to be converging on a similar approach that engages directly and has clear promise
  3. the alignment research field is accumulating velocity among traditional research groups, which seem likely to be much more effective than theoretical research. Since the theoretical researchers seem to be agreeing with the empirical research, this seems promising to me
  4. I have semi-private views, argued sloppily here (may write a better presentation of this shortly) about what's tractable, from previous middling-quality capabilities work I did with jacob_cannell, that led me to believe we can guide models toward the grokkings we want more easily than it seems like we can with current models, and that work on formal verification will be able to plug into much larger models than it can now, once the larger models can reach stronger grokking capability levels. I've argued this one in some places, but I'd rather just let DeepMind figure it out for themselves and work on what they'll need in terms of objectives and verification tools when they figure out how to make more internally coherent models.
  5. I'm very optimistic about the general "LOVE in a simbox is all you need" (review 1, review 2; both are less optimistic than me) approach once the core of alignment is working well enough. I suspect this can be improved significantly by nailing down co-empowerment and co-protection in terms of mutual information and mutual agency preservation. That is what I'm actually vaguely working on.