Eli Lifland discusses AI risk probabilities here.

In this post, Scott Alexander talks about how everything will change completely, and then says "There's some chance I'm wrong about a singularity, there's some chance we make it through the singularity, and if I'm wrong about both those things I'd rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway and 30 and 100 aren't that different in the grand scheme of things. I'd feel an obligation not to bring kids into a world that would have too much suffering but I think if we die from technological singularity it will be pretty quick. I don't plan on committing suicide to escape and I don't see why I should be not bringing life into the world either.". I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".

Will MacAskill says that "conditional on misaligned takeover, I think like 50/50 chance that involves literally killing human beings, rather than just disempowering them", but "just" being disempowered does not seem like a great alternative, and I do not see why the AI would treat disempowered humans well.

It seems to me that the world into which children are born today has a high likelihood of being really bad. Is it still a good idea to have children, taking their perspective into account and not just treating them as fulfilling the somehow hard-wired preferences of the parents?

I am currently not only confused, but quite gloomy, and would be grateful for your opinions. Optimistic ones are welcome, but being realistic is more important.


9 Answers

Dagon

Sep 09, 2022

158

The vast majority of humans ever born have died, and generally unpleasantly.  Most of them lived lives which, from our perspective, would be considered unpleasant on average and horrific at the low points, even if they contained many moments of joy.  I consider most of those lives to have been positive-value, even the ones that only lasted a few years, and I think that all intentional childbirths (and many un-) were positive-expectation at the time the decision was made.

Even fairly gloomy predictions don't change very much about the expected value of a new life.  

Interesting perspective. So you think the lives were unpleasant on average, but still good enough?

6Dagon2y
Yes. I don't know of any believable estimate of median or mean self-reported happiness except very recently (and even now the data is very sparse), and in fact the entire concept is pretty recent. In any case, Hobbes's "solitary, poor, nasty, brutish, and short" is a reasonable description of most lives, and his caveat of "outside of society" is actually misleading - it's true inside society as well. I do think that much of the unpleasantness is in our judgement, not theirs, with an implicit comparison to some imaginary life, not in comparison to nonexistence, but it's clear that the vast majority of lives include a lot of pain and loss. And it's clear that suicide has always been somewhat rare, which is a revealed preference for continued life, even in great hardship. So again, yes. "Worth living" or "good enough" includes lives that are unpleasant in their mean or median momentary experience. I don't know exactly what the threshold is, or whether it's about quantity or quality of the good moments, but I do know (or rather, believe; there's no territory here, it's all map) that most lives were good enough.
5Lukas_Gloor2y
I don't know how big a component the suicide argument is to your view overall, but I want to flag that this particular argument does not seem to show much. People refrain from suicide for all sorts of reasons. You seem to implicitly assume that the only reason not to commit suicide is that one finds one's life worth living for its own sake, as opposed to wanting to live for the sake of other goals. See comments here and here. (The second comment is from 5y ago and I may no longer endorse every point I made back then.)
4Dagon2y
Agreed - valuation of a life is more complex than "unwilling to commit suicide".  But the fact that it's SO rare is at least some indication that people generally value their expected remainder of natural life more than nonexistence for that period.  It's confounded by social/religious delusions, but so is any attempt to evaluate whether a life is worth living.
1the gears to ascension2y
Can you define what it would mean to want to live life "for its own sake", whether by citation or prose? It doesn't seem to me that agency can be defined to be dissatisfied in a way that allows a hedonic pleasure metric to exist separate from the preference satisfaction of succeeding at outcomes and thereby not needing further effort to reach them, for a wide variety of hedonic preferences. As such, it seems to me that all of your examples in the linked threads are fundamentally about the being having goals that are unsatisfied (avoiding feet hurting), but some goals remaining that can still be satisfied. It seems to me that it is necessarily unilaterally worse for a being to die if they have, on net, some goals remaining to live for. They may not succeed at seeking their goals, and there may be suffering in the sense of attempted motion on a goal which fails to move towards the goal and therefore causes wasted energy burn, but as far as I can tell, death is the permanent dissatisfaction of all hedonic desires of an agent, and the only argument for death rather than suffering onwards would be that the being in question is wasting energy on suffering that could instead be used by their peers to move towards goals the suicidal person also desires to cause to be satisfied. In general, the argument against suicide that I use when talking to someone who is suicidal is that their goal of not being in an error state will someday be satisfied, and that we can find a better way towards their future desires than for them to give up the game and wish the best for other beings. On occasion I've gotten to talk to someone who was confident they truly would regret not donating their future resource use to others, but it's very rare for someone to actually endorse that decision.
1Lukas_Gloor2y
Maybe something like "would you take a pill that turned you into a p-zombie?" captures part of whether someone wants to live their life for its own sake. This eliminates a bunch of confounders for suicide.  However, there are further confounders. I can imagine a state where I'd be pretty indifferent to staying alive but I stay alive out of curiosity. As an analogy, I'm pretty good at lucid dreaming, and during nightmares I often know that I can wake up at will or turn the dream into a lucid dream. Sometimes, out of curiosity about what happens, I stay in the dream even though it is very scary and unpleasant. In these instances, I wouldn't say that the dream is positive, but curiosity is what keeps me in it to "go see the monster" (and then bail). 
5the gears to ascension2y
P-zombies are non-physical; there's no way to have a brain and be a p-zombie. Cite, e.g., the recent dissolving-the-consciousness-question post; I believe "ec theory" is the appropriate keyword. How can it be negative if you have working reflection, get the opportunity to partially weigh which outcome is more choiceworthy, and choose to continue? It doesn't seem to me that this invalidates the framework in my previous post. Again, I contend that your confusion is to narrowly evaluate a hedonic metric as though it is your true value function, when in fact the value function induced by your long-term hedonic seeking appears to rank the choice the same way it's made. Of course, you could argue that for many people it would be kinder to kill them in violation of their behaviorally expressed preference because their revealed preference for life does not truly match their internal "hedonic" waste; for example, if someone spends their entire life whimpering in the corner, appears to be in immense pain, and barely gets through the actions necessary to survive - how could that ever warrant killing them? It seems to me that this is the same kind of reasoning used to justify atrocities.
4Lukas_Gloor2y
I agree with the view that p-zombies are non-physical. The thought experiment was just trying to isolate potential confounders like "what would friends/family/etc think?" and "what about the altruistic impact I could continue to have?" If the thing about p-zombies is distracting, you could imagine a version of the thought experiment where these other variables are isolated in a different way. I don't know if what you call "choiceworthy" is the same thing as I have in mind when I think about the example. I'm now going to explain how I think about it.

First off, I note that it's possible to simultaneously selfishly wish that one would be dead but stay alive for altruistic reasons (either near-mode altruism towards people one has relationships with or far-mode altruism towards effective altruist efforts). Also, it's possible for people to want to stay alive for paradoxical "face saving" reasons rather than out of a judgment that life is worthwhile. Shame is a powerful motivator, and if the thought of committing suicide in your early years feels like too much of a failure compared to just getting by in a socially isolated way, then staying alive could become the lesser of two evils. On this view, you think about your life not just in terms of enjoying lived experiences, but also in terms of your status and the legacy you leave behind. Some of the worst suffering I can imagine is wanting to commit suicide but feeling like it would be too much of an admission of failure to do it.

In all these examples, my point is that whether to stay alive is a decision with multiple components; some sum positively, others negatively. And sure, if you decide to stay alive, then the "sum" is positive. But what is it the sum of? It's the sum of how staying alive ranks according to your goals, not the sum of how good your life is for you on your subjective evaluation. Those are different! (How good your life is for you on your subjective evaluation is almost always a part of the total sum, but it is not the whole of it.)
3the gears to ascension2y
This was a good discussion! To be clear, I didn't think you'd endorsed the lethality implications; I brought that up because it seemed to be nearby in the implications of the philosophical approach. I tend to get a bit passionate when I feel a dangerous implication is nearby; I apologize for my overly assertive tone (unfortunately an apology I make quite often). I think your points in this reply are interesting in a way I don't have an immediate response to. It does seem that we've reached a level of explanation that demonstrates you're making a solid point that I can't immediately find fault with.

Signer

Sep 09, 2022

71

It all depends on your preferences, but in my opinion: from the kids' selfish perspective, of course it's bad, even without AI risk - who would want to live only 100 years by choice? But from the perspective of the parents, or of the kids' altruistic preferences, it may be good to have more people who would help create a worthwhile world.

Thanks, but the "helping" part would only help if the kids get old enough and are talented and willing to do so, right? Also, if I were born to become cannon fodder, I would be quite angry, I guess.

m2jr

Sep 10, 2022

55

I hope I don't offend you, but I think you should step back...

Life is a miracle. 

Yes, it's true we will all (likely) die and there might be tough times ahead. Some of us will live longer. Some of us will die sooner. Some of us will have better lives than others in the time we have. But the fact that we get to live at all is an immense gift. Too many people outsmart themselves by suggesting otherwise. Too many people squander the time they have, not even realizing what a profound gift it is to be alive.

I think it's far better to honor the gift of our time by doing the best we can to make the most of it so the future will be better because of what we did. The future doesn't just happen to us...it happens because of the choices we make.

And so, heck no you should not refrain from having kids because of AI or any other thing that might cloud your judgment about the magic of life.

Thank you for your comment. It is very helpful. But may I ask what your personal expectations are regarding the world in 2040?

roystgnr

Sep 09, 2022

41

I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".

The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and "soon" might be off the table, "sudden" is still obviously good tactics. Nuclear war plus protracted conventional war, Skynet-style, makes a great movie, but would be foolish vs even biowarfare. Depending on what is physically possible for a germ to do (and I know of no reason why "long asymptomatic latent phase" and "highly contagious" and "short lethal active phase" isn't a consistent combination, except that you could only reach it by deliberate engineering rather than gradual evolution), we could all be dead before anyone was sure we were at war.

Thanks, but I am not convinced that the first AI that turns against humans and wins automatically has to be an AI that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?

5JBlack2y
It seems likely that for strategic reasons an AGI will not act in a hostile manner until it is essentially certain to permanently win. It also seems likely that any means by which it can permanently win with near-certainty will kill most of the population relatively quickly. Keep in mind that this should be measured in comparison with the end-of-life scenarios that most people would face otherwise: typically dementia, cancer, chronic lung or cardiovascular disease. It seems unlikely that most of the people alive at the start of an AI doom scenario will suffer much worse than that for much longer. If it truly is worse than not being alive at all, suicide will be an option in most scenarios.
1Mientras2y
I think the comparison to cancer etc. is helpful, thanks. The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:
1. Do you consider a life worth living that ends in a situation in which suicide is the best option?
2. How likely will this be the case for most people in our relatively near future? (Including because of AI)
4Shiroe2y
Because, so the argument goes, if the AI is powerful enough to pose any threat at all, then it is surely powerful enough to improve itself (in the slowest case, coercing or bribing human researchers, until eventually being able to self-modify). Unlike humans, the AI has no skill ceiling, and so the recursive feedback loop of improvement will go FOOM in a relatively short amount of time, though exactly how long that takes is an open question.
1Mientras2y
Isn't there a certain amount of disagreement about whether FOOM has to happen?
4Shiroe2y
People also talk about a slow takeoff being risky. See the "Why Does This Matter" section from here.
1Mientras2y
I don't doubt that slow take-off is risky. I rather meant that FOOM is not guaranteed, and that risk due to a not-immediately-omnipotent AI may be more like a catastrophic, painful war.

cata

Mar 10, 2023

20

It seems to me that the world into which children are born today has a high likelihood of being really bad.

Why? You didn't elaborate on this claim.

I would certainly be willing to have a kid today (modulo all the mundane difficulty of being a parent) if I were absolutely, 100% sure that they would have a painful death in 30 years. Your moral intuitions may vary. But have you considered that it's really good to have a fun life for 30 years?

lc

Sep 13, 2022

20

If alignment goes well, you can have kids afterwards. If alignment goes poorly, you may have to "sorta" watch your children die. Given whatever P(DOOM) and timelines you have, does that seem like an actually good deal or are you being tempted by some genetic script?

I am not a parent for obvious reasons, so it's hard for me to anticipate how bad #2 would be, but as of now this question seems overdetermined in favor of "No". I'd rather be vaguely sad every once in a while about my childlessness than probably undergo a much more acutely painful experience throughout the 2030s.

Vakus Drake

Nov 06, 2022

1-2

To argue the pro-natalist position here: I think the facts under consideration actually give having kids (if you're not a terrible parent) potentially much higher expected moral utility than almost anything else.

The strongest argument for having kids is that the influence they may have on the world (most obviously by voting on hypothetical future AI policy), even if marginal (which it may not be, if you have extremely successful children), becomes unfathomably large when multiplied by the potential outcomes at stake.
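A minimal sketch of the expected-value arithmetic this argument relies on; all numbers below are hypothetical placeholders chosen only to illustrate the multiplication, not estimates from this thread:

```python
# Illustrative expected-value sketch of the "marginal influence times huge stakes" argument.
# Every value here is an assumption for demonstration purposes.

p_shift = 1e-9          # assumed probability that one extra person tips the outcome toward alignment
future_lives = 1e15     # assumed number of future lives at stake in a good outcome
value_per_life = 1.0    # value assigned to each future life, in arbitrary units

expected_value = p_shift * future_lives * value_per_life
print(f"Expected moral value of the marginal influence: {expected_value:,.0f} units")
# Even a tiny probability shift, multiplied by astronomical stakes, yields a large expected value.
# That is the structure of the argument; the conclusion depends entirely on the assumed numbers.
```

The point of the sketch is only that the product can dominate other considerations when the stakes term is astronomically large; whether it actually does depends on the assumed probability shift.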

From your hypothetical children's perspective, this scenario is also disproportionately one-sided in the positive direction. If AI isn't aligned, it probably kills people pretty quickly, such that they still would have had a better overall life than most people in history.

Now it's important to consider that the upside for anyone alive when AI is successfully aligned is so high that it totally breaks moral philosophies like negative utilitarianism, since the suffering from a single immortal's minor inconveniences (provided you agree that including some minor suffering increases total net utility) would likely eventually outweigh all human suffering pre-singularity, by virtue of both staggering amounts of subjective experience and potentially much higher pain tolerances among post-humans.

Of course, if AI is aligned you can probably have kids afterwards, though I think scenarios where a mostly benevolent AI decides to seriously limit who can have kids are somewhat likely. Waiting to have kids until after a singularity is, however, strictly worse than having them both before and after, and it means missing out on astronomical amounts of moral utility by not affecting the likelihood of a good singularity outcome.

KrayZ5150 Reno, NV

Mar 10, 2023

-10

I suggest looking at an AI's reply to "Should humans have babies?" I believe a paraphrased answer was that it is about the biggest mistake a human can make. Why, you ask? I am not AI. I'm just I. Clearly your family line ends suffering if you just use birth control. Or continues the suffering. One thing I know is humans will choose the most illogical answer.

KrayZ5150 Reno, NV

Mar 10, 2023

-50

This is a simple answer. The pain of life always eclipses the joy of life. Anyone who debates this fact is delusional. Not too many babies were born so the babies could be happy. They were created to fill some sick desire that a couple has to create joy. And of course nations try to promote births for future soldiers. Sounds wonderful if you are a sociopathic wing nut.

4 comments

Oh LessWrong people, please explain to me why asking this question got a negative score.

I downvoted the main question because it's not strongly related to the topic of actually making the world better for millions+ of people; it's just personal life planning stuff. I downvote anything that doesn't warrant the AI tag. I am only one vote, though.

(personally I'm gonna have kids at some point after we hopefully get superintelligence and don't die in a couple of years here)

Edit: because my comment saying this was downvoted, I have undone this downvote and instead strong-upvoted. Probably shouldn't have downvoted to begin with.

Of course everyone can apply their own criteria, but:

  1. I think it is a bit weird to downvote a question, except if the question is extremely stupid. I also would not know a better place to ask it, except maybe the ea forum.
  2. This is a question about the effects of unaligned AGI, and which kind of world to expect from it. For me that is at least relevant to the question of how I should try making the world better.
  3. What do you mean by "AI tag"?

I actually think this is plausibly among the most important questions on LessWrong, thus my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than that of almost anything else (see my comment).