Correct. Though when writing the original comment I didn't realize Nikola's p(doom) within 19 years was literally >50%. My main point was that even if your p(doom) is relatively high but <50%, you can expect to be able to raise a family. Even at Nikola's p(doom), there's some chance he can raise children to adulthood (15%, according to him), which makes it not a completely doomed pursuit if he really wanted them.
I mean, I also think it's OK to birth people who will die soon. But indeed that wasn't my main point.
Yeah, I think it's very unlikely your family would die in the next 20 years (<<1%), so that's the crux re. whether or not you can raise a family.
By the time I'd have had kids
It only takes 10 months to make one…
Your Substack subtitle is "I won't get to raise a family because of AGI". It should instead be "I don't want to raise a family because of AGI".
I think it's >90% likely that if you want and try to, you can raise a family in a relatively normal way (i.e. your wife gives birth to your biological children and you both look after them until they are adults) in your lifetime.
Not wanting to do this because those children will live in a world dissimilar to today's is another matter, but note that your parents also raised you to live in a world very dissimilar from the one they grew up in, and were motivated to do it anyway! So far, over many generations, people have been motivated to build families not by confidence that their children will live in the same way they did, but rather by other drives (whether it's a drive towards reproduction, love, curiosity, norm-following, etc.).
I also think you're very overconfident about superintelligence appearing in our lifetimes and about X-risk being high, but I don't see why either of those things stops you from having a family.
My 2c: “vibe-coded” software is still often low-quality and buggy, and in this case the accusation of “slop!” is warranted. You can use AI to accelerate your coding >10x in many cases, but if you over-delegate, the results aren’t good (so far!).
Re. writing: I think even pre-LLMs, LLM-like writing would have been considered quite flawed by serious critics/stylists, but not by most people. Agree the fear-mongering/hysteria about a slopapocalypse is silly, though.
Height is not zero-sum. Being taller seems clearly better all else equal, apart from the (significant) fact that it carries health side effects (like cardiovascular risk). Otherwise, being taller means having a larger physical presence: all else equal, you can lift more, reach more, and see farther. Like, surely it would be worse if everyone were half their current height!
Yeah, that’s fair. But the lifestyle of an ~$850-a-month room in a group house isn’t that nice if you have many kids, so it makes sense that people benefit from more money to afford a nicer place.
And like, sure, you can get by on less money than some people assume, but the original comment imo understates how much you and your family benefit from more money (e.g. the use of “bewildered”).
I was thinking about US adults, but I’d guess it applies to LW readers and the world adult population also.
69% of US adults say they have children, and 15% do not but still want to (source).
The <1% comes from a combination of:
1. p(superintelligence within 20 years), very roughly 1%
2. p(superintelligence kills everyone within 100 years of being built), very roughly 5%
It's very hard to put numbers on such things while lacking info, so take these as gesturing at a general ballpark.
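To make the combination explicit, here's a minimal sketch of the arithmetic (my own illustration; it assumes the two numbers simply multiply, i.e. are treated as roughly independent, which the original comment doesn't spell out):

```python
# Illustrative combination of the two rough numbers above.
# Multiplying them assumes (approximate) independence, which is my
# simplification, not a claim from the original comment.

p_si_within_20y = 0.01   # 1. p(superintelligence within 20 years)
p_kills_given_si = 0.05  # 2. p(it kills everyone within 100 years of being built)

p_doom = p_si_within_20y * p_kills_given_si
print(f"p(doom) ~= {p_doom:.2%}")  # -> 0.05%, comfortably under 1%
```

The product lands around 0.05%, which is consistent with the "<<1%" mentioned earlier.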
I haven't written much about (1). Some of it is intuition from working in the field and using AI a lot. (Edit: see this from Andrej Karpathy, which gestures at some of this intuition.)
Re (2), I've written a couple of relevant posts (post 1, post 2 - a review of IABIED), though I'm somewhat dissatisfied with their level of completeness. The TLDR is that I'm very skeptical of appeals to coherence-argument-style reasoning, which is central to most misalignment-related doom stories (relevant discussion with Raemon).