A lot of the rationalist discourse around birthrates doesn't seem to square with AGI predictions.
Like, the most negative predictions, of AGI destroying humanity in the next century or two, leave birthrates completely negligible as an issue. The positive predictions leave a high possibility of robotic child-rearing and artificial wombs within the next century or two (considering the progress even we puny humans have already made), which also makes natural birthrates irrelevant, because we could just make and raise more humans without needing human parents to birth them.
And concerns like, say, elderly consumption outnumbering young workers' production don't hold up well if one also believes that the upcoming AIs are going to make most work and labor obsolete to begin with. If pretty much everything is done by robots, then the issue of not having enough young labor pretty much ceases to exist.
I don't know if this is just the result of hedging against the possibility that this sort of AGI doesn't happen, or if people just haven't thought it all through much.
A lot of the rationalist discourse around birthrates doesn't seem to square with AGI predictions.
I think whenever I've seen people worry about birth rates, they either have long AI timelines or haven't engaged much with AGI predictions at all.
Curious if you have counter-examples of people who think AGI is coming soon and that low birthrates are an issue.
Yes, and I have a major example: one of the leading CEOs in the AI industry. He believes that AI will be more intelligent than all humans currently alive by 2030, while also saying birthrates are a top priority for all countries.
But birth rates aren't so bad that they're ending civilization anytime soon. Even if every nation were at South Korea levels, in 200 years we'd still have >2 billion people. If he believes AI will be more intelligent than all humans combined in less than a decade, then why worry about something centuries out, with easy solutions presented by the very thing coming in just four years?
Yeah, fair enough. I don't count Musk as a rationalist rationalist. He's just very confused about anything that doesn't give you real-world feedback quickly. He's a weird case of a human who is exceptional in a ≤4-year time horizon and then has median human thinking abilities for anything beyond that. (Though noteworthy that he has taken no steps towards technical solutions for the birth rate issue, which, ya know, revealed preferences…)
If I were to steelman… hm. If every nation were at South Korea levels, in 200 years we'd likely be back to pre-industrial-revolution levels of technological & industrial development, because a lot of the tacit knowledge required to keep civilization going is stored in people's heads and can't be squeezed into fewer of them. Furthermore, a declining population leaves less slack for R&D, because everyone is busy caring for mostly-nonproductive elders instead of innovating. This might plausibly start by 2050 or so, so unless one has fairly long AI timelines, it's a non-issue.
I guess any AI pause that goes that far out has a similar issue, unless we allow for genetic engineering+exowombs to proliferate (and even then it feels like a toss-up to me, bracketing more AI progress).
A simple steelman is something like "if we're very wrong about A[G/S]I, then birth rates are a big issue, so we better invest some resources into it, in case we're wrong".
This steelman is a valid position to have, but it's not a good steelman in this context, because attributing this view to people like Musk is probably quite a stretch (and probably also to the other people the OP is referring to, but I'm not tracking that kind of stuff, so unsure).
I guess any AI pause that goes that far out has a similar issue
If the issue to be fixed is just population (growth), then yeah.
But if we're fine with a lower population[1], without the demographic collapse degrading everything the way it would by default, then gradual development and adoption of "sub-AGI" AI could automate a bunch of stuff in a way that roughly keeps pace with the declining population. (Assuming you meant "AGI-ward progress pause", which maybe you didn't.)
I'm not in favor, but I also wouldn't consider it a great tragedy if we can intervene away the consequences.
A simple steelman is something like "if we're very wrong about A[G/S]I, then birth rates are a big issue, so we better invest some resources into it, in case we're wrong".
This would be understandable if it weren't for the timelines here. Let's say AGI takes ~10x as long (40 years instead of 4, from the 2026 date), and the drop to a few billion people (which, to note, is just the population of the 1900s) happens in 100 years instead of 200; that would be 2066 vs. 2126.
Even being absurdly generous on the timelines, it's still not even close! That suggests to me a very shaky confidence that AGI will actually happen, unless one believes that a superintelligence smarter than all humans put together wouldn't be able to help with the birthrate.
Edit: Actually, I'm messing up the math a little because I'm mixing up scenarios and hypotheticals. If the whole world had South Korea-level fertility, the population would be much lower than a few billion in 100 years. But that's already the unrealistic worst-case scenario; the world's population overall is still growing, and estimates put it around 10 billion by 2084, which is still two decades past even the 10x-stretched AGI prediction.
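For concreteness, here is a minimal back-of-envelope sketch of the kind of projection being traded here: a pure geometric model assuming ~30-year generations and replacement fertility of about 2.1, ignoring age structure, population momentum, migration, and mortality changes. The starting population and fertility figures are rough assumptions of mine, so treat the outputs as order-of-magnitude only.

```python
# Toy geometric projection: each generation is (TFR / replacement TFR) times
# the size of the previous one. Ignores age structure, momentum, migration,
# and mortality changes, so outputs are order-of-magnitude only.

REPLACEMENT_TFR = 2.1    # assumed fertility needed to hold population roughly constant
GENERATION_YEARS = 30.0  # assumed length of one generation

def project_population(start_pop: float, tfr: float, years: float) -> float:
    """Population after `years`, shrinking or growing geometrically per generation."""
    generations = years / GENERATION_YEARS
    return start_pop * (tfr / REPLACEMENT_TFR) ** generations

world_now = 8.1e9  # rough current world population (assumption)

# Worst case discussed above: the whole world at South Korea-like fertility (~0.75).
for years in (100, 200):
    print(f"TFR 0.75, {years} years: ~{project_population(world_now, 0.75, years):,.0f}")

# A milder scenario closer to current rich-country fertility (~1.5).
print(f"TFR 1.5, 100 years: ~{project_population(world_now, 1.5, 100):,.0f}")
```

Under these toy assumptions the worst case lands in the hundreds of millions after a century and the single-digit millions after two, i.e. far below a few billion (the direction of the correction above), while the milder scenario stays in the billions.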
Though noteworthy that he has taken no steps towards technical solutions for the birth rate issue
I mean, as individual men in the West go, having fourteen children is pretty above-average, and he seems to have gotten the process down to a science. He's not a Saudi oil baron with three digits of offspring, but he's certainly taken a Silicon Valley approach to it.
Pro-natalists, in general, seem to take a 'lead-by-example' tack, which isn't horrible, considering that it demonstrates an understanding of the consequences of materially encouraging people who wouldn't otherwise want kids to have them. I'd also say that none of the proposed policy approaches currently taken seriously by major governments have demonstrated much, if any, success, so "lead by example" would seem to be the default.
I wonder why he hasn't tried to clone himself. His younger twins would be likely to have similar priorities once they've grown up. Probably technical and legal hurdles.
Isn't the rumour that he has many IVF+embryo-selected kids with different women? (Is there a better source for this?)
Wiki only says:
Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.
Yes, and I have a major example: one of the leading CEOs in the AI industry. He believes that AI will be more intelligent than all humans currently alive by 2030, while also saying birthrates are a top priority for all countries
Why pin this one (notably crazy-seeming) guy's take on "A lot of the rationalist discourse"? He doesn't identify as a rationalist or post on LessWrong. And the rationalist discourse has long thought that his impact models about AI were bad and wrong (e.g. that founding OpenAI made the situation dramatically worse, not better).
There's calculated rational "what if we're wrong" hedging, but then there's ... holding out hope? (I'm not claiming it's rational; I'm trying to articulate the psychology.) To conclude "AGI is coming, no point in having children" amounts to betting on death, giving up on believing in a human future. It kind of makes sense that an evolved creature would be inclined to cling to the belief in a future and live as if it were true despite evidence to the contrary; as irrationalities go, it's quite adaptive.
Who are you talking about? It seems to me that the people who are majorly concerned about AGI destroying humanity are almost entirely disjoint from the people majorly concerned about falling fertility leading to a population collapse.
I definitely believe that there's some overlap, but not like more than 5% of either group.
I think it’s just people compartmentalizing, trying to hold onto normalcy by acting and even thinking as if the AI is not going to come and change everything soon.