Please tell me if I'm understanding this correctly. The main arguments are:
From these two arguments, it follows that AI will have an incentive to keep human population numbers low or at zero. If my understanding to this point is correct, what do you believe humans should do now, given these assumptions?
I don't agree that the ChatGPT list is less actionable; notice that all six scenarios listed are designed so that humans wouldn't be aware of what was happening while slowly ceding control.
For my own students (I'm getting certified to teach high school business courses), I recommend Sophia Learning (https://www.sophia.org/) as a way to get a head start on gen ed college courses.
I can imagine that building more housing as a solution to low fertility could pick up rhetorical steam in the coming years, but all the same barriers will likely still be in place (NIMBYism, lobbying by landlords, status quo bias, etc.).
- Money.
- Insecurity about money.
- Not being able to afford kids or the house to raise them in.
My gut reaction is that these are more perceptions than real obstacles. There's a strong perception that one needs to reach a certain dollar amount or level of wealth before having children. Changing that perception first would probably help fertility more than simply paying per child.
"Home is where the heart is."
I thought this meant something like: home is where your longing is (your metaphorical heart), the place you yearn for the most. Now I think it may simply mean that home is wherever your physical, beating heart is, the message being that you can adapt to feel at home almost anywhere.
"Breathtaking."
I thought this was just an expression for describing natural beauty, but as a teen I actually felt the breath leave me when I suddenly saw a sweeping vista of mountains and forest while riding on a bus.
Children of Men (2006) comes to mind: a movie about a small group of people in a dying world who have the means to benefit humanity and provide hope for the future but can't agree on next steps. (The story is more nuanced, but these bits seem relevant to rationality.)
Maybe there's a way to hedge against P(doom) by investing in human prosperity and proliferation while discouraging large leaps in tech. Maybe your money should go towards encouraging or financing low-tech, high-fertility communities?
Part of what the question is asking is how we know (or decide?) the difference between a performative and a true existential threat.