My understanding goes along similar lines, so I'm not highly doubtful. If anything, I've had the idea that the risk of developmental disorders and miscarriage, difficulties in getting pregnant, and some pregnancy-related issues might begin rising substantially much sooner than in one's 30s.
To me it seems that the overwhelming majority of children conceived even after 35 are healthy and fine. That is, >99% are unaffected by autism and >98% by chromosomal disorders. The risk of miscarriage is relevant. All this considered, I believe this evidence means people should probably not be too worried about whether they are already too old to have kids.
Whether or not having kids earlier might still be better, once you account for the costs to one's career or business, etc., is another discussion, particularly when thinking of large numbers of people. However, AFAIK a lot of people already want to conceive while they are young, and I'm not sure whether people considering trying for kids can be significantly swayed one way or another by this evidence alone.
(comment edited: missed the link at first sight)
Thanks for the post. A layperson here, little to no technical knowledge, no high-g-mathematical-knowitall-superpowers. I highly appreciate this forum and the abilities of the people writing here. Differences in opinion are likely due to me misunderstanding something.
As for examples or thought experiments on specific mechanisms behind humanity losing a war against an AI, or against several AIs cooperating, I often find them too specific or unnecessarily complicated. I understand the aim is simply to show that a vast number of possible, and likely easy, ways exist to wipe out humanity (or to otherwise make sure humanity won't resist), but I'd still like to see more of the claimed simple, boring, mundane ways this could happen than this post includes. Such as:
Another example, including killer robots:
I think one successful example of pointing to AI risk without writing fiction was Eliezer musing on the possibility that AI systems might, through some process of self-improvement, end up behaving in unexpected ways such that they remain able to communicate with one another but become unable to communicate with humanity.
My point is that providing detailed examples of AIs exterminating humanity via nanobots, viruses, highly advanced psychological warfare, et cetera might further alienate those who do not already believe AIs could be able or willing to do so. I think pointing to the general vulnerabilities of global human techno-industrial societies would suffice.
Let me emphasize that I don't think the examples provided in the post are necessarily unlikely to happen, or that what I've outlined above should somehow be more likely. I do think that global production as it exists today seems quite vulnerable to even relatively slight perturbations (such as a coronavirus pandemic or some wars being fought), and that simply nudging these vulnerabilities might suffice to quickly end any threat humanity could pose to an AI's goals. Such a nudge might also be possible, and even increasingly likely, given wide AI implementation, even without an agent-like Singleton.
A relative pro of focusing on such risks is the view that humanity does not need a godlike singleton to be existentially, catastrophically f-d, and that even relatively capable AGI systems severely risk putting an end to civilization, without anything going foom. Such events might be even more likely than nanobots and paperclips, so to say. Consistently emphasizing these aspects might convince more people to be wary of unrestricted AI development and implementation.
Edit: It's possibly relevant that I lean toward Paul's views re: slow vs. fast takeoff, insofar as I find a slow takeoff likely to happen before a fast one.