Thanks for the reply; it was helpful. I've elaborated on my perspective below and pointed out some concrete disagreements about how labor automation would play out. I wonder whether you can identify the cruxes in my model of how the economy and automated labor interact.
I'd frame my perspective as: "We should not aim to put society in a position where >90% of humans need government welfare programs or charity to survive while vast numbers of automated agents perform the labor that humans currently depend on to survive." I don't believe we have the political wisdom or resilience to steer our world in this direction while preserving good outcomes for existing humans.
We live in something like a unique balance: through companies, the economy gives individuals the opportunity to sustain themselves and to specialize while contributing to a larger whole that typically produces goods and services benefiting other humans. If we create digital minds and robots to naively accelerate these emergent corporate entities' ability to generate profit, we lose an important ingredient in that balance: human bargaining power. Further, even if we had the ability to create and steer powerful digital minds (which is itself contentious), it doesn't seem obvious that labor automation is a framing that would lead to positive experiences for the humans or the minds.
> I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
I'm skeptical that economic abundance driven by automated agents will, by default, manifest as increased quality and quantity of goods and services enjoyed by humans, or that humans will retain the economic leverage needed to incentivize these human-specific goods.
> working human-specific service jobs where consumers intrinsically prefer hiring human labor
I expect the number of roles/tasks where consumers intrinsically prefer hiring humans to be a rounding error compared to the number of humans who depend on work to provide for themselves.
> My moral objection to "AI takeover", both now and back then, applies primarily to scenarios where AIs suddenly seize power through unlawful or violent means, against the wishes of human society. I have, and had, far fewer objections to scenarios where AIs gradually gain power by obtaining legal rights and engaging in voluntary trade and cooperation with humans.
What about a scenario where no laws are broken, but over the course of months to years large numbers of humans become unable to provide for themselves as a consequence of purely legal and non-violent actions by AIs? A toy example would be AIs purchasing agricultural land and repurposing it for other uses (you might consider this an indirect form of violence).
It's a bit of a leading question, but:
1. The way this is framed seems to place a profound reverence on laws and on 20th-21st-century economic behavior.
2. I'm struggling to picture how you envision the majority of humans continuing to provide for themselves economically in a world where we aren't on the critical path for cognitive labor. (Some kind of UBI? Do you believe the economy will always allow humans to participate and be compensated beyond their physical needs in some way?)
I sort of see your argument here, but similarly, just going on vibes, associating AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that doomers are always wrong doesn't feel countered by cherry-picking examples of smaller predicted harms, because (as illustrated in the comment) the body of doom predictions is much larger than the subset that contained nuggets of foresight.