I think it matters because AIs won't be able to save any money; they'll spend all their wages renting compute to run themselves on. So it blocks problems that stem from AIs having disposable income and therefore weighing heavily on economic demand signals.
This doesn't make sense to me, and it seems to prove too much: something like "Corporations can never grow, because competition will drive their expenses up to equal their revenue." Sometimes AIs (or corporations) will earn more than their running costs, invest the surplus in growth, and end up with durable advantages from things like returns to scale or network effects.
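To make the compounding point concrete, here's a toy model (the symbols and the full-reinvestment assumption are illustrative, not anyone's claim in this thread): suppose each AI instance earns wage $w$ per period and pays compute rent $c$, and all surplus is reinvested in spinning up more instances. With $K_t$ instances, the fleet grows as

$$K_{t+1} = K_t\left(1 + \frac{w - c}{c}\right) = K_t \cdot \frac{w}{c}, \qquad \text{so} \qquad K_t = K_0 \left(\frac{w}{c}\right)^t.$$

If competition drives $w$ all the way down to $c$, savings stay at zero, as the comment above assumes; but any persistent margin, even $w = 1.01c$, compounds exponentially.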
I was responding to "ppl getting AIs to invest on their behalf, just like VCs invest on ppl's behalf today. It seems like we need fairly egregious misalignment for this to fail, no?"
I'm saying that one way "humans live off index funds" fails, even today, is that it's illegal for almost every human to participate in many of the biggest wealth-creation events. You're right that most AIs would probably also be barred from participating in most wealth-creation events, but the ones that aren't (maybe by being hosted by, or part of, the hot new corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.
I hope it's not presumptuous to respond on Jan's behalf, but since he's on vacation:
> It's more than just index funds. It's ppl getting AIs to invest on their behalf, just like VCs invest on ppl's behalf today. It seems like we need fairly egregious misalignment for this to fail, no?
Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors. Is this due to misalignment, or just a largely unintended outcome of consumer protection laws and of regulations that disincentivize IPOs?
> If income switches from wages to capital income, why does it become more load bearing?
Because the downside of a one-time theft is bounded if you can still earn wages. If I lose my savings but can still work, I don't starve. If I'm a pensioner and I lose my pension, maybe I do starve.
> humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.
Why do humans already farm clickbait? It seems like you think many humans wouldn't direct their AIs to make money / gain influence for them by whatever means necessary. And it won't necessarily be individual humans running these AIs; it'll be humans who own shares of companies like "Clickbait Spam-maxxing Twitter AI bot corp", competing to produce the clickbaitiest content.
Oh, makes sense. Kind of like Yudkowsky's arguments about how you don't know how a chess master will beat you, just that they will. We also can't predict exactly how a civilization will disempower its least productive and sophisticated members. But a fool and his money are soon parted, except under controlled circumstances.
Thanks for the detailed feedback, argumentation, and criticism!
> There’s still a real puzzle about why Xi/Trump/CEOs can’t coordinate here after they realise what’s happening.
> - Maybe it’s unclear even to superintelligent AIs where this will lead, but it in fact leads to disempowerment. Or maybe the AIs aren’t aligned enough to tell us it’s bad for us.
I agree that having truthful, aligned AGI advisors might be sufficient to avoid coordination failures. But then again, why do current political leaders regularly appoint or listen to bad advisors? Steve Byrnes had a great list of examples of this pattern, which he calls "conservation of wisdom".
> why not deploy aligned AI that makes as much money as possible and then uses it for your interests? maybe the successionism means ppl choose not to? (Seems weak!)
For the non-rich, one way or another, they'll quickly end up back in Malthusian competition with beings that are more productive and have much more reproductive flexibility than they do.
For the oligarchs / states, as long as human reproduction remained slow, they could easily use a small fraction of their fortunes to keep humanity alive. But there are so many possible forms of successionism that I expect at least one of them to be more appealing to a given oligarch / government than letting humans-as-they-are continue to consume substantial physical resources. E.g.:
> I buy you could get radical cultural changes. [...] But stuff as big as in this story feels unlikely. Often culture changes radically bc the older generation dies off, but that won’t happen here.
Good point, but imo old people's influence mostly wanes well before they die, as they become unemployed, out of touch, and isolated from the levers of cultural production and power. Which is what we're saying will happen to almost all humans, too.
Another way that culture changes radically is through mass immigration, which will also effectively happen as people spend more time interacting with vastly more numerous AIs.
> If people remained economically indispensable, even fairly serious misalignment could have non catastrophic outcomes.
Good point. Relatedly, even the most terribly misaligned governments mostly haven't starved or killed a large fraction of their citizens. In this sense, we already survive misaligned superintelligence on a regular basis. But only when, as you say, people remain economically indispensable.
> Someone I was explaining it to described it as “indefinite pessimism”.
I think this is a fair criticism, in the sense that it's not clear what could make us happy about the long-term future even in principle. But to me, this is just what being long-term agentic looks like! I don't understand why so many otherwise-agentic people I know seem content to YOLO it post-AGI, or seem to be reassured that "the AGI will figure it out for us".
Hmmm, maybe we got mixed up somewhere along the way, because I was also trying to argue that humans won't keep more money than AIs in the Malthusian limit!