David Duvenaud

My website is https://www.cs.toronto.edu/~duvenaud/

Comments

Thoughts on Gradual Disempowerment
David Duvenaud18d10

Hmmm, maybe we got mixed up somewhere along the way, because I was also trying to argue that humans won't keep more money than AI in the Malthusian limit!

Thoughts on Gradual Disempowerment
David Duvenaud22d10

> I think it matters bc AIs won't be able to save any money. They'll spend all their wages renting compute to run themselves on. So it blocks problems that stem from AI having more disposal income and therefore weighing heavily on economic demand signals.

This doesn't make sense to me, and sounds like it proves too much - something like "Corporations can never grow, because they'll spend all their revenue on expenses, which will equal revenue due to competition".  Sometimes AIs (or corporations) will earn more than their running costs, invest the surplus in growth, and end up with durable advantages due to things such as returns to scale or network effects.
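(A minimal toy sketch of that compounding dynamic, with made-up numbers purely for illustration: an agent whose costs always equal its revenue accumulates nothing, while one that keeps and reinvests even a small surplus pulls ahead.)

```python
# Toy model (illustrative only): two agents earn revenue each period.
# Agent A's running costs always equal its revenue, so it never accumulates anything.
# Agent B keeps a small surplus and reinvests it, which raises its future revenue
# (a crude stand-in for returns to scale / network effects).

def simulate(periods=20, revenue=100.0, cost_ratio=0.95, reinvest_return=0.5):
    wealth_a = 0.0                 # agent A: revenue == costs every period
    wealth_b, revenue_b = 0.0, revenue
    for _ in range(periods):
        surplus = revenue_b * (1 - cost_ratio)   # agent B's per-period margin
        wealth_b += surplus                      # retained earnings
        revenue_b += surplus * reinvest_return   # reinvestment grows future revenue
    return wealth_a, wealth_b

print(simulate())  # roughly (0.0, 128): a 5% margin compounds into a durable gap
```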

Thoughts on Gradual Disempowerment
David Duvenaud22d10

I was responding to "ppl getting AIs to invest on their behalf, just like VCs invest on ppl's behalf today. It seems like we need fairly egregious misalignment for this to fail, no?"

I'm saying that one way that "humans live off index funds" fails, even today, is that it's illegal for almost every human to participate in many of the biggest wealth creation events.  You're right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do participate (maybe by being hosted by, or part of, the hot new corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.

Thoughts on Gradual Disempowerment
David Duvenaud1mo40

I hope it's not presumptuous to respond on Jan's behalf, but since he's on vacation:

> It's more than just index funds. It's ppl getting AIs to invest on their behalf, just like VCs invest on ppl's behalf today. It seems like we need fairly egregious misalignment for this to fail, no?

Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors.  Is this due to misalignment, or just a mostly unintended outcome of consumer protection laws and regulation disincentivizing IPOs?

> If income switches from wages to capital income, why does it become more load bearing?

Because the downside of a one-time theft is bounded if you can still make wages.  If I lose my savings but can still work, I don't starve.  If I'm a pensioner and I lose my pension, maybe I do starve.

> humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.

Why do humans already farm clickbait?  It seems like you think many humans wouldn't direct their AIs to make them money / influence by whatever means necessary.  And it won't necessarily be individual humans running these AIs, it'll be humans who own shares of companies such as "Clickbait Spam-maxxing Twitter AI bot corp", competing to produce the clickbaitiest content.

Thoughts on Gradual Disempowerment
David Duvenaud1mo21

Oh, makes sense.  Kind of like Yudkowsky's arguments about how you don't know how a chess master will beat you, just that they will.  We also can't predict exactly how a civilization will disempower its least productive and sophisticated members.  But a fool and his money are soon parted, except under controlled circumstances.

Thoughts on Gradual Disempowerment
David Duvenaud1mo30

Thanks for the detailed feedback, argumentation, and criticism!

Thoughts on Gradual Disempowerment
David Duvenaud1mo20

> There’s still a real puzzle about why Xi/Trump/CEOs can’t coordinate here after they realise what’s happening.
>
>   • Maybe it’s unclear even to superintelligent AIs where this will lead, but it in fact leads to disempowerment. Or maybe the AIs aren’t aligned enough to tell us it’s bad for us.

I agree that having truthful, aligned AGI advisors might be sufficient to avoid coordination failures.  But then again, why do current political leaders regularly appoint or listen to bad advisors?  Steve Byrnes had a great list of examples of this pattern, which he calls "conservation of wisdom".


Thoughts on Gradual Disempowerment
David Duvenaud1mo20

> why not deploy aligned AI that makes as much money as possible and then uses it for your interests?  maybe the successionism means ppl choose not to? (Seems weak!)


For the non-rich: one way or another, they'll quickly end up back in Malthusian competition with beings that are more productive and have much more reproductive flexibility than they do.

For the oligarchs / states: as long as human reproduction remains slow, they could easily use a small amount of their fortunes to keep humanity alive.  But there are so many possible forms of successionism that I expect at least one of them to be more appealing to a given oligarch / government than letting humans-as-they-are continue to consume substantial physical resources.  E.g.:

  1. Allow total reproductive freedom, which ends up Goodharting whatever UBI / welfare system is in existence with "spam humans", e.g. just-viable frozen embryos with uploaded / AI brains legally attached.
  2. Some sort of "greatest hits of humanity" sim that replays the human qualia involved in their greatest achievements, best days, etc.  Or support some new race of AGIs that are fine-tuned to simulate the very best of humanity (according to the state).
  3. Force everyone to upload to save money, and also to police / abolish extreme suffering.  Then selection effects turn the remaining humans into full-time activists / investors / whatever the government or oligarchs choose to reward.  (This also might be what a good end looks like, if done well enough.)
Thoughts on Gradual Disempowerment
David Duvenaud1mo10

> I buy you could get radical cultural changes. [...] But stuff as big as in this story feels unlikely. Often culture changes radically bc the older generation dies off, but that won’t happen here.


Good point, but imo old people's influence mostly wanes well before they die, as they become unemployed, out-of-touch, and isolated from the levers of cultural production and power.  Which is what we're saying will happen to almost all humans, too.

Another way that culture changes radically is through mass immigration, which will also happen as people spend more time interacting with effectively more-numerous AIs.

Thoughts on Gradual Disempowerment
David Duvenaud1mo10


> If people remained economically indispensable, even fairly serious misalignment could have non catastrophic outcomes.

Good point.  Relatedly, even the most terribly misaligned governments mostly haven't starved or killed a large fraction of their citizens.  In this sense, we already survive misaligned superintelligence on a regular basis.  But only when, as you say, people remain economically indispensable.

> Someone I was explaining it to described it as “indefinite pessimism”.

I think this is a fair criticism, in the sense that it's not clear what could make us happy about the long-term future even in principle.  But to me, this is just what being long-term agentic looks like!  I don't understand why so many otherwise-agentic people I know seem content to YOLO it post-AGI, or seem to be reassured that "the AGI will figure it out for us".

Posts

- Summary of our Workshop on Post-AGI Outcomes (96 karma, 18d, 3 comments)
- Upcoming workshop on Post-AGI Civilizational Equilibria (25 karma, 3mo, 0 comments)
- Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (Ω, 167 karma, 8mo, 65 comments)
- Sabotage Evaluations for Frontier Models (Ω, 95 karma, 11mo, 56 comments)
- Simple probes can catch sleeper agents (Ω, 133 karma, 1y, 21 comments)
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training (Ω, 305 karma, 2y, 95 comments)