Good to see your work on this! I'll avoid jumping in on weighing this relative to other problems as it's not the core of your post.
Rudolf and I are proponents of alignment to the user, which seems very similar to your second suggestion. Do you think there's a difference in the approach you outline vs the one we do? I'm considering doing a larger write-up on this approach, so your feedback would be helpful.
Following up with some resource curse literature that understands the problem as incentive misalignment:
On how state revenue sources shape institutional development and incentives, Karl (1997) writes,
"Thus the fate of oil-exporting countries must be understood in a context in which economies shape institutions and, in turn, are shaped by them. Specific modes of economic development, adapted in a concrete institutional setting, gradually transform political and social institutions in a manner that subsequently encourages or discourages productive outcomes. Because the causal arrow between economic development and institutional change constantly runs in both directions, the accumulated outcomes give form to divergent long-run national trajectories. Viewed in this vein, economic effects like the Dutch Disease become
We will soon live in the intelligence age. What you do with that information will determine your place in history.
The imminent arrival of AGI has pushed many to try to seize the levers of power as quickly as possible, leaping towards projects that, if successful, would comprehensively automate all work. There is a trillion-dollar arms race to see who can achieve such a capability first, with trillions more in gains to be won.
Yes, that means you’ll lose your...
That's a choice, though. AGI could, for example, look like a powerful actor in its own right, with its own completely nonhuman drives and priorities, and a total disinterest in being directed in the sort of way you'd normally associate with a "resource".
My claim is that the incentives AGI creates are quite similar to the resource curse, not that it would literally behave like a resource. But:
If by "intent alignment" you mean AGIs or ASIs taking orders from humans, and presumably specifically the humans who "own" them, or are in charge of the "powerful actors", or form some human social elite, then it seems as though your concerns very much argue that
Could you elaborate on your last paragraph? Presuming a state overrides its economic incentives (i.e., establishes a robust post-AGI welfare system), I'd like to hear how you think the selection pressures would take hold.
For what it's worth, I don't think "utopian communism" and/or a world without human agency are good outcomes. I concur with Rudolf entirely here -- those outcomes strip away agency, which has so far been a core part of the human experience. I want dynamism to exist, though I'm still working out if/how I think we could achieve that. I'll save that for a future post.
I appreciate this concern, but I disagree. An incognito Google search of "intelligence curse" didn't yield anything using this phrase on the front page except this LessWrong post. Adding quotes around it or searching for the full phrase ("the intelligence curse") showed this post as the first result.
A quick Twitter search (sorted by most recent) shows the phrase "the intelligence curse" appearing before this post:
- In 24 tweets in total
- The most recent on Dec 21, 2024
- Before that, in a tweet from August 30, 2023
- In 10 tweets since 2020
- All other mentions pre-2015
In short, I don't think this is a common phrase and expect that this would be the most understood usage.
I agree. To add an example: the US government's 2021 expanded child tax credit lifted 3.7 million children out of poverty, a near 50% reduction. Moreover, according to the NBER's initial assessment: "First, payments strongly reduced food insufficiency: the initial payments led to a 7.5 percentage point (25 percent) decline in food insufficiency among low-income households with children. Second, the effects on food insufficiency are concentrated among families with 2019 pre-tax incomes below $35,000".
Despite this, Congress failed to renew the program. Predictably, child poverty spiked the following year. I don't have an estimate for how many lives this cost, but it's greater than zero.
“Show me the incentive, and I’ll show you the outcome.” – Charlie Munger
Economists are used to modeling AI as an important tool, so they don't see how it could make people irrelevant. Past technological revolutions have expanded human potential. The agrarian revolution birthed civilizations; the industrial revolution let us scale them.
But AGI looks a lot more like coal or oil than the plow, steam engine, or computer. Like those resources:
It will require immensely wealthy actors to discover and harness.
Control will be concentrated in the hands of a few players, mainly the labs that produce it and the states where they reside.
The states and companies that earn rents mostly or entirely from it won’t