It seems to me that the greatest winners in real life are the people who spend their actual effort on getting as large a part of the pie as possible, while convincing everyone around them that the virtuous thing is growing the pie. Imagine someone like Sam Altman -- convince lots of smart people with technical skills to create an "open" AI to benefit all of humanity... then stab them in the back and make the AI company closed and profit-oriented.
In my experience, this seems to be the rule rather than the exception. This is how people get to the top. You need to talk a lot about growing the pie... if you can't inspire enough people to do it, the pie won't grow large enough. But while everyone around you is busy growing the pie, you set up the mechanism that will allow you to take it all.
Now, we have some mechanisms to prevent this kind of trap. There are free software licenses, which prevent the project leader from simply kicking out the developers after they have completed their work and capturing the long-term value. There are cooperatives, which prevent the boss from capturing the long-term value of the company and kicking out the early employees who burned out working for him. But of course, the people who plan to capture the value will try to discourage others from using these solutions. "Just trust me, bro."
I mean, I am totally in favor of cooperation, but optimism alone is not enough. Sometimes, if you spend 5 seconds -- not even minutes -- thinking about it, you can predict who will get the pie and how, because it is often trivial. It typically only requires them to say "I don't need you anymore" after the pie is ready.
Please don't be exploited by Sam Altman. Also, please don't be Sam Altman. You should not be Sam Altman.
Okay, I'll try.
But I am mostly thinking about what a powerful meta move it is to invent mechanisms that prevent pies from being confiscated. Something like the GNU GPL, which helped create the entire ecosystem of free software. I can easily imagine a parallel universe where this license does not exist, and people not only can't imagine it, but many of them signal cleverness by making economic arguments for why something like this is impossible in principle.
What other anti-pie-grabbing mechanisms exist in parallel universes but not in ours?
Recently some friends and I were comparing growing-the-pie interventions to interventions that increase our friends' share of the pie, and at first we mostly missed some general considerations against the latter type.
1. Decision-theoretic considerations
—Christiano
I think realizing that your behavior is correlated with that of aliens in other universes, in part via you being in their simulation, makes this consideration even stronger. Overall I don't know how strong it is but it might be very strong.
2. Pragmatic considerations
—Christiano
3. Worlds where many people tend to converge (upon reflection) are higher-stakes (under some views).
I care about the long-term future more in worlds where my moral convictions (upon reflection) are more real and convergent. In such worlds, many humans will converge with me upon reflection; the crucial things are averting AI takeover,[1] ensuring good reflection occurs, etc. rather than marginally increasing my faction's already-large share of the lightcone.
Inspired by MacAskill and Moorhouse.
4. Others' considered values matter directly (under some views).
—Christiano
5. You might be wrong.
It's optimistic to assume that you or your friends will use power perfectly in the future. You should probably think of empowering yourself or your friends as empowering your (altruistic) epistemic peers, who may continue to disagree with you on important stuff in the future, rather than as empowering the champions of truth and goodness.
Disclaimers
None of these points are novel. This post was inspired by MacAskill, which also makes most of these points non-originally.
Growing the pie doesn't just mean preventing AI takeover. For example, research on metaphilosophy, acausal considerations, decision theory, and axiology grows the pie, as do interventions to prevent human takeover, create/protect deliberative processes, promote good reflection, and solve coordination problems.
It may be correct to allocate some resources to claiming your share of the pie. You have a moral obligation not to be eaten[2] and you should probably at least do tit-for-tat/reciprocity, if relevant. I just think there are some subtle considerations against powerseeking.
Powerseeking for the MCUF[3] or something (rather than your personal preferences) dissolves #1, most of #4, some of #5, little of #2, and none of #3, I weakly think.
This post is part of my sequence inspired by my prioritization research and donation advising work.
Unless the paperclippers would also converge with me! But other humans and aliens seem more likely to converge than AIs that take over. That said, I think there are also some (perhaps weak) cooperation considerations with misaligned AIs; these entail upweighting e.g. ensuring good reflection and avoiding metastable vacuum decay relative to preventing AI takeover.
Or in this case an obligation not to be so edible that you incentivize people-eating.
"Multiverse-wide Compromise Utility Function." The acausal people use this term; unfortunately it hasn't been publicly introduced; see Nguyen and Aldred.