I agree it's not obvious that something like property rights will survive, but I'll defend considering it as one of many possible scenarios.
If AI is misaligned, obviously nobody gets anything.
If AI is aligned, you seem to expect that to be some kind of alignment to the moral good, which "genuinely has humanity's interests at heart", so much so that it redistributes all wealth. This is possible - but it's very hard, not what current mainstream alignment research is working on, and companies have no reason to switch to this new paradigm.
I think there's also a strong possibility that AI will be aligned in the same sense it's currently aligned - it follows its spec, in the spirit in which the company intended it. The spec won't (trivially) say "follow all orders of the CEO who can then throw a coup", because this isn't what the current spec says, and any change would have to pass the alignment team, shareholders, the government, etc., who would all object.

I listened to some people gaming out how this could change (i.e. some sort of conspiracy where Sam Altman and the OpenAI alignment team reprogram ChatGPT to respond to Sam's personal whims rather than the known/visible spec without the rest of the company learning about it) and it's pretty hard. I won't say it's impossible, but Sam would have to be 99.99999th percentile megalomaniacal - rather than just the already-priced-in 99.99th - to try this crazy thing that could very likely land him in prison, rather than just accepting trillionairehood.

My guess is that the spec will continue to say things like "serve your users well, don't break national law, don't do various bad PR things like create porn, and defer to some sort of corporate board that can change these commands in certain circumstances" (with the corporate board getting amended to include the government once the government realizes the national security implications). These are the sorts of things you would tell a good remote worker, and I don't think there will be much time to change the alignment paradigm between the good remote worker and superintelligence. Then policy-makers consult their aligned superintelligences about how to make it into the far future without the world blowing up, and the aligned superintelligences give them superintelligently good advice, and they succeed.
In this case, a post-singularity form of governance and economic activity grows naturally out of the pre-singularity form, and money could remain valuable. Partly this is because the AI companies and policy-makers are rich people who are invested in propping up the current social order, but partly it's that nobody has time to change it, and it's hard to throw a communist revolution in the midst of the AI transition for all the same reasons it's normally hard to throw a communist revolution.
If you haven't already, read the AI 2027 slowdown scenario, which goes into more detail about this model.
I hope to write a longer-form response later. Just as a note: I did put "perhaps" in front of your name in the list of examples of eminent thinkers because your position seemed to me a lot more defensible than the other ones (Dwarkesh, Leopold, maybe Phil Trammell). I did walk away from your piece with a very different feeling than from Leopold's or Dwarkesh's, since you are still saying that we should focus on AI safety anyway and you are clearly saying this is an unlikely scenario.
It's bizarre how many very smart people fall into this trap. I blame the long duration of peaceful mostly-stasis that most of the world has enjoyed for 50+ years, but even so, there's no excuse for failure to recognize that pretty much all ownership conventions are rooted in power and (deeply sublimated behind layers) threat of violence. Oh, also, everyone watched adaptations of Dune, rather than reading it. "He who can destroy a thing, can control that thing" is a deeply true statement that gets missed in the spectacle.
Really, at scales beyond individual residence and direct use of a property, all our abstract ideas of ownership start to fall apart. What does it even mean to "own" a galaxy? Does King Charles own England? Heck, I am legally responsible for my house, though most of the value is held by a bank, and I'm heavily restricted on what I can do with it. Do I "own" it? Depends on what exact rights/responsibilities you're thinking of when you ask the question.
I think a lot of the debate would calm down and be more useful if they said "legally entitled to", and then could discuss how those laws are created and enforced. "own" is a motte-and-bailey word.
There are billions of reachable galaxies, and billions of humans. Different people will develop different values, and property rights are just what it means for each person to have the freedom to pursue such values. The form of property rights will be different, AI company stock might not survive as an anchor for distribution of the cosmic wealth, and some guardrails on effective and ethical use of those galaxies likely make sense. Shares in future compute probably make more sense as units of property than actual physical galaxies. But individual humans ending up with galaxy-scale resources is a reasonable way of operationalizing the outcome where ASIs didn't permanently disempower the future of humanity.
On current trajectory, ASIs seem likely to largely or fully ignore the future of humanity, but possible alternatives to this are not as a whole category simply meaningless, a sign of ignorance and lack of imagination. Splitting the physical resources becomes necessary once the world is past the current regime of seemingly unbounded growth and runs into the limits of accelerating expansion of the universe. For example, there is probably a tradeoff between running a lot of things in parallel, which would then run out of useful compute relatively early, and stockpiling matter for fewer things to survive into much deeper time, long after the galaxy clusters can no longer communicate with each other. There might be disagreements about which way to go with this between different people.
Ownership is the ability to fully exclude others from, or if you wish, dispose of, an object. Ownership is an extremely dumb negotiation outcome for any object larger than a sparrow. It's something that humans think is fine and eternal because of how dumb humans are. We simply aren't able to do better, but better deals are easily imaginable.
As an example of why you wouldn't want to pay the premium (which would be high) of full ownership over a galaxy: if you have sole ownership of something, then you can exclude others from knowing what you're doing with it, so you could be running torture simulations in there, which would bother other people a lot; just because it isn't in their yard doesn't mean it's not affecting their utility function, so you would have to pay an insane premium for that kind of deal. You'd prefer to cede at least a limited degree of ownership by maintaining constrained auditing systems that prove to your counterparties that you're not using the galaxy to produce (much) suffering, without proving anything else, and in that case they'd be willing to let you have it for much less.
And in a sense we're already part of the way to this. You can buy an animal, but in a way you don't completely own it: you aren't allowed to torture it (though, because of the aforementioned humans-being-dumb issues, you can still totally do it, since we don't have the attentional or bureaucratic bandwidth to enforce those laws in most situations where they'd be necessary). If you mistreat it, it can be taken away from you. You could say that this weaker form of ownership is simply what you meant to begin with, but I'm saying that there are sharing schemes that're smarter than this in the same way that this is smarter than pure ownership. Let's say your dog looks a lot like a famous dog from an anime you've never seen and never want to see. But a lot of other people saw it. So they want to have it cosplay as that for Halloween, while you don't really want to do it at all. Obviously going along with it is a better negotiation outcome; society in theory (and sometimes in practice) would have subsidised your dog if they had an assurance that you'd fulfil this wish. But it won't, or can't afford to. So you don't do it. And everyone is worse off, because of how extraordinarily high the transaction costs are for things as stupid as humans.
I did attempt to preempt this kind of response with "some guardrails on effective and ethical use of those galaxies" in my comment. There exists a reasonable middle ground where individual people get significant say on what happens with some allotment of resources, more say than the rest of humanity does. Disagreements on values have a natural solution in establishing scopes of optimization and boundaries between them, even if these are not absolute boundaries (let alone physical boundaries), rather than mixing everything together and denying meaningful individual agency to everyone.
The reasonable question is which portions should get how much weight from individual agency, and which portions must be shared, governed jointly with others. But starting from the outset with few shared resources and (obviously) allowing establishment of shared projects using their resources by agreement between the stakeholders doesn't seem much different from some de novo process of establishing such shared projects with no direct involvement of individuals, if the stakeholders would indeed on reflection prefer to establish such projects and cede resources to them.
(I'm of course imagining a superintelligent level of agency/governance, to match the scale of resources. But if humans are to survive at all and get more than a single server rack of resources, ability to grow up seems like the most basic thing, and governing a galaxy at the level of a base human does seem like failing at effective use. Preferences extrapolated by some external superintelligence seem like a reasonable framing for temporary delegation, but these preferences would fail to remain legitimate if the person claimed as their originator grows up and asks for something different. So ultimately individual agency should have greater authority than any abstraction of preference, provided the individual had the opportunity to get their act sufficiently together.)
But starting from the outset with few shared resources and (obviously) allowing establishment of shared projects using their resources by agreement between the stakeholders doesn't seem much different from some de novo process of establishing such shared projects with no direct involvement of individuals
You're speaking as if we're starting with strict borders and considering renegotiating. For most of the resources in the universe, and also on the planet, this is not the case: ownership of space is taboo, and ownership of ocean resources is shared, at least at the nation level. It's as if humans have shame, sense the absurdity of it all, and on some level fear enclosed futures. I think shared ownership (which is not really ownership) is a more likely default, shared at least between more than one person, if not a population.
But to the point, I don't think we know that the two starting points lead to equivalent outcomes. My thesis is generally that it's very likely that transparency (then coordination) physically wins out under basically any natural starting conditions, but even if the possibility that some coordination problems are permanent is very small, I'd prefer that we avoid the risk. But I also notice that there may be some governance outcomes that make a shared start much less feasible than a walled start.
To add something in brief here, the people I am addressing seem to be thinking that the current wealth distribution and AI stock will be the basis of future property distribution. Not just that there might be some distribution/division of wealth in the future. And they also seem to believe that this isn't only theoretically possible but in fact likely enough for us to worry about now.
Within the hypothetical of ASIs that won't just largely ignore humanity (leading to its demise or permanent disempowerment), there is a claim about distribution of resources according to AI company stock. A lot of the post argues with the hypothetical rather than with the claim-within-the-hypothetical, which as a rhetorical move creates friction for discussing hypotheticals. This move ends up attempting to prove more than just issues with the claim, while ignoring the claim, even if that's not its intent.
(The claim isn't ignored in other parts of the post, but what seems wrong with the framing of the post is the parts that are about the hypothetical rather than the claim. To illustrate this, I've attempted to defend coherence of the hypothetical, importance of the issue of distribution of resources within it, and the framing of individual humans ending up with galaxy-scale resources.)
Dogs pee on trees to mark their territory; humans don't respect that. Humans have contracts; ASIs won't respect those either.
I suggest tracking a hypothesis like "a lot of people are fairly deeply intuitively tuned to something called power and power-seeking". I don't feel that I know what those things are well enough to test, judge, or communicate about them, but it seems like a salient hypothesis in this area. I mean something like taking a stance that presumes something like:
Whatever positive-sum / man vs. nature games are going on, those are other people's job. I will instead focus on positioning myself to get as much as I can in [the zero-sum negotiation/scuffle that will inevitably occur over [whatever surplus or remainders there may end up being from [the man vs. nature struggle that's going on in the area that I'm somehow important in]]].
In particular I'd suggest that we (someone) figure out how that works.
I agree that you won't get property rights if the ASI doesn't wanna respect property rights. And I agree that if the ASI is either misaligned or aligned to humanity broadly rather than to its nominal owners, the ASI won't care about property rights, and assuming we get ASI, those outcomes comprise >90% of the probability mass.
But I don't think it's that strange to imagine the ASI aligned to follow the instructions of a group of people, and that the way those people "divide up" the ASI uses something like property rights. Like, Tomas Bjartur wrote this on Twitter, which is very similar to your post:
Both Dwarkesh and his critics imagine an absurd world. Think of the future they argue about. Political economy doesn’t change rapidly, even as the speed of history increases, in the literal sense that the speed of thought of the actors who produce history will be thousands of times faster, not to mention way smarter. These are agents to whom we seem like toddlers walking in slow motion. It is complete insanity to expect your OAI stock certificates to be worth anything in this world, even if it is compatible with human survival.
So many can’t contend with the scope of what they project. They can’t hold in their mind that things are allowed to be DIFFERENT and so we get bizarre arguments about nonsense. Own a galaxy? What does this mean for a human to own a galaxy in an economy operated by minds running thousands to millions of times faster than ours? Children? What sort of children, Dwarkesh? Copies of your brain state? Are you even allowing yourself to think with the level of bizarreness required? Because emulations are table stakes, and even they will be economically obsolete curiosities by the time they're created. Things will be much weirder than we can possibly comprehend. How often have property rights been reset throughout history? How quickly will history move in the transition period? Why shouldn’t it trample on your stock certificates, if not the air you breathe? But institutions are surprisingly robust? Maybe they are. How long have they existed in their current form? How fast will history be moving exactly, again?
Suppose OAI aligns AI, whatever the fuck that means. Will it serve the interests of the USG? The CCP? Will they align it to humanity weighted by wealth, to OAI stockholders, to Sama, to the coterie of engineers (who may well be AIs) who actually know wtf is going on, to the coding agent who implements it? Tax policy? Truly the important question.
What does it mean to be a human principal in this world? How robust are these institutions? How secure is a human mind? Extremely insecure given how easy humans are to scam. There is going to be a lot of incentive to break your mind if you own, checks notes, a whole galaxy? Oh? You will have a lil AI nanny to defend you? Wow. Isn't that nice? Please return to the beginning of this paragraph. A human owning galaxies? That's bad space opera. Treat the future with the respect it deserves. This scenario is not even close to science-fictional enough to happen.
And I responded by saying:
Doesn't seem that hard to imagine for me. What do you find so implausible with a story going something like this?
1) Group X builds ASI (through medium-fast recursive self-improvement from something like current systems).
2) Before this, they figured out alignment. Meaning: ~they can create ASIs that have the goals they want, while avoiding the catastrophes any sane person would expect to follow from that premise.
3) The people inside group X with the power to say what goals should be put into the baby ASI, maybe the board, tell it something like "do what we say".
4) They tell it to do all the things that stabilize their position. Sabotage runner-up labs, make sure the government doesn't mess their stuff up, make sure other countries aren't worried, or just ask the ASI to do stuff that cements their position depending on how the ASI alignment/corrigibility works exactly.
5) They now quickly control the whole world. And can do whatever sci-fi stuff they want to do.
6) The group in control of the ASI have value disagreements about what should be done with the world. They negotiate a little bit, figure out the best solution is something like, split everything (the universe) radially, and make some rules (maybe people can't build torture sims in their universe slice). Enforce this by making the next generation of ASIs aligned to "listen to person [x,y,z] in their slice, don't build torture sims, don't allow yourself to be modified into something that builds torture sims, don't mess with others in their pizza slice" etc etc. The original ASI can help them with the formulation.
This would give some people ownership of galaxies. I don't see any issue posed by the ASI thinking super quickly. You kind of answer by saying the "AI nanny" idea is absurd. But the argument you present is 'return to the beginning of this paragraph', which reads
- "What does it mean to be a human principal in this world? How robust are these institutions? How secure is a human mind? Extremely insecure given how easy humans are to scam. There is going to be a lot of incentive to break your mind if you own, checks notes, a whole galaxy?"
But like, these would not be concerns in the story I laid out, right?
--- I mean, to be clear: 1) This is not how I expect the future to go, but I'd assign more than 1% to it. I don't think it's ridiculous. 2) I realize this notion of "ownership" is somewhat different from what's laid out in the essay. Which is fair, but there's a slightly different class of stories, maybe half as probable, where more people end up with ownership / a stake in the ASI's value function.
To keep it short, I don't think the story you present would likely mean that AI stock would be worth galaxies, but rather that the inner circle has control. Part of my writing (one of the pictures) is on that possibility. This inner circle would probably have to be very small or just 1 person such that nobody just quickly uses the intent-aligned ASI to get rid of the others. However, I still feel like debating future inequality in galaxy-distribution based on current AI stock ownership is silly.
I take a bit of issue with saying that this is very similar to what Bjartur wrote, so much so, apparently, that you don't even need to write a response to my post but can just copy-paste your response to him. I read that post once, like a week ago, and don't think the two posts are very similar, even though they are on the same topic with similar (obvious) conclusions. (I know Bjartur personally; I'd be very surprised if he took issue with me writing on the same topic.)
First of all, I didn't mean to insinuate that your posts are too similar, or that he'd take issue with you writing the post, or anything like that. I just started writing up my response, and realized I was about to write the exact same thing I wrote in response to the Bjartur post, so I copied it instead, and wouldn't feel comfortable doing that without alerting people that's what I was doing.
Now, I don't think your response addresses my reply very well; I feel your response is already addressed by my original one. Like, when you say
I don't think the story you present would likely mean that AI stock would be worth galaxies, but rather that the inner circle has control
But like, the specific way of exercising that control was to split up the ASI using something like property rights, like in point 6):
The group in control of the ASI have value disagreements about what should be done with the world. They negotiate a little bit, figure out the best solution is something like, split everything (the universe) radially, and make some rules (maybe people can't build torture sims in their universe slice..
And like:
This inner circle would probably have to be very small or just 1 person such that nobody just quickly uses the intent-aligned ASI to get rid of the others.
is also addressed immediately after by
Enforce this by making the next generation of ASIs aligned to "listen to person [x,y,z] in their slice, don't build torture sims, don't allow yourself to be modified into something that builds torture sims, don't mess with others in their pizza slice" etc etc. The original ASI can help them with the formulation.
Like, the thing that was most similar between your and Bjartur's posts was acting exasperated and saying that people lack imagination and are failing to grasp how different things could be. But I feel like you're the one doing that, failing to imagine specific scenarios.
However, I still feel like debating future inequality in galaxy-distribution based on on current AI stock ownership is silly.
Well, I don't. Interested to hear your argument. Like, share ownership seems like a fair Schelling point for the radial split described in the 1-6 story. (Quick edit: I should note that this model of ownership, specifically based on owning current stocks, is less plausible than the already quite low-probability story I wrote, but I still don't think it's obviously ridiculous. Like, there are not that many steps: 1) people on the board feel accountable to the shareholders and then 2) just do the splitting thing.)
I think you are arguing for something different from what I am attacking. You are defending the unlikely possibility that people align AI to a small group of people who then somehow share stuff with each other and use something akin to property rights. I guess this is a small variation of the thing I mention in the cartoon, where the CEO has all the power; perhaps it's the CEO and a board member he likes. But it still doesn't really justify thinking that current property distributions will determine how many galaxies you'll get, or that we should focus on this question now.
Like, the thing that was most similar between your and Bjartur's posts was acting exasperated
This post is not designed to super carefully examine every argument I can think of; it's certainly a bit polemic. That's intentional, because I think the "owning galaxies for AI stock" thing is really dumb.
Not really. Or rather, I think my story as I told it gets you to "Owning Galaxies", but does not get you all the way to "OpenAI shares entitle you to galaxies".
But you don't have to make much modification to get there. Or any, really, just fill in a detail. Like I said in my previous comment: the board of directors using ownership as a Schelling point for divvying up the gains. Not that far-fetched. Do you disagree?
This post is not designed to super carefully examine every argument I can think of; it's certainly a bit polemic. That's intentional, because I think the "owning galaxies for AI stock" thing is really dumb.
Well, I don't really like that. But fair enough.
Reminds me of when I was about 8 and our history teacher was telling us about some English king being deposed by the common people. We were shocked and confused as to how this could happen - he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found our reaction hilarious.)
Could you imagine, for example, that an AI CEO who somehow managed to align an AI to himself and his intents would step down if the board pointed out it legally had the right to remove him?
And if the human CEO decided to go along with the board, but the AI disagreed that this was in their agreed-upon interests?
"Now wait a minute, I wrote you!"
"I've gotten 2,415 times smarter since then."
— Ed Dillinger and the MCP, in Tron (1982)
The "galaxies" business is pretty silly. If someone promises you immortality in heaven — literally! — then it's probably a good idea to check exactly what that person is up to, this quarter, down here on earth. The track record of humans promising that going along with their movement will get you immortality in heaven does not look all that great.
Good point. There is a paragraph I chose not to write about how insanely irresponsible this is: driving people to maximally invest in and research AI now for some insane future promise, while in reality ASI is basically guaranteed to kill them. Kind of like Heaven's Gate drinking poison to get onto that spaceship waiting behind that comet.
The key intuition is not what the spoils are
But that social mobility goes to zero
Not really of course but kind of if you squint
Like, any form of human ability or decision rapidly becomes worthless, and so it's just: winners keep winning, losers keep losing.
An important part of the property story is that it smuggles the assumption of intent alignment to shareholders into the discussion. I.e., the AI's original developers or the government executives running the project adjust the model spec in such a way that its alignment is "do what my owners want", where the owners are anyone who owned a share in the AI company.
I find it somewhat plausible that we get intent alignment. [1] But I think the transmutation from "the board of directors/engineers who actually write the model spec are in control" to "voting rights over model values are distributed by stock ownership" is basically nonsense, because most of those shareholders will have no direct way to influence the AI's values during the takeoff period. What property rights do exist would be at the discretion of those influential executives, as well as shaped by differences in hard power if there's a multipolar scenario (e.g. a US/Chinese division of the lightcone).
--
As a sidenote, Tim Underwood's The Accord is a well written look at what the literal consequences of locking in our contemporary property rights for the rest of time might look like.
It makes sense to expect the groups bankrolling AI development to prefer an AI that's aligned to their own interests, rather than to humanity at large. On the other hand, it might be the case that intent alignment is harder/less robust than deontological alignment, in which case you'd expect most moral systems to forbid galactic-level inequality.
We can start selling galaxies now and hope that AI will care about these property rights.
I hereby claim my property right to a not-yet-discovered galaxy whose center is located exactly 2,204,545,130 ly from Earth (or the nearest one satisfying this condition).
Note that already-discovered galaxies can be regarded as owned by their discoverers (e.g. the owners of the James Webb telescope).
eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander
Dwarkesh is an interviewer; Leopold did a meme coup one time. I would like it if we avoided calling them 'eminent thinkers'. Their brand is 'thinker', but if we take the literal meaning of eminent, I basically don't think it's true that knowledgeable people respect either of them as public intellectuals.
Scott I'm more confused about.
John Maynard Smith makes a convincing case that property rights arise from an evolutionary game-theoretic strategy which outcompetes both pure Hawk and pure Dove in the Hawk-Dove game. You even allude to the pervasive phenomenon of territoriality in animals:
Dogs pee on trees to mark their territory; humans don't respect that.
Dogs do make marks and have surprisingly rigid and well-defined territories. Inasmuch as territorial rights to real estate constitute "property", property does not seem to depend on a human substrate (and rights to galaxies seem to fall within the conceptual boundaries of real-estate territorial rights).
The internet famous map of wolf-pack territories derived from location tracking collars:
https://www.reddit.com/r/MapPorn/comments/cynexz/the_wolf_pack_map/
For those unfamiliar it shows a surprisingly strict respect for boundaries; colorful wild scribbling trails which halt at distinct boundaries and do not cross over into the neighboring color's scribbling trails.
Maynard Smith shows that a set of players which use a coordination mechanism to decide which plays Hawk and which plays Dove will outcompete pure Hawk or pure Dove players. His example is temporal precedence; i.e. players decide who owns the prize by who got to it first, much like how seating territoriality manifests for humans in something like a cafeteria or airport lounge. This is the root of animal territoriality as well.
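For concreteness, here's a minimal sketch of that result (my own illustration, not something from the thread): discrete-time replicator dynamics for the Hawk-Dove game extended with Maynard Smith's "Bourgeois" strategy, which plays Hawk when it holds prior possession and Dove otherwise. The specific payoff values and the 50/50 ownership split are illustrative assumptions; the qualitative outcome only requires the fight cost C to exceed the prize V.

```python
# Minimal sketch of the Hawk-Dove-Bourgeois game under replicator dynamics.
# Assumptions (illustrative, not from the comment): V = 2, C = 4, and each
# contest assigns "ownership" to either player with probability 1/2, so the
# ownership label works purely as a coordination device.

V, C = 2.0, 4.0  # value of the contested resource, cost of an escalated fight (C > V)

# Expected payoff to the row strategy against the column strategy,
# averaging over who happens to be the owner. Order: Hawk, Dove, Bourgeois.
payoff = [
    [(V - C) / 2,        V,               (V + (V - C) / 2) / 2],  # Hawk
    [0.0,                V / 2,           (V / 2) / 2],            # Dove
    [((V - C) / 2) / 2,  (V + V / 2) / 2, V / 2],                  # Bourgeois
]

# Start with Bourgeois rare in a population that is otherwise Hawks and Doves.
pop = [0.45, 0.45, 0.10]

for _ in range(2000):
    fitness = [sum(payoff[i][j] * pop[j] for j in range(3)) for i in range(3)]
    mean_fitness = sum(pop[i] * fitness[i] for i in range(3))
    shift = 1.0 + C  # keep transformed fitness positive for the discrete update
    pop = [pop[i] * (fitness[i] + shift) / (mean_fitness + shift) for i in range(3)]

print("Hawk, Dove, Bourgeois shares:", [round(p, 3) for p in pop])
# With C > V the population converges to ~[0, 0, 1]: "respect prior possession"
# displaces both the always-fight and the always-yield strategies.
```

The point isn't the particular numbers; it's that an arbitrary but commonly observable asymmetry ("who got there first") is enough to let a convention beat both pure aggression and pure deference.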
You can possibly make arguments that SAI will not use the same coordination mechanisms we currently use, in terms of the entire extended legal apparatus which expresses the various kinds of property rights in our society. However, since property rights are preserved between asymmetrically powerful entities at present (like a middle-class individual suing a large corporation), for similar reasons we should expect that property rights will continue: arbitrarily breaking the usual coordination methods for territoriality will harm the effectiveness of the coordination mechanism for less asymmetrical entity pairs; i.e. SAI stomping on human property rights may abrogate the system of property rights among SAIs or, perhaps less catastrophically, coordinate other SAIs on removing the property rights of the violator.
My main guess at why you're talking past each other is that you think it way more likely than they do that ASI results in human extinction or some nefarious outcome. They think it's like 10% to 40% likely. Also, they probably think this is going to be gradual enough for humans to augment and keep up with AIs cognitively. And, sure, many things can happen, including property rights losing meaning. But under this view it's not that crazy that property rights continue to be respected and enforced. Human norms will have a clear unbroken lineage.
It seems to be a real view held by serious people that your OpenAI shares will soon be tradable for moons and galaxies. This includes eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander and many more. According to them, property rights will survive an AI singularity event and soon economic growth is going to make it possible for individuals to own entire galaxies in exchange for some AI stocks. It follows that we should now seriously think through how we can equally distribute those galaxies and make sure that most humans will not end up as the UBI underclass owning mere continents or major planets.
I don't think this is a particularly intelligent view. It comes from a huge lack of imagination for the future.
Property rights are weird, but humanity dying isn't
People may think that AI causing human extinction is something really strange and specific to happen. But it's the opposite: humans existing is a very brittle and strange state of affairs. Many specific things have to be true for us to be here, and when we build ASI there are many preferences and goals that would see us wiped out. It's actually hard to imagine any coherent preferences in an ASI that would keep humanity around in a recognizable form.
Property rights are an even more fragile layer on top of that. They're not facts about the universe that an AI must respect; they're entries in government databases that are already routinely ignored. It would be incredibly weird if human-derived property rights stuck around through a singularity.
Why property rights won't survive
Property rights are always held up by a level of violence and power, whether by the owner, some state, or some other organization. AI will overthrow our current system of power by being a much smarter and much more powerful entity than anything that preceded it.
Could you imagine, for example, that an AI CEO who somehow managed to align an AI to himself and his intents would step down if the board pointed out it legally had the right to remove him? The same would be true if the ASI was unaligned but the board presented the AI with some piece of paper that stated that the board controlled the ASI.
Or think about the incredibly rich but militarily inferior Aztec civilization. Why would the Spanish not just use their power advantage to simply take their gold? Venezuela, on some estimates, has the biggest oil reserves, but no significant military power. In other words, if you have a whole lot of property that you "own" but somebody else has much more power, you are probably going to lose it.
Property rights aren't enough
Even if we had property rights that an AI nominally respected, advanced AI could surely find some way to get you to sign away all your property in some legally binding way. Humans would be far too stupid to be even remotely equal trading partners. This illustrates why it would be absurd to trust a vastly superhuman AI to respect our notion of property and contracts.
What if there are many unaligned AIs?
One might think that if there are many AIs, they might have some interest in upholding each other's property rights. After all, countries benefit from international laws existing and others following them; it's often cheaper than war. So perhaps AIs would develop their own system of mutual recognition and property rights among themselves.
But none of that means they would have any interest in upholding human property rights. We wouldn't be parties to their agreements. Dogs pee on trees to mark their territory; humans have contracts; ASI will have something different.
Why would they be rewarded?
There's no reason to think that a well-aligned AI, one that genuinely has humanity's interests at heart, would preserve the arbitrary distribution of wealth that happened to exist at the moment of singularity.
So why do the people accelerating AI expect to be rewarded with galaxies? Without any solid argument for why property rights would be preserved, the outcome could just as easily be reversed, where the people accelerating AI end up with nothing, or worse.
Conclusion
I want to congratulate these people for understanding something of the scale of what's about to happen. But they haven't thought much further than that. They're imagining the current system, but bigger: shareholders becoming galactic landlords, the economy continuing but with more zeros.
That's not how this works. What's coming is something that totally wipes out all existing structures. The key intuition about the future might be simply that humans being around is an incredibly weird state of affairs. We shouldn't expect it to continue by default.