[This is the transcript of a chat conversation I had with another member of my local rationalist meet-up, on the topics of Moloch, world government, and colonization. Lightly edited for clarity, spelling, etc. and shared with their permission. Cross-posted from Grand, Unified, Empty.]

Me: Here are some thoughts on Moloch. Moloch basically guarantees that anybody who can figure out how to successfully convert other values into economic value will out-compete the rest. So in the end, we are the paperclip maximizers, except our paperclips are dollar bills.

Scott proposes that to defeat Moloch we install a gardener, specifically a super-intelligent AI. But if you don’t think that’s going to happen, a world government seems like the next best thing. However, if we escape Earth before that happens, speed-of-light limitations will forever fragment us into competing factions impossible to garden. Therefore, we should forbid any attempts to colonize Mars or other planets until we have a world government and the technology to effectively manage such colonies under that government.
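[A toy sketch of the selection dynamic described above, with made-up growth rates; the only assumption that matters is that the strategy which converts everything into economic growth compounds faster than the strategy which doesn’t.]

```python
# Toy replicator dynamics: two strategies competing for population share.
# "Converters" turn every other value into economic output and compound faster;
# "holdouts" reserve capacity for non-economic values and compound slower.
# The growth rates are arbitrary; only their ordering matters for the conclusion.

converters, holdouts = 0.01, 0.99  # initial population shares

for _ in range(500):
    converters *= 1.05   # assumed growth factor per step
    holdouts *= 1.02     # assumed (slower) growth factor per step
    total = converters + holdouts
    converters, holdouts = converters / total, holdouts / total  # renormalize to shares

print(f"after 500 steps, converters hold {converters:.4f} of the population")
# Starting from a 1% share, converters end up with essentially everything.
```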

Them: The superorganisms in his parable only function because of… external competitive pressures. If cells didn’t need to band together to survive, they wouldn’t. If governments don’t have to fend off foreign governments, they will accumulate corruption and dysfunction.

Sort of related, I’m not persuaded by the conclusion to his parable. Won’t superintelligent AIs be subject to the same natural selective pressures as any other entity? What happens when our benevolent gardener encounters the expanding sphere of computronium from five galaxies over?

Me: Cells were surviving just fine without banding together. It was just that cells which banded together reproduced and consumed resources more effectively than those which didn’t. Similarly, I think a well constructed world government could survive just fine without competitive pressure. We haven’t necessarily found the form of that government yet, but liberal democracy seems like a decent first step.

Regarding competitive pressure on AI, he deals with that offhand by assuming that accelerating self-improvement gives an unbreakable first-mover advantage. I don’t think that’s actually true, but then I’m much less bullish on super-intelligent AI in general.

Them: It would “survive,” but we don’t want a surviving government; we want a competent, benevolent one. My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.

Me: That produces a dismal view of the universe. We are given a Sophie’s Choice of either tiling the universe in economicium in order to compete and survive, or instantiating a global gardener which inherently tends towards dystopic dysfunction.

“My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.”

This is certainly mostly true, but I’m not yet convinced it’s necessarily true.

“competitive pressures”

I think this in particular is too narrow. Hunter-gatherer bands were organizations that stayed relatively “functional”, often not due to competitive pressures with other bands, but due to pure environmental survival pressures. We probably don’t want a government that stays functional due to environmental survival pressures either, but I’m generalizing to an intuition that there are other kinds of pressure.

Them: There are other kinds of pressure, but you better be damn sure you’ve got them figured out before you quash all rivals.

Me: 100%

Them: And to be precise, yeah, there’s a second thing keeping organizations intact, and that’s the floor imposed by “so incompetent they self-destruct.” But I think they degrade to the level of the floor, at which point they are no longer robust enough to survive two crises taking place at once, so they collapse anyway.

Me: Hmm, so it becomes impossible to instantiate a long-term stable gardener of any kind, and we’re stuck tiling the universe in economicium regardless.

Them: Well I think it might be possible (in the short term at least), but you have to be cognizant of the risks before you assume removing competition will make things better. So when I imagine a one-world-government, it’s more like a coordinating body above a collection of smaller states locked in fierce competition (hopefully just economic, cultural & athletic).

Me: At the risk of clarifying something which is already clear: I was never arguing that we are ready for world government now, or should work towards that soon; I was just saying there are some things we shouldn’t do until we have a good world government. We should make sure we can garden what we have before we go buying more land.

Them: Hmm, okay, I think that’s some important nuance I was overlooking.

Me: Though perhaps that is an inherently useless suggestion, since the coordination required to not buy more land is… a global gardener. Otherwise there’s competitive advantage in getting to more land first.
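[A minimal sketch of that coordination problem as a two-player game; the payoff numbers are invented purely to illustrate the structure in which grabbing land first dominates waiting, so that without an enforcer both factions land on the jointly worse outcome.]

```python
# Hypothetical payoffs for two factions choosing between "wait" (garden first)
# and "colonize" (grab land now). The numbers are invented; only their ordering
# is meant to capture "competitive advantage in getting to more land first".

payoffs = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("wait",     "wait"):     (3, 3),  # both wait: gardened, shared expansion later
    ("wait",     "colonize"): (0, 4),  # B grabs the new land first
    ("colonize", "wait"):     (4, 0),  # A grabs the new land first
    ("colonize", "colonize"): (1, 1),  # fragmented, ungardenable expansion
}

def best_reply(opponent_move: str) -> str:
    """A's best response to B's move (the game is symmetric)."""
    return max(("wait", "colonize"),
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

for b_move in ("wait", "colonize"):
    print(f"if the other faction plays {b_move!r}, the best reply is {best_reply(b_move)!r}")
# "colonize" dominates either way, so without an enforcer both factions end up
# at the jointly worse (1, 1) outcome: the standard prisoner's-dilemma structure.
```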

Them: So it’s a fair point. I assume that any pan-global body will not be well-designed, since it won’t be subject to competitive pressures. But it’s true that you might want to solve that problem before you start propagating your social structures through the universe.

Me: I’m now imagining the parallel argument playing out in Europe just post-Columbus. “We shouldn’t colonize North America until we have a well-gardened Europe”. That highlights the absurdity of it rather well.

[Comments from LessWrong follow.]

"We shouldn't colonize mars until we have a world government"

But it would take a world government to be able to enact and enforce "don't colonize mars" worldwide.

 

On the other hand, if an AI Gardener were hard, but not impossible, and we only managed to make one after we had a thriving interstellar empire, then it could still stop the descent into Malthusianism.

“However, if we escape Earth before that happens, speed-of-light limitations will forever fragment us into competing factions impossible to garden.”

If we escape Earth before ASI, the ASI will still be able to garden the fragments.

“Sort of related, I’m not persuaded by the conclusion to his parable. Won’t superintelligent AIs be subject to the same natural selective pressures as any other entity? What happens when our benevolent gardener encounters the expanding sphere of computronium from five galaxies over?”

Firstly, if there is a singleton AI, it can use lots of error correction on itself. There is exactly one version of it, and it is far more powerful than anything else around. What’s more, the AI is well aware of these sorts of phenomena, and will move to squash any tiny traces of Moloch-ness that it spots.

If humans made multiple AIs, then there is a potential for conflict. However, the AIs are motivated to avoid conflict. They would prefer to merge their resources into a single AI with a combined utility function, but they would prefer to pull a fast one on the other AI even more. I suspect that a fraction of a percent of available resources is spent on double-checking and monitoring systems. The rest goes into an average of the utility functions.
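[A rough sketch of the utility-merging idea in this comment; the utility functions and resource amounts below are invented for illustration. The merged agent is modeled as maximizing a resource-weighted average of the two utilities.]

```python
# Rough sketch of two AIs merging into one agent that maximizes a
# resource-weighted average of their utility functions. The utilities and
# resource amounts are invented for illustration.

def u_a(outcome: float) -> float:
    return -(outcome - 2.0) ** 2   # agent A's ideal outcome is 2

def u_b(outcome: float) -> float:
    return -(outcome - 8.0) ** 2   # agent B's ideal outcome is 8

resources_a, resources_b = 3.0, 1.0          # A brings 3x the resources to the merge
w_a = resources_a / (resources_a + resources_b)
w_b = 1.0 - w_a

def merged_utility(outcome: float) -> float:
    return w_a * u_a(outcome) + w_b * u_b(outcome)

# Crude grid search over outcomes in [0, 10] for what the merged agent would pick.
best = max((i / 100 for i in range(1001)), key=merged_utility)
print(f"the merged agent optimizes for an outcome of about {best}")
# Prints roughly 3.5: a compromise weighted towards the agent with more resources.
```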

If alien AIs meet humanity’s AIs, then either we get the same value merging, or it turns out to be a lot harder to attack a star system than to defend one, in which case we get whichever stars we can reach first.

Is "world government" an explicit metaphor for alignment and empathy across more of humanity, or a literal (and wrong) belief that citizenship papers control allegiance of individuals?

The problematic competition (also, the necessary creative competition) is at least as visible in non-governmental organizations as in governments, so I don't buy the unstated premise that government is the right measure of unity for the entire thesis.


Neither. Governments are effectively defined by wielding monopolistic force, not by citizenship papers.

Monopolistic force isn't enough. To be able to enforce, you need to be able to detect the wrongdoers. You need to be able to provide sufficient punishment to motivate people into obedience. Even then, you will still get the odd crazy person breaking the rules.

Some potential rules, like “don’t specification-game this metric,” are practically unenforceable. The Soviets didn’t manage to make the number of goods on paper equal the amount in reality. It was too hard for the rulers to detect every possible trick that could make the numbers on paper go up.

You both seem to be assuming that competitive pressure from other governments is what causes current governments to be stable. However, that seems pretty unlikely to me. I doubt the US government would be significantly different even if there were no ability for other governments to compete with the US at all (e.g., no migration, no trade, no military). After all, how does the US government currently compete? Obviously with the military, but the US government isn’t becoming less corrupt to avoid being outcompeted militarily. Aside from that, migration seems to be the main way, and if anything the US government attempts to be less fit in that regard.

The forces that keep it stable are, rather, entirely internal. Similarly, a world government would be kept stable through the forces of politics, presumably some form of democracy.

“You both seem to be assuming that competitive pressure from other governments is what causes current governments to be stable.”

Current, yes, but I was explicit that I don't think this is a universal truth. As I wrote, "I think a well constructed world government could survive just fine without competitive pressure".

I definitely think it’s true of your example though. If the US government were completely isolated, then the individual states would have a much-reduced incentive to participate in that government, and a much-increased incentive to defect and try to grab more of the pie for themselves. I suspect that in that scenario the US would eventually dissolve into a collection of smaller competing states.

I think it's misleading to say the competitive pressures themselves cause stability. It's more that they provide the incentive to coordinate effectively, which is what causes stability.