Gentzel's Comments

The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs

I think those two cases are pretty compatible. The simple rules seem to get formed due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.

Hanson also has an interesting post on group size and conformity:

In the vegan case, it is easier to explain things to a small number of people than to a large number, even though it may still not be worth your time with small numbers. It's easier to hash out an argument with one family member than to do something your entire family will impulsively think is hypocritical during Thanksgiving.

The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs

Yeah, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information than about misaligned goals.

In Defense of the Arms Races… that End Arms Races

I think capacity races to deter deployment races are the best from this perspective: develop decisive capability advantages and all sorts of useful technology, credibly signal that you could use it for coercive purposes and could deploy it at scale, then don't abuse the advantage, signal good intent, and just deploy useful non-coercive applications. The development and deployment process basically becomes an escalation ladder itself, where you can choose to stop at any point (though you still need to keep people employed/trained to sustain credibility).

I would probably rephrase the "good guys" statement in terms of value alignment. For competitions where near-term racing risks are low, long-term racing risks are high, and the likely winner from starting race conditions earlier would be more value-aligned with your goals, racing may make sense. If initial race risks are high, or a less aligned actor is disproportionately likely to gain power, then pushing for more cooperative norms at the margin makes sense. You want to maximize your ability to foster or engineer risk-reducing cooperation before the times of highest risk, and you want to avoid forms of cooperation that increase risk.

In Defense of the Arms Races… that End Arms Races

Basically, even if there are adaptations that could make an animal more resistant to venom, the incremental changes in its circulatory system required to do this are so maladaptive/harmful that they can't happen.

This is a pretty core part of competitive strategy: matching enduring strengths against the enduring weaknesses of competitors.

That said, races can shift dimensions too. Even though snake venom won the race against blood, gradual changes in the lethality of venom might still cause gradual adaptive changes in the behavior of some animals. A good criticism of competitive strategies between states, businesses, etc. is that the repeated shifts in competitive dimensions can still result in Molochian conditions/trading away utility for victory, which may have been preventable via regulation or agreement.

In Defense of the Arms Races… that End Arms Races

Thanks, some of Quester's other books on deterrence also seem pretty interesting.

My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post again, I think I would focus less on that case, which I think can rightly be contested from a number of directions, and talk more about conditions for race deterrence generally.

Basically, if you can credibly build up the capacity to win an arms race (with significant advantages in the relevant forms of talent, natural resources, industrial capacity, etc.), then you may not even have to race. Limited development could plausibly serve to make capacity credible and capture the positive externalities of cutting-edge R&D, while avoiding actually sinking a lot of the economy into the production of destabilizing systems. By showing extreme capability in a limited sense, and credible capability to win a particular race, you may be able to deter racing if the communication of lasting advantage is credible. If lasting advantage is not credible, you may get more of a Sputnik or AlphaGo type event and galvanize competitors toward racing faster.

For global tech competition more generally, it would be interesting to investigate industrial subsidies by competing governments to see under what conditions countries attempt strategic protectionism and try to get around the WTO, and in which cases they give up a sector of competition. My prior is that protectionism is more likely when an industry is established, and that countries which could have successfully entered a sector can be deterred from doing so.

In Defense of the Arms Races… that End Arms Races

While it may sound counterintuitive, I think you want to increase both hegemony and balance of power at the same time. Basically, a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power you want the state to be more structurally aligned with larger and larger populations of people.

Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella has been good for reducing the number of possible security dilemmas between states and accordingly people are better off than they would otherwise be with more independent military forces (higher defense spending, higher war risk, etc.). There is some degree of specialization within NATO which makes it harder for states to go to war as individuals, and also makes their contribution to the alliance more vital. The more this happens at a given resource level, the more powerful the alliance will be in absolute terms, and the more power will be internally balanced against unilateral actions that conflict with some state's interests, though at some point veto power and reduced redundancy could undermine the strength of the alliance.

For technological risks, racing increases risk in the short-run between the competitors but will tend to reduce the number of competitors. In the long-run, agreeing not to race while other technologies progress increases the amount of low hanging fruit and expands the scope of competition to more possible competitors. If you think resource-commandeering positive feedback loops are not super close, there might be a degree of racing you would want earlier to establish front-runners to win and deter potential market entrants from expanding the competition during a period of high-risk low-hanging fruit. You might be able to do better yet if the near term leading competitors can reach agreement to not race, and then team up to defeat or buyout new entrants. The leaders obviously can't hold everything completely still and expect to remain leaders though, and businesses should deliver measurable tech progress if they want to avoid anti-monopoly regulation.

Anyway, basically preventing races isn't as simple as choosing not to race. Even if your goal is just to minimize risk, you either have to credibly commit a larger and larger number of actors to not defect over time as technology and know-how diffuses, or you should want more aligned competitors to win and to cooperate to slow the risky aspects of racing.
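The "choosing not to race isn't enough" point follows the standard arms-race payoff structure. Here is a minimal sketch of that logic as a two-player game (my own illustration with made-up payoff numbers, not a model from the post): racing is each actor's best response regardless of what the other does, so unilateral restraint is unstable without credible commitments.

```python
# Toy two-player race game. Payoff numbers are illustrative only; the key
# feature is that "race" dominates "restrain" for each player individually,
# even though mutual restraint is better for both than mutual racing.
payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safest outcome
    ("restrain", "race"):     (0, 4),  # the racer gains a lasting advantage
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # mutual racing: risky for both
}

def best_response(options, their_choice, me):
    """Return the option maximizing player `me`'s payoff (0 = A, 1 = B),
    holding the other player's choice fixed."""
    def payoff(mine):
        key = (mine, their_choice) if me == 0 else (their_choice, mine)
        return payoffs[key][me]
    return max(options, key=payoff)

options = ["restrain", "race"]
# Whatever B does, A's best response is to race (and symmetrically for B),
# so restraint only holds if commitments change the payoffs.
print(best_response(options, "restrain", me=0))  # race
print(best_response(options, "race", me=0))      # race
```

This is why the comment points to commitment mechanisms or aligned front-runners: something has to change the payoffs for the dominant strategy to stop being "race."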

Apologies if this wasn't clear from the post, the post was intended as a minor update to one I wrote several years ago, and I didn't expect to see it get copied over to LessWrong, haha.

The Cybersecurity Dilemma in a Nutshell

The book does assume from the start that states want offensive options. I guess it is useful to break down the motivations for offensive capabilities. Though the motivations aren't fully distinct, it matters whether a state is intruding as the prelude to or an opening round of a conflict, or whether it is just trying to improve its ability to defend itself without necessarily trying to disrupt anything in the network being intruded into. There are totally different motives too, like North Korea installing cryptocurrency miners on other countries' computers, but I guess you could analogize that to taxing territory from a foreign state without engaging its military.

The book basically argues that even if cybersecurity is your goal, a more cost-effective defense will almost always involve making intrusions for defensive purposes since it becomes prohibitively expensive to protect everything when the attacker can choose anywhere to strike.

I could see an argument that very small actors would do better to focus purely on defenses, since if their networks are small enough, it may be easier to map them and to protect everything extremely well, while it could require more talent to make useful intrusions into other networks. The larger an actor is (like a state), the more complex its systems are and the harder they are to centrally control and monitor, so presumably the more effective going on the offensive becomes as a way to counter intruders. I think states do make this calculation, and that's why they often also have smaller air-gapped systems that are easier to defend.

For defending the public though, it would be a nightmare to individually intervene in millions of online businesses, just as it would be a nightmare if the government had to post guards outside every business to prevent intrusion by foreign soldiers. When the landscape is like that, with far more vulnerabilities than adversaries, potential adversaries are a rational point of focus.
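The cost asymmetry here can be made concrete with a toy weakest-link model (my own construction, not a model from the book, with an assumed saturating protection function): a defender spreads a fixed budget across n assets, while the attacker only needs to breach the least-protected one, so defensive effectiveness collapses as the attack surface grows.

```python
import math

# Toy model: a defender with a fixed budget spreads protection evenly across
# n assets; the attacker probes the weakest one. Assume (arbitrarily) that
# per-asset protection saturates as 1 - exp(-budget_per_asset).
def defense_success_probability(budget: float, n_assets: int) -> float:
    per_asset = budget / n_assets
    return 1 - math.exp(-per_asset)

for n in [1, 10, 100, 1000]:
    p = defense_success_probability(budget=10.0, n_assets=n)
    print(f"{n:5d} assets -> weakest-link defense holds with p = {p:.3f}")
```

With one asset the defense is nearly certain to hold; with a thousand, each asset gets so little of the budget that the attacker's chosen target is almost surely soft. That scaling, rather than any specific functional form, is the point: it is why focusing on the small set of adversaries can be cheaper than hardening everything.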

Strategic High Skill Immigration

I suspect high skill immigration probably helps more directly with other risks than with AI, due to the potential ease of espionage with software (though some huge data sets are impractical to steal). However, as risks from AI are likely more imminent, most of the net benefit will likely be concentrated in reductions in risk there, provided such changes are done carefully.

As for brain drain, it seems to be a net economic benefit to both sides, even if one side gets further ahead in the strategic sense:

Basically, smart people go places where they earn more and send back larger remittances. There are some plausibly good effects on home country institutions too.

Strategic High Skill Immigration

We have lived in a multi-polar world where human alignment is a critical key to power; therefore, in the most competitive systems, some humans got a lot of what they wanted. In a future with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping strategic advantage. Why Nations Fail has a lot of examples of this sort of idea.

It's also true that states aren't unified rational actors, so this sort of analysis is more of a coarse-grained description of what happens over time: in the long run, the most competitive systems win, but in the short run smaller coalition dynamics might prevent larger states from exploiting their position of advantage to the maximal degree.

As for happiness, autonomy doesn't require having all options, just some options. The US is simultaneously very strong while also having lots of autonomy for its citizens. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism: centralized power can be compatible with increased autonomy, but you want the centralized power to be in a system which is less likely to abuse power (though all systems abuse power to some degree).

Strategic High Skill Immigration

The most up-to-date version of this post can be found here:
