All of Gentzel's Comments + Replies

My model of the problem boils down to a few basic factors:

  1. Attention competition prompts speed, and rewards some degree of imprecision and controversy with more engagement.
  2. It is difficult to comply with many costly norms while also having significant output and winning attention competitions.
  3. There is debate over which norms should be enforced, and while getting the combination of norms right is positive-sum overall, different norms favor different personalities in competition.
  4. Just purging the norm breakers can create substantial groupthink if the norm breakers dispropor
... (read more)

I am not sure that is actually true. There are many escalatory situations, border clashes, and mini-conflicts that could easily lead to far larger scale war, but don't, due to the rules and norms that military forces impose on themselves and that lead to de-escalation. Once there is broader conflict between large organizations, though, then yes, you often do need a treaty to end it.

Treaties don't work on decentralized insurgencies though, and hence forever wars: agreements can't be credibly enforced when each fighter has their own incentives and veto power. This is an area where norm spread can be helpful, and I do think online discourse is currently far more like warring groups of insurgents than warring armies.

Why would multi-party conflict change the utility of the rules? It does change the ease of enforcement, but that's the reason to start small and scale until the advantages of cooperating exceed the advantages of defecting. That's how lots of good things develop where cooperation is hard.
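To make the start-small-and-scale point concrete, here is a minimal Python sketch (my own toy illustration, not part of the original comment, using a standard textbook payoff matrix and made-up strategy names like tit_for_tat and always_defect): in a one-shot game defection pays best, but as the number of repeat interactions grows, mutual cooperation overtakes defecting on a cooperator, which is roughly the crossover point at which enforcing cooperative rules becomes worth the cost.

```python
# Toy sketch (my own illustration, not from the original exchange): an
# iterated prisoner's dilemma. Defection wins a one-shot game, but once
# interactions repeat enough, cooperators out-earn defectors.

# Per-round payoffs: (my_move, their_move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds):
    """Run two strategies against each other and return their total payoffs."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

for rounds in (1, 5, 50):
    cooperate_score, _ = play(tit_for_tat, tit_for_tat, rounds)
    defect_score, _ = play(always_defect, tit_for_tat, rounds)
    print(f"{rounds} rounds: mutual cooperation {cooperate_score}, defecting on a cooperator {defect_score}")
```

Nothing in the particular numbers matters beyond the ordering of the payoffs; the takeaway is just that the crossover is a function of how many repeat interactions the rules can be scaled to cover before defectors stop gaining.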

The dominance of in-group competition seems like the sort of thing that is true until it isn't. Group selection is sometimes slow, but that doesn't mean it doesn't exist. Monopolies have internal competition problems, while companies on a competitive market do get forced to d... (read more)

4ChristianKl2y
If you look at Facebook you see a lot of in-group competition over likes, comments and generally getting attention. Dating markets are generally about in-group competition, and dating is something people care about a lot. Promotion decisions are about in-group competition. While it's true that companies in competitive markets have more pressure to reduce internal zero-sum competition, we still have moral mazes in a lot of the large and powerful corporations. On the other end of the spectrum you have, for example, Tucker Max's PR team that wants to promote a movie and feminists who want to signal their values by demonstrating. Then you have them working together to make a demonstration against the film of Tucker Max. The alliance is more conscious on the side of Tucker Max's PR team than on the other side; both "sides" are making profits and playing a game that's positive-sum for them while negative-sum for a lot of other people in society.

I don't think you are fully getting what I am saying, though that's understandable because I haven't added any info on what makes a valid enemy.

I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.

Not all war is absolute, wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflict of interest with close friends on some issues: if you can... (read more)

2Dagon2y
I think if you frame it as "every transaction and relationship has elements of cooperation and competition, so every communication has a need for truth and deception.", and then explore the specific types of trust and conflict, and how they impact the dimensions of communication, we'd be in excellent-post territory. The bounds of understanding in humans mean that we simply don't know the right balance of cooperation and competition.  So we have, at best, some wild guesses as to what's collateral damage vs what's productive advantage over our opponents.  I'd argue that there's an amazing amount of self-deception in humans, and I take a Schelling Fence approach to that - I don't understand the protection and benefit to others' self-deception and maintained internal inconsistency, so I hesitate to unilaterally decry it.  In myself, I strive to keep self-talk and internal models as accurate as possible, and that includes permission to lie without hesitation when I think it's to my advantage.

That's totally fair for LessWrong, haha. I should probably try to reset things so my blog doesn't automatically post here except when I want it to.

I agree with this line of analysis. Some points I would add:

- Authoritarian closed societies probably have an advantage at covert racing, at suddenly devoting a larger proportion of their economic pie to racing, and at artificially lowering prices to do so. Open societies probably have a greater advantage at discovery/the cutting edge and have a bigger pie in the first place (though better private sector opportunities compete up the cost of defense engineering talent). Given this structure, I think you want the open societies to keep their tech advantage, a... (read more)

2MichaelA3y
These are interesting points which I hadn't considered - thanks! (Your other point also seems interesting and plausible, but I feel I lack the relevant knowledge to immediately evaluate it well myself.)

Some of the original papers on nuclear winter reference this effect, e.g. in the abstract here about high-yield surface-burst weapons (I think this would include the sort that would have been targeted at silos by the USSR). https://science.sciencemag.org/content/222/4630/1283

A common problem with some modern papers is that they just take soot/dust amounts from these prior papers without adjusting for arsenal changes or changes in fire modeling.

This is what the non-proliferation treaty is for. Smaller countries could already do this if they wanted, as they aren't treaty-limited in the number of weapons they make, but getting themselves down the cost curve wouldn't make export profitable or desirable: they have to eat the cost of going down the cost curve in the first place, and no one that would only buy cheap nukes is going to compensate them for this. Depending on how much data North Korea got from prior tests, they might still require a lot more testing, and they certainly require... (read more)

I generally agree with this line of concern. That said, if the end-state equilibrium is that large states have counterforce arsenals and only small states have multi-megaton weapons, then I think that equilibrium is safer in terms of expected deaths, because the odds of nuclear winter are so much lower.

There will be risk adaptation either way. The risk of nuclear war may go up contingent on there being a war, but the risk of war may go down because there are lower odds of being able to keep a war purely conventional. I think that makes assessing the net ri... (read more)

Precision isn't cheap. Low-yield accurate weapons will often be harder to make than large-yield inaccurate weapons. A rich country might descend the cost curve in production, but as long as the U.S. stays in an umbrella deterrence paradigm, that doesn't decrease costs for anyone else, because we don't export nukes.

This also increases the cost for rogue states to defend their arsenals (because they are small, don't have a lot of area to hide stuff, etc.), which may discourage them from acquiring arsenals in the first place.

1cistran3y
The USA is not the only nuclear power. Other nuclear powers which begin to descend their cost curves might be tempted to export the cheaper tech, especially if the expensive precision components are not wanted by the buyer. See the nuclear tech connection between Pakistan and North Korea, but make the cost of the technology an order of magnitude smaller. Limiting the spread of cheap nuclear weapons will never become as impossible as banning firearms, but it will become harder.

I meant A. The Beirut explosion was about the yield of a mini-nuke.

2ESRogs3y
Thanks!

I could imagine unilateral action to reduce risk here being good, but not in violation of current arms control agreements. Doing that without breaking any current agreements means replacing lots of warheads with lower yields or dial yields, and probably getting more conventional long-range precision weapons. Trying to replace some sub-launched missiles with low-yield warheads was a step in that direction.

There's a trade-off between holding leverage to negotiate, and just directly moving to a better equilibrium, but if you are the U.S., the strategy sh... (read more)

I think you need legible rules for norms to scale in an adversarial game, so the rules can't be based directly on utility thresholds.

Proportionality is harder to make legible, but when lies are directed at political allies, that's clearly friendly fire or betrayal. Lying to the general public also shouldn't fly; that's indiscriminate.

I really don't think lying and censorship is going to help with climate change. We already have publication bias and hype on one side, and corporate lobbying + other lies on the other. You probably have to take another approach to get tr... (read more)

1supposedlyfun3y
Fair points.  Upon reflection, I would probably want to know in advance that the Dark Arts intervention was going to work before authorizing it, and we're not going to get that level of certainty short of an FAI anyway, so maybe it's a moot point.

It's not an antidote, just like a blockade isn't an antidote to war. Blockades might happen to prevent a war or be engineered for good effects, but by default they are distortionary in a negative direction, have collateral damage, and can only be pulled off by the powerful.

While it can depend on the specifics, in general censorship is coercive and one-sided. Just asking someone not to share something isn't censorship; things are more censorious if there is a threat attached.

I don't think it is bad to only agree to share a secret with someone if they agree to keep the secret. The info wouldn't have been shared in the first place otherwise. If a friend gives you something in confidence, and you go public with the info, you are treating that friend as an adversary at least to some degree, so being more demanding in response to a threat is proportional.

2ChristianKl3y
While there's no explicit threat attached to most asks for keeping information secret, rejecting those demands can still incur costs. In a work environment, if you violate people's demands for keeping information secret, you might lose chances of being promoted even if no explicit threat was ever made. Most censorship in China is not through explicit threats but through expectations that you will not publish certain information.

Vouchers could be in the range of competition, but if people prefer basic income to the value they can get via voucher at the same cost level, then there has to be substantial value that the individual doesn't capture to justify the voucher. School vouchers may be a case of this, since education has broader societal value.

The issue there is that U.S. R&D implicitly subsidizes the rest of the world, since we don't negotiate prices but others do. Seems like an unfortunate trade-off between the present and the future, and between here and other places, except when there is a lack of reinvestment of revenue into R&D.

1TAG3y
I made the specific point about drug costs to support the general point that things don't have to become more expensive when purchased by governments.

I agree with your point in general. In these cases, I'm specifically focusing on regulations for issues that evaporate with central coordination: 

- Government is doing the central coordinating, so overriding zoning shouldn't result in uncoordinated planning: gov will also incur the related infrastructure costs.
- If you relax zoning and room-size minimums everywhere, the minimum cost to live everywhere decreases, so no particular spot becomes disproportionately vulnerable to concentrating the negative externalities of poverty, while at the same time housing-cost-based poverty decreases everywhere.

I think I agree with the time-restricted fund idea on competitive commodities as being better than just providing the commodities, since there aren't going to be a lot of further economies-of-scale benefits.

Having competing services and basic income coming out of the same gov budget does create pressure not to run things as poorly as past gov programs. The incentives should still be aligned, because people can still choose to opt out, just like in a normal market.

On food, the outcomes shouldn't be as bad as food stamp restrictions over time not jus... (read more)

I agree that it is a huge problem if the rules can change in a manner that evaporates the fitness pressure on the services: you need some sort of pegging to stop budgets from exploding, you can't have gov outlawing competition, etc. 

I also don't have a strong opinion on how flexible the government should be here. The more flexible it is, the more people can get exactly what they want, but with less buying power, since you lose some of the benefit of constraining variance and achieving economies of scale. I do think it is helpful to ha... (read more)

I think this is a good argument in general, but idk how it does against this particular set-up.

When spending levels are pegged, and you are starting out with a budget scope similar to current social programs, some particular company or bureaucracy is only going to capture a whole market if it does a really good job, since: A: people can opt out for cash, B: people can choose different services within the system, and C: people can spend their own income on whatever they want outside the universal system.

As long as you sustain fitness pressure on the services, a... (read more)
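To illustrate the mechanism (a hypothetical sketch with made-up numbers and service names, not a description of any real program), here is a minimal Python model of A through C: each person compares the services on offer against the cash opt-out at the same pegged spending level, and a provider only retains budget if its delivered value beats the alternatives.

```python
# Hypothetical sketch of the opt-out mechanism described above: a pegged
# per-person budget, a menu of competing services, and a cash opt-out.
# All values and names (service_a, service_b) are made up for illustration.

BUDGET_PER_PERSON = 1000  # pegged spending level per person, arbitrary units

# The value each person assigns to spending their allotment through each option.
people = [
    {"service_a": 1400, "service_b": 900,  "cash": 1000},
    {"service_a": 800,  "service_b": 1200, "cash": 1000},
    {"service_a": 950,  "service_b": 980,  "cash": 1000},
]

def choose(person):
    """Each person picks whichever option they value most, including opting out for cash."""
    return max(person, key=person.get)

choices = [choose(p) for p in people]

# A provider only "captures the market" if it wins every comparison like this;
# otherwise users, and the pegged budget attached to them, flow elsewhere.
for option in ("service_a", "service_b", "cash"):
    captured = choices.count(option) * BUDGET_PER_PERSON
    print(f"{option}: {captured} of the pegged budget")
```

The point is just that the fitness pressure in A through C comes from the comparison itself: the budget stays pegged, and a service that delivers less value per dollar than the cash opt-out simply loses its users.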

The goal is to standardize a floor, not to chop up the ceiling. People would be free to buy whatever they want if they opt out. Those that opt in benefit from central coordination with others to solve the adverse selection problem with housing, which incentivizes each local area to regulate housing to be bigger than people need in order to keep away poor people, making housing more expensive everywhere than it needs to be and curtailing any innovation in the direction of making housing smaller. It probably isn't a coincidence that Japan has capsule hotels, better... (read more)

I think those two cases are pretty compatible. The simple rules seem to get formed due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.

Hanson also has an interesting post on group size and conformity: http://www.overcomingbias.com/2010/10/towns-norm-best.html

In the vegan case, it is easier to explain things to a small number of people than a large number of people, even though it may still n... (read more)

Yea, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information rather than having misaligned goals.

I think capacity races to deter deployment races are the best from this perspective: develop decisive capability advantages, and all sorts of useful technology, credibly signal that you could use it for coercive purposes and could deploy it at scale, then don't abuse the advantage, signal good intent, and just deploy useful non-coercive applications. The development and deployment process basically becomes an escalation ladder itself, where you can choose to stop at any point (though you still need to keep people employed/trained to sustain credibili... (read more)

Basically, even if there are adaptations that could make an animal more resistant to venom, the incremental changes in their circulatory system required to do this are so maladaptive/harmful that they can't happen.

This is a pretty core part of competitive strategy: matching enduring strengths against the enduring weaknesses of competitors.

That said, races can shift dimensions too. Even though snake venom won the race against blood, gradual changes in the lethality of venom might still cause gradual adaptive changes in the behavior of so... (read more)

Thanks, some of Quester's other books on deterrence also seem pretty interesting.

My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post again, I think I would focus less on that case, which I think rightly can be contested from a number of directions, and talk more about conditions for race deterrence generally.

Basically, if you can credibly build up the capacity to win... (read more)

While it may sound counter intuitive, I think you want to increase both hegemony and balance of power at the same time. Basically a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power you want the state to be more structurally aligned with larger and larger populations of people.

https://www.amazon.com/Narrow-Corridor-States-Societies-Liberty-ebook/dp/B07MCRLV2K

Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella... (read more)

The book does assume from the start that states want offensive options. I guess it is useful to break down the motivations for offensive capabilities. Though the motivations aren’t fully distinct, it matters if a state is intruding as the prelude to or an opening round of a conflict, or if it is just trying to improve its ability to defend itself without necessarily trying to disrupt anything in the network being intruded into. There are totally different motives too, like North Korea installing cryptocurrency miners on other countries’ comput... (read more)

2ChristianKl4y
If you are a large country like the US, you don't need to intervene manually in millions of online businesses. On a policy level you need to set up legal liability for people who use practices that put users at risk. Equifax should be liable in a way that bankrupts the company for what they did. As a result of Julia Reda's work, the EU recently decided to pay for bug bounties for important open source projects that are used a lot in its infrastructure. We should move to a world where we don't have buffer overflows due to the problems of C, and use safer languages like Rust for the lower parts of our tech stack. To the extent we have dependencies that are twenty-year-old vulnerable C code, the government should take a few billion into its hands to get them rewritten in Rust when it's widely used open-source code, or force companies through liability for breaches to rewrite their own closed-source stuff.

I suspect high-skill immigration probably helps more directly with risks other than AI, due to the potential ease of espionage with software (though some huge data sets are impractical to steal). However, as risks from AI are likely more imminent, most of the net benefit will likely be concentrated in reductions in risk there, provided such changes are done carefully.

As for brain drain, it seems to be a net economic benefit to both sides, even if one side gets further ahead in the strategic sense: https://en.wikipedia.org/wiki/Human_capital_flight

Ba... (read more)

We have lived in a multi-polar world where human alignment is a critical key to power: therefore in the most competitive systems, some humans got a lot of what they wanted. In the future with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping strategic advantage. Why Nations Fail has a lot of examples on this sort of idea.

It's also true that states aren't unified rational actors, so this sort of analysis is more of a coarse-grained descriptio... (read more)

2whpearson6y
Then you have an alignment problem. AIs should be making decisions consistent with human values. If AIs are making the world a worse place just by their existence, then SOMETHING HAS GONE VERY WRONG. That is a truism. In evolutionary history, competitive does not necessarily mean the biggest or even the smartest, though. I may be wrong, but I expect a system that needs to maintain power without an external threat to be a lot more unforgiving on autonomy. It seems that every action that might lead to an increased chance of the sovereign losing DSA would have to be forbidden and cracked down upon. With a multipolar situation you don't rock the boat too much because your country, that you like, might lose to another. Also, with a sovereign I see no chance of fixing any abuse. In a multi-polar situation (especially if we go the merge-with-AI route), future people can choose to support less abusive power structures.

The most up to date version of this post can be found here: https://theconsequentialist.wordpress.com/2017/12/05/strategic-high-skill-immigration/

At times the signal house was densely populated and a bunch of people got sick. These problems went away over time as some moved out, and we standardized better health practices (hand sanitizer freely available, people spreading out or working from their rooms if sick, etc).

I think it is better to assess personal fit for the bootcamp. There are a lot of advantages I think you can get from the program that would be difficult to acquire quickly on your own.

Aside from lectures, a lot of the program was self study, including a lot of my most productive time at the bootcamp, but there was normally the option to get help, and it was this help, advice, and strategy that I think made the program far more productive than what I would have done on my own, or in another bootcamp for that matter (I am under the impression longer bootca... (read more)

3Fluttershy7y
Yes, this is correct. You're good at socializing and very pleasant to be around, and didn't generally have problems finding pair programming partners when you wanted to work with someone. I'm shy, and couldn't even find anyone who wanted to pair program with me most days, even though I was generally interested in working with others, and often asked Jonah or other students if anyone wanted to work together.