I am not sure that is actually true. There are many escalatory situations, border clashes, and mini-conflicts that could easily lead to far larger scale war, but don't, due to the rules and norms that military forces impose on themselves and that lead to de-escalation. Once there is broader conflict between large organizations though, then yes, you often do need a treaty to end it.
Treaties don't work on decentralized insurgencies though, and hence forever wars: agreements can't be credibly enforced when each fighter has their own incentives and veto power. This is an area where norm spread can be helpful, and I do think online discourse is currently far more like warring groups of insurgents than warring armies.
Why would multi-party conflict change the utility of the rules? It does change the ease of enforcement, but that's the reason to start small and scale until the advantages of cooperating exceed the advantages of defecting. That's how lots of good things develop where cooperation is hard.
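The cooperate-until-it-pays logic above can be sketched with a standard iterated prisoner's dilemma. This is just an illustration with conventional (made-up) payoff numbers, not a model of any specific conflict: one-shot defection pays more, but against a reciprocating partner over enough rounds, cooperation wins.

```python
# Prisoner's dilemma payoffs: my payoff given (my move, their move).
# Values are the conventional illustrative ones, not from any source.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def total_payoff(my_strategy, rounds):
    """Play `rounds` rounds against a tit-for-tat partner; return my total."""
    their_move = "C"  # tit-for-tat opens by cooperating
    total = 0
    for _ in range(rounds):
        my_move = my_strategy(their_move)
        total += PAYOFF[(my_move, their_move)]
        their_move = my_move  # tit-for-tat copies my last move
    return total

always_defect = lambda _: "D"
always_cooperate = lambda _: "C"

# One interaction: defecting wins (5 vs 3).
# Ten interactions: cooperating wins (30 vs 14).
print(total_payoff(always_defect, 1), total_payoff(always_cooperate, 1))
print(total_payoff(always_defect, 10), total_payoff(always_cooperate, 10))
```

"Start small and scale" corresponds to raising the number of expected future interactions until the repeated-game column dominates the one-shot column.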
The dominance of in-group competition seems like the sort of thing that is true until it isn't. Group selection is sometimes slow, but that doesn't mean it doesn't exist. Monopolies have internal competition problems, while companies on a competitive market do get forced to d...
I don't think you are fully getting what I am saying, though that's understandable because I haven't added any info on what makes a valid enemy.
I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.
Not all war is absolute: wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflict of interest with close friends on some issues: if you can...
That's totally fair for LessWrong, haha. I should probably try to reset things so my blog doesn't automatically post here except when I want it to.
I agree with this line of analysis. Some points I would add:
-Authoritarian closed societies probably have an advantage at covert racing, at suddenly devoting a larger proportion of their economic pie to racing, and at artificially lowering prices to do so. Open societies probably have a greater advantage at discovery/the cutting edge, and have a bigger pie in the first place (though better private sector opportunities compete up the cost of defense engineering talent). Given this structure, I think you want the open societies to keep their tech advantage, a...
Some of the original papers on nuclear winter reference this effect, e.g. in the abstract here about high yield surface burst weapons (I think this would include the sort that would have been targeted at silos by the USSR). https://science.sciencemag.org/content/222/4630/1283
A common problem with some modern papers is that they just take soot/dust amounts from these prior papers without adjusting for arsenal changes or changes in fire modeling.
This is what the non-proliferation treaty is for. Smaller countries could already do this if they wanted, as they aren't treaty-limited in terms of the number of weapons they make, but getting themselves down the cost curve wouldn't make export profitable or desirable: they have to eat the cost of going down the cost curve in the first place, and no one who would only buy cheap nukes is going to compensate them for this. Depending on how much data North Korea got from prior tests, they might still require a lot more testing, and they certainly require...
I generally agree with this train of thought. That said, if the end state equilibrium is that large states have counterforce arsenals and only small states have multi-megaton weapons, then I think that equilibrium is safer in terms of expected deaths, because the odds of nuclear winter are so much lower.
There will be risk adaptation either way. The risk of nuclear war may go up contingent on there being a war, but the risk of war may go down because there are lower odds of being able to keep a war purely conventional. I think that makes assessing the net ri...
Precision isn't cheap. Low yield accurate weapons will often be harder to make than large yield inaccurate weapons. A rich country might descend the cost curve in production, but as long as the U.S. stays in an umbrella deterrence paradigm, that doesn't decrease costs for anyone else, because we don't export nukes.
This also increases the cost for rogue states to defend their arsenals (because they are small, don't have a lot of area to hide stuff, etc.), which may discourage such states from acquiring arsenals in the first place.
I could imagine unilateral action to reduce risk here being good, but not in violation of current arms control agreements. To do that without breaking any current agreements means replacing lots of warheads with lower yields or dial yields, and probably getting more conventional long-range precision weapons. Trying to replace some sub-launched missiles with low yield warheads was a step in that direction.
There's a trade-off between holding leverage to negotiate, and just directly moving to a better equilibrium, but if you are the U.S., the strategy sh...
I think you need legible rules for norms to scale in an adversarial game, so they can't be rules based directly on utility thresholds.
Proportionality is harder to make legible, but when lies are directed at political allies, that's clear friendly fire or betrayal. Lying to the general public also shouldn't fly; that's indiscriminate.
I really don't think lying and censorship are going to help with climate change. We already have publication bias and hype on one side, and corporate lobbying plus other lies on the other. You probably have to take another approach to get tr...
It's not an antidote, just like a blockade isn't an antidote to war. Blockades might happen to prevent a war or be engineered for good effects, but by default they are distortionary in a negative direction, have collateral damage, and can only be pulled off by the powerful.
While it can depend on the specifics, in general censorship is coercive and one sided. Just asking someone not to share something isn't censorship; things are more censorious if there is a threat attached.
I don't think it is bad to only agree to share a secret with someone if they agree to keep the secret. The info wouldn't have been shared in the first place otherwise. If a friend gives you something in confidence, and you go public with the info, you are treating that friend as an adversary at least to some degree, so being more demanding in response to a threat is proportional.
Vouchers could be in the range of competition, but if people prefer basic income to the value they can get via voucher at the same cost level, then there has to be substantial value that the individual doesn't capture to justify the voucher. School vouchers may be a case of this, since education has broader societal value.
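A tiny worked example of that condition, with entirely made-up numbers: if the recipient values the voucher good below its cost, equal-cost cash is better for them personally, and the voucher is only justified overall when the uncaptured social value closes the gap.

```python
# Hypothetical numbers, purely for illustration of the argument above.
voucher_cost = 100       # what the government spends either way
private_value = 80       # value the recipient gets from the voucher good
social_externality = 30  # value to others that the recipient doesn't capture

cash_total_value = voucher_cost                           # recipient captures it all
voucher_total_value = private_value + social_externality  # 80 + 30 = 110

# The recipient prefers cash (100 > 80), so revealed preference for basic
# income is evidence the voucher is wasteful UNLESS the externality term
# is large enough to flip the total comparison (110 > 100 here).
print(cash_total_value, voucher_total_value)
```

The education case is the claim that `social_externality` is substantial, which is why school vouchers might survive this test where other in-kind transfers don't.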
The issue there is that U.S. R&D implicitly subsidizes the rest of the world, since we don't negotiate prices but others do. It seems like an unfortunate trade-off between the present and the future (and between here and other places), except where there is a lack of reinvestment of revenue into R&D.
I agree with your point in general. In these cases, I'm specifically focusing on regulations for issues that evaporate with central coordination:
- Government is doing the central coordinating, so overriding zoning shouldn't result in uncoordinated planning: the government will also incur the related infrastructure costs.
- If you relax zoning and room size minimums everywhere, the minimum cost of living decreases everywhere, so no particular spot becomes disproportionately vulnerable to concentrating the negative externalities of poverty, while you simultaneously decrease housing-cost-based poverty everywhere.
I think I agree that the time-restricted fund idea is better than directly providing the commodities in the case of competitive commodities, since there aren't going to be many further economies-of-scale benefits there.
Having competing services and basic income come out of the same government budget does create pressure not to run things as poorly as past government programs. The incentives should still be aligned, because people can still choose to opt out, just like in the normal market.
On food, the outcomes shouldn't be as bad as food stamp restrictions over time not jus...
I agree that it is a huge problem if the rules can change in a manner that evaporates the fitness pressure on the services: you need some sort of pegging to stop budgets from exploding, you can't have gov outlawing competition, etc.
I also don't have a strong opinion on how flexible the government should be here. The more flexible it is, the less benefit you get from constraining variance and achieving economies of scale, but the more people can get exactly what they want (albeit with less buying power). I do think it is helpful to ha...
I think this is a good argument in general, but idk how it does against this particular set-up.
When spending levels are pegged, and you are starting out with a budget scope similar to current social programs, a particular company or bureaucracy is only going to capture a whole market if it does a really good job, since: A: people can opt out for cash, B: people can choose different services within the system, and C: people can spend their own income on whatever they want outside the universal system.
As long as you sustain fitness pressure on the services, a...
The goal is to standardize a floor, not to chop up the ceiling. People would be free to buy whatever they want if they opt out. Those who opt in benefit from central coordination with others to solve the adverse selection problem in housing, which incentivizes each local area to regulate housing to be bigger than people need in order to keep poor people away, making housing everywhere more expensive than it needs to be and curtailing any innovation in the direction of making housing smaller. It probably isn't a coincidence that Japan has capsule hotels, better...
I think those two cases are pretty compatible. The simple rules seem to form due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.
Hanson also has an interesting post on group size and conformity: http://www.overcomingbias.com/2010/10/towns-norm-best.html
In the vegan case, it is easier to explain things to a small number of people than a large number of people, even though it may still n...
Yea, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information than about misaligned goals.
I think capacity races to deter deployment races are the best from this perspective: develop decisive capability advantages, and all sorts of useful technology, credibly signal that you could use it for coercive purposes and could deploy it at scale, then don't abuse the advantage, signal good intent, and just deploy useful non-coercive applications. The development and deployment process basically becomes an escalation ladder itself, where you can choose to stop at any point (though you still need to keep people employed/trained to sustain credibili...
Basically, even if there are adaptations that could make an animal more resistant to venom, the incremental changes in its circulatory system required to get there are so maladaptive/harmful that they can't happen.
This is a pretty core part of competitive strategy: matching enduring strengths against the enduring weaknesses of competitors.
That said, races can shift dimensions too. Even though snake venom won the race against blood, gradual changes in the lethality of venom might still cause gradual adaptive changes in the behavior of so...
Thanks, some of Quester's other books on deterrence also seem pretty interesting.
My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post again, I think I would focus less on that case, which I think rightly can be contested from a number of directions, and talk more about conditions for race deterrence generally.
Basically, if you can credibly build up the capacity to win...
While it may sound counterintuitive, I think you want to increase both hegemony and balance of power at the same time. Basically, a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power, you want the state to be structurally aligned with larger and larger populations of people.
https://www.amazon.com/Narrow-Corridor-States-Societies-Liberty-ebook/dp/B07MCRLV2K
Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella...
The book does assume from the start that states want offensive options. I guess it is useful to break down the motivations for offensive capabilities. Though the motivations aren’t fully distinct, it matters whether a state is intruding as the prelude to or an opening round of a conflict, or whether it is just trying to improve its ability to defend itself without necessarily trying to disrupt anything in the network being intruded into. There are totally different motives too, like North Korea installing cryptocurrency miners on other countries’ comput...
I suspect high skill immigration probably helps more directly with other risks than with AI, due to the potential ease of espionage with software (though some huge data sets are impractical to steal). However, as risks from AI are likely more imminent, most of the net benefit will likely be concentrated in risk reductions there, provided such changes are made carefully.
As for brain drain, it seems to be a net economic benefit to both sides, even if one side gets further ahead in the strategic sense: https://en.wikipedia.org/wiki/Human_capital_flight
Ba...
We have lived in a multi-polar world where human alignment is a critical key to power: therefore in the most competitive systems, some humans got a lot of what they wanted. In the future with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping strategic advantage. Why Nations Fail has a lot of examples on this sort of idea.
It's also true that states aren't unified rational actors, so this sort of analysis is more of a coarse-grained descriptio...
The most up to date version of this post can be found here: https://theconsequentialist.wordpress.com/2017/12/05/strategic-high-skill-immigration/
At times the signal house was densely populated and a bunch of people got sick. These problems went away over time as some people moved out and we standardized better health practices (hand sanitizer freely available, people spreading out or working from their rooms if sick, etc.).
I think it is better to assess personal fit for the bootcamp. There are a lot of advantages I think you can get from the program that would be difficult to acquire quickly on your own.
Aside from lectures, a lot of the program was self study, including a lot of my most productive time at the bootcamp, but there was normally the option to get help, and it was this help, advice, and strategy that I think made the program far more productive than what I would have done on my own, or in another bootcamp for that matter (I am under the impression longer bootca...
My model of the problem boils down to a few basic factors: