For instance, it makes no sense to have a “no man left behind” policy in war, except that it’s really useful for motivational purposes. It leads to more dead people than it saves in the long term.
Do you also object on similar grounds to insurance for expensive medical conditions? "Being left behind" is a dangerous medical condition (if one involving fewer microbes than usual), and the cost of the insurance premium is implicitly part of the servicemen's pay even though there isn't a line item for it.
Your reasoning could be used to argue that servicemen shouldn't get paid at all. Aside from motivational purposes, paying the soldiers costs money and accomplishes nothing, and there's an exchange rate between money and lives, so giving them pay has a real cost in lives.
That's a more complicated case, especially assuming that the insurance is opt-in. In an ideal world that would mean that the inefficiency also acts as a tax on irrationality.
Your example of serviceman pay isn't actually that far-fetched. For instance, Finland pays conscripts barely anything, between 6 and 14 euros per day depending on rank [1]. Since the alternative is prison time [2], there's indeed not much reason to pay. This has surprisingly little impact on morale.
Slightly more complicated: https://intti.fi/paivaraha-ja-varusraha ↩︎
In practice, something like house arrest with an ankle monitor ↩︎
In an ideal world that would mean that the inefficiency also acts as a tax on irrationality.
Why is the insurance inefficient? The whole point of insurance is to spread risk; some people get out more than they pay in (or could possibly pay in), because nobody knows in advance who's going to need the expensive procedure. If insurance couldn't do this, it wouldn't be "efficient"; it would be useless.
Oops, seems like I was wrong here. The dynamic doesn't extend to insurance, at least in general. Good point.
Then my objection would be that the equivalence itself doesn't hold. Insurance is, supposedly, priced based on the actual risk. It also doesn't contain a negative feedback loop: people don't get sick more often because they pay for insurance [1]. Neither holds for a "no man left behind" policy. The rescue operations cost, in expectation, more lives than they save, and since no money is involved, the policy cannot price in that risk. Of course, the policy isn't absolute and in some cases isn't followed, when the ratio looks too bad. Then again, a medical insurance company wouldn't burn hundreds of millions on a single patient either.
Not counting counterfactually using the money for preventative measures. ↩︎
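The expectation argument above can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative assumptions, not figures from the discussion: they just show how a rescue policy can cost more lives than it saves in expectation.

```python
# Purely hypothetical, illustrative numbers: a rescue mission recovers one
# stranded soldier with some probability, but risks the whole rescue team.
p_save = 0.5        # assumed chance the mission recovers the soldier alive
team_size = 4       # assumed number of soldiers sent on the rescue
p_team_loss = 0.2   # assumed chance the entire team is lost

expected_saved = p_save * 1          # expected lives saved per mission
expected_lost = p_team_loss * team_size  # expected lives lost per mission

# With these numbers the policy loses more lives than it saves in
# expectation; the motivational benefit is what has to pay for the gap.
print(expected_saved, expected_lost)
```

Since no money changes hands, there is no premium that could be adjusted until the two sides of this ledger balance, which is the disanalogy with insurance.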
Sometimes there's a solution that's otherwise superior, but people do not like it for "irrational" reasons. This second-order effect makes the solution worse in practice. This concept is mostly a parallel to Yudkowsky's Purchase Fuzzies and Utilons Separately.
For instance, it makes no sense to have a "no man left behind" policy in war, except that it's really useful for motivational purposes. It leads to more dead people than it saves in the long term. Sometimes we waste both money and the underlying utility bought because of this, for instance when forcibly extending the lifespans of terminally ill patients. Many EA cause areas also exhibit these dynamics.
When considering such problems, it's often useful to disambiguate between optics and results. There's a recursive dependence here: the results require good-enough optics to work out. Sometimes optics can be bought more cheaply than any other marginal improvement in results. Propaganda, for instance, is remarkably effective. Be wary of Chesterton's fence; sometimes the thing is disliked for a good reason.
If you're getting good results with methods that have bad optics, it often makes sense to do so discreetly. "All publicity is bad publicity"; it creates unwanted optimization pressure. The abolition of the death penalty against public opinion is an interesting example of this. In general, democracy forces representatives into this dilemma.
Often there's also a softer alternative that will lead to similar results. For instance, vice taxes have been rather effective at reducing smoking, without invoking the image of limiting personal choice. Public opinion can also be changed, but that's a lot of work. Decades of traffic safety campaigning have clearly been quite important in shaping attitudes about seatbelts and such.
The concept itself is somewhat prone to the dynamics it describes. If you get caught doing this, you'll be (correctly) accused of deception. If you're open about it, it doesn't work, and you'll still be (incorrectly) accused of deception. This makes legibility expensive. If you're forced to buy both results and optics at the same place, it limits the viable methods.
This also applies on a personal level, but I'm reluctant to provide any examples for the above-mentioned reasons. I have written on mitigation methods in Perhaps you should suspect me as well and The Aura of A Dark Lord. These approaches might not be appropriate for you, in which case I'd suggest The Elephant in the Brain, which sadly has the potential downside of making you more aware of how you deceive others and thus worse at it.