When considering writing a hypothetical apostasy or steelmanning an opinion I disagreed with, I looked around for something worthwhile, both for me to write and for others to read. Yvain/Scott has already steelmanned Time Cube, which cannot be beaten as an intellectual challenge, but probably didn't teach us much of general use (except at interesting dinner parties). I wanted something hard, but potentially instructive.

So I decided to steelman one of the anti-sacred cows (sacred anti-cows?) of this community, namely inefficiency. It was interesting to find that it was a little easier than I thought: there are a lot of arguments already out there (though they generally don't come out explicitly in favour of "inefficiency"); it was simply a question of collecting them, stretching them beyond their domains of validity, and adding a few rhetorical tricks.

 

The strongest argument

Let's start strong: efficiency is the single most dangerous thing in the entire universe. Then we can work down from that:

A superintelligent AI could go out of control and optimise the universe in ways that are contrary to human survival. Some people are very worried about this; you may have encountered them at some point. One big problem seems to be that there is no such thing as a "reduced impact AI": if we give a superintelligent AI a seemingly innocuous goal such as "create more paperclips", then it would turn the entire universe into paperclips. Even if it had a more limited goal such as "create X paperclips", it would turn the entire universe into redundant paperclips, into methods for counting the paperclips it has, or into methods for defending the paperclips it has - all because these massive transformations allow it to squeeze just a little bit more expected utility from the universe.

The problem is one of efficiency: of always choosing the maximal outcome. The problem would go away if the AI could be content with almost accomplishing its goal, or with being almost certain that its goal was accomplished. Under those circumstances, "create more paperclips" could be a viable goal. It's only because a self-modifying AI drives towards efficiency that we have the problem in the first place. If the AI accepted being inefficient in its actions, even a little bit, the world would be much safer.
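To make the maximiser/satisficer contrast concrete, here is a minimal sketch (my own toy illustration, not from any AI safety source; the plans and utility numbers are invented, and nothing this simple would constrain a real superintelligence):

```python
# A toy contrast between a maximiser and a satisficer.
# All plans and numbers are invented for illustration.

plans = [
    ("run the existing factory",           1_000),   # innocuous
    ("build ten more factories",          50_000),   # disruptive
    ("convert the solar system to clips", 10**20),   # catastrophic
]

def maximiser(plans):
    # Always picks the plan with the highest expected number of
    # paperclips, however extreme the required transformation.
    return max(plans, key=lambda p: p[1])

def satisficer(plans, enough):
    # Settles for the first plan that is "good enough", so the
    # drive towards extreme outcomes never kicks in.
    for plan in plans:
        if plan[1] >= enough:
            return plan
    return plans[-1]

print(maximiser(plans))           # ('convert the solar system to clips', ...)
print(satisficer(plans, 1_000))   # ('run the existing factory', 1000)
```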

So the first strike against efficiency is that it's the most likely thing to destroy the world, humanity, and everything of worth and value in the universe. This could possibly give us some pause.

The measurement problem

The principal problem with efficiency is the measurement problem. In order to properly maximise efficiency, we have to measure how well we're doing. So we have to construct some system of measurement, and then we maximise that.

And the problem with formal measurement systems is that they're always imperfect. They're almost never exactly what we really want to maximise. First of all, they're constructed from the map, not the territory, so they depend on us having a perfect model of reality (little-known fact: we do not, in fact, have a perfect model of reality). This can have dramatic consequences - see, for instance, the various failures of central planners (in governments and in corporations) when their chosen measurement scale turned out not to correspond to what they truly wanted.

This can happen if we mix up correlation and causation - sadness cannot be prevented by banning frowns. But it can also happen when a genuine causal link stops holding in new circumstances - exercise can prevent sadness, but only up to a point. Each component of the measurement scale has a "domain of validity", a set of circumstances in which it truly corresponds to something desirable. Except that we don't know the domain of validity ahead of time, we don't know how badly the component fails outside that domain, and we have only a very hazy and approximate impression of what "desirable" is in the first place.

On that last point, there's often a mix-up between instrumental and terminal goals. Many things that are seen as "intrinsically valuable" also have great instrumental advantages (e.g. freedom of speech, democracy, freedom of religion). As we learn, we may realise that we've overestimated the intrinsic value of such a goal, and that we'd be satisfied with the instrumental advantages. This is best illustrated by looking at the past: there were periods when "honour", "reputation", or "being a man of one's word" were incredibly important and valuable goals. With the advent of modern policing, contract law, and regulation, these are far less important, and a once-critical terminal goal has been reduced to a slightly desirable human feature.

That was just a particular example of the general point that moral learning and moral progress become impossible once a measurement system has been fixed. So we'd better get it perfect the first time, or we're going in the wrong direction. And - I hope I'm not stretching your credulity too far here - we won't get it perfect the first time. Even if we allow the scale to be updated as we go along, note that this updating does not happen according to efficiency criteria (we don't have a meta-scale that provides the efficient way of updating value scales). So the most important part of safe efficiency comes from non-efficient approaches.

The proof of the imperfection of measurement systems can be found by looking through the history of philosophy: many philosophers have come up with scales of value that they thought were perfect. Then these were subjected to philosophical critiques that pointed out certain pathologies (the repugnant conclusion! the levelling-down objection! 10^100 variants of the trolley problem!). The systems' creators can choose to accept these pathologies, but they are pathologies the creators generally hadn't thought of beforehand. Thus any formal measurement system will contain unplanned-for pathologies.

Most critically, what cannot be measured (or what can only be measured badly) gets shunted aside. GDP, for instance, is well known to correspond poorly with anything of value, yet it's often targeted because it can be measured much better than the things we do care about, such as the happiness and preference satisfaction of individual humans. So the process of building a scale introduces uncountable distortions.

So efficiency relies on maximising a formal measurement system, while we know that maximising every single past formal system would have been a disaster. But don't worry - we've certainly got it right, this time.

 

Inefficient efficiency implementation

Once the imperfect, simplified, and pathology-filled measurement system has been decided upon, then comes the question of efficiently maximising it. We can't always measure exactly each component of the system, so we'll often have to approximate or estimate the inputs - adding yet another layer of distortion.

More critically, if the task is hard, it's unlikely that one person can implement it on their own. So the system of measurement must pass out of the hands of those who designed it - those who are aware of (some of) its limitations - into the hands of those who have nothing but the system to go on. They'll no doubt misinterpret some of it (adding more distortions), but, more critically, they're likely to implement it blindly, without understanding what it's for. This might be because they don't understand it, but the most likely reason is that the incentives are misaligned: they are rewarded for efficiently maximising the measurement system, not the underlying principle. The purpose of the initial measurement system has been lost.

And it's not just that institutions tend to have bad incentives (which is a given); it's that any formal measurement system is exceptionally likely to produce bad incentives. Because it offers a seemingly objective measure of what must be optimised, the temptation is exceptionally strong to just use the measure and forget about its subtleties. This reduces performance to a series of box-ticking exercises, of "teaching to the test" and other equivalents. There's no use protesting that this was not intended: it's a general trend for all formal measurement systems, when actually implemented in an organisation staffed by actual humans.

Indeed, Campbell's law (or Goodhart's law) revolves around this issue: when a measure becomes a target, it ceases to be a good measure. A formal standard of efficiency will not succeed in its goals, as it will become corrupted in the process of implementation. If it were easy to implement efficiency in a way that offered genuine gains, Campbell's law would not exist. This correlates strongly with experience as well: how often have efficiency improvements achieved their stated goals without causing unexpected losses? This almost never happens, whether they are implemented by governments or companies, individuals or institutions. Efficiency gains are never as strong as estimated ahead of time.
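To see the mechanism in miniature, here is a toy simulation in the spirit of the "teaching to the test" example (my own illustration - the essay contains no formal model, and all coefficients are invented): true value depends on two activities, but the formal measure mostly rewards the one that is easy to game, so once the measure becomes the target, the measured score rises while true value falls.

```python
# A toy Goodhart's-law simulation (illustrative only; all numbers invented).

def true_value(teaching, test_prep):
    # What we actually care about: mostly real teaching, with test
    # preparation contributing a little.
    return 1.0 * teaching + 0.2 * test_prep

def measured_score(teaching, test_prep):
    # The proxy: test scores respond to both, but test prep raises
    # them far more per hour than real teaching does.
    return 0.5 * teaching + 1.0 * test_prep

budget = 40  # hours per week to allocate between the two activities

# Before the measure becomes a target: effort split by judgement.
print(true_value(30, 10), measured_score(30, 10))   # 32.0 25.0

# After the measure becomes a target: pick the split that maximises
# the proxy. All hours flow into test prep...
prep = max(range(budget + 1),
           key=lambda p: measured_score(budget - p, p))
print(prep)                                         # 40
print(measured_score(budget - prep, prep))          # 40.0 (score up)
print(true_value(budget - prep, prep))              # 8.0  (true value down)
```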

A further problem is that once the measurement system has been around for some time, it starts to become the standard. Rather than GDP/unemployment/equality being a proxy for human utility, it becomes a target in its own right, with many people coming to see it as a goal worth maximising/minimising for its own sake. Not only has the implementation been mangled, but that mangling has ended up changing future values in pernicious directions.

Efficiency is not as efficient as it seems (a point we will return to again and again), which undermines the whole paradigm that things can be improved through measurement. Empirically, if we looked at the predictions made by efficiency advocates, we would conclude that they have failed, and that efficiency is a strange pseudo-science with far more credibility than it deserves. And in practice, it leads to wasted and pointless efforts by those implementing it.

 

A fully general counterargument

Fortunately, those espousing efficiency have a fully general counterargument. If efficiency doesn't work, the answer is... more efficiency! If efficiency falls short, then we must estimate the extent to which it falls short, analyse the implementation, improve incentives, etc... Do you see what's going on there? The solution to a badly implemented system of measurement is to add extra complications to the system, to measure even more things, then add more constraints, more boxes to tick.

The beauty of the argument is that it cannot be wrong. If anything fails, then you weren't efficient enough! Plug the whole thing into a new probability distribution, go up one level of meta if you need to, estimate the new parameters, and you're off again. Efficiency can never fail; it can only be failed. It's an infinite regress that never leads to questioning its foundational assumptions.

 

Efficiency, management, and undermining things that work

Another sneaky trick of efficiency proponents is to smuggle any improvement in under the banner of efficiency. Did some measure fail to improve outcomes? Then bring in a competent manager to oversee its implementation, with powers to put things right. If this fails, then more efficiency is needed (see above); maybe we should start estimating the efficiency of management? If this succeeds, then it is hailed as a triumph of efficiency.

But it isn't. It's likely a triumph of management. Most likely, there was no complicated cost-benefit estimate showing that good management would improve things; that it does is simply a generally known fact. There are many sensible procedures that can bring great good to organisations, or improve implementations; generally speaking, the effects of these procedures can't be properly measured, but we do them anyway. This is a triumph of anti-efficiency, not of efficiency.

In fact, efficiency often makes organisations worse, by undermining the unmeasured advantages that were causing them to function smoothly (see also the Burkean critique, below). If an organisational culture is destroyed by adherence to rigid objectives, then that culture is lost, no matter how many disasters the objectives end up causing in practice. Consider, for instance, recognition-primed decision theory, used successfully by naval ship commanders, tank platoon leaders, fire commanders, design engineers, offshore oil installation managers, infantry officers, commercial aviation pilots, and chess players. By its nature, it is inefficient (it doesn't have a proper measure to maximise, it doesn't compare enough options, etc...). So we have great performance, through inefficient means.

Yet if we insisted on efficiency (by, for instance, getting each of those professionals to fill out detailed paperwork justifying their decisions, or giving them more training in classical decision theory), we would dramatically reduce performance. As more and more experts got trapped in the new way of thinking (or of accounting for their thinking), the old expertise would wither away from disuse, and the performance of the whole field would degrade.

 

Everything else being equal...

Efficiency advocates have a few paradigmatic examples of efficiency. For instance, they set up a situation in which you can save one child for $100, or two for $50 each, conclude you should do the second, and then pat themselves on the back for being rational and kind. Fair enough.

But where in the world are these people who are standing in front of rows of children with $100 or $50 cures in their hands, seriously considering going for the first option? They don't exist; instead, the problem is built by assuming "everything else being equal". But everything else is not equal; if it were, there wouldn't be a debate. It's precisely because so many things are not equal that we can argue that, say, curing AIDS in a Ugandan ten-month-old whose mother was raped is not directly comparable to curing malaria in two Brazilian teenagers who picked it up on a trip abroad. This is a particularly egregious type of measurement problem: only one aspect of the situation (maybe the number of lives saved, maybe the years of life gained, maybe the quality-adjusted years of life gained... notice how the measure keeps getting more complex?) is deemed worthy of consideration. All other aspects of the problem are deemed unworthy of measurement, and thus ignored. And the judgement of those closest to the problem - those with the best appreciation of the whole issue - is suspect, overruled by the abstract statistics decided upon by those far away.

 

Efficiency for evul!

Now, we might want efficiency in our own pet cause, but we're probably pretty indifferent to efficiency gains for causes we don't care about, and we'd be opposed to efficiency gains to causes that are antithetical to our own. Or let's be honest, and replace "antithetical" with "evil". There are, for instance, many groups dedicated to building AGIs with (in the view of many on this list) a dramatic lack of safeguards. We certainly wouldn't want them to increase their efficiency! Especially since it's quite likely that they would be far more successful at increasing their "build an AGI" efficiency than their safety efficiency.

Thus, even if efficiency worked well, it is very debatable as to whether we want it generally spread. Just like in a prisoner's dilemma, we might want increased efficiency for us, but not for others; and the best equilibrium might be that we don't increase our own efficiency, and instead accept the status quo. If opponents suddenly start breaking out the efficiency guns, we can always follow suit and retaliate.

At this point, people might argue that efficiency, like science and knowledge itself, is a neutral force that can be used for good or evil, and that how it is used is a separate problem. But I hope that people on this list have a slightly smarter understanding of the situation than that. There are such things as information hazards. If someone publishes detailed plans for building atomic weapons or weaponising anthrax or bird flu, we don't buy the defence that "they're just providing information; it's up to others to decide how it is used". Similarly, we can't go around promoting a culture of efficiency without a clear view of the full consequences of such a culture on the world.

In practice, it seems that a general lack of efficiency culture could benefit everyone. This was the part of the essay where I was going to break out the alienation argument and start bringing out the Marxist critiques. But that proved to be unnecessary. We can stick with Adam Smith:

The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgement concerning many even of the ordinary duties of private life... But in every improved and civilized society this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it.

- Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (1776)

This is not unexpected. There are some ways of increasing efficiency that also improve the experience of the employees (Google seems to manage that). But generally speaking, efficiency is targeted at some measure that is not employee satisfaction (most likely the target is profit). When you change something without optimising for feature X, it is likely that feature X will do worse. This is partly because feature X is generally carefully constructed and low entropy, so any random change is pernicious, and partly because of limited resources: effort that moves away from X will reduce X. At the lower-to-mid level of the income scale, this pattern seems to have been followed exactly: more and more jobs are becoming lousy, even as economic efficiency rises. Indeed, I would argue that they are becoming lousy precisely because economic efficiency is rising. The number of low-income jobs with dignity is in sharp decline.

The contrast can be seen in the difference between GDP (easy to measure and optimise for) and happiness (hard to measure and optimise for). The modern economy has been transformed by efficiency drives, doubling every 35 years or so. But it's clear that human happiness has not been doubling every 35 years or so. The cult of efficiency has resulted in a lot of effort being put in inefficient directions in terms of what we truly value, with perverse results for those on lower incomes. A little less efficiency, or at least a halt to the drive for ever greater efficiency, is certainly called for.
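For reference, "doubling every 35 years" corresponds to roughly 2% compound annual growth; a quick check (my own arithmetic, not part of the original essay):

```python
# "Doubling every 35 years" implies a compound annual growth rate of
# 2**(1/35) - 1, i.e. roughly 2% per year (the familiar "rule of 70").
rate = 2 ** (1 / 35) - 1
print(f"{rate:.2%}")                         # ~2.00% per year
print(f"{0.70 / rate:.0f} years to double")  # ~35, via the rule of 70
```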

The proof can be seen in the status of different jobs. High-status employees are much more likely to have flexible schedules or work patterns, to work without micromanaging superiors, and so on. Thus, as soon as they get enough power, people move away from imposed efficiency and fight to defend their independence and their right not to be constantly measured and directed. Often, this turns out to be better for their employers or clients as well. Reducing the drive towards efficiency results in outcomes that are better for everyone.

 

Self-improvement, fun, and games

Let's turn for a moment from the large scale to the small. Would you want more efficiency in your life? Many people on this list have made (or claimed) great improvements through improved efficiency. But did they implement these after a careful cost-benefit analysis, keeping careful track of their effects, and only making the changes that could be strictly justified? Of course not: most of the details of the implementation were worked out through personal judgement, honed through years of (inefficient) experience (no one tried hammering rusty nails into their hands to improve concentration - and not because of a "Rusty nail self-hammering and concentration: a placebo-controlled randomised trial" publication).

How do we know most of the process wasn't efficiency-based? For the same reason that it's so hard to teach a computer to do anything subtle: most of what we do is implemented by complicated systems we do not consciously control. "Efficiency" is a tiny conscious tweak that we add to a process that relies on massive unconscious processes, as well as skills and judgements that we developed in inefficient ways. Those who have tried to do more than that - for instance, those who have tried to use an explicit utility function as their decision criterion - have generally failed.

For example, imagine you were playing a game, and wanted to optimise your play. One solution is to program a complicated algorithm to spit out the perfect moves, guaranteeing your victory. But this would be boring; the game is no longer a challenge. Actually, what we wanted to optimise was fun. We could try to measure this (see the fully general counterargument above), but the measure would certainly fail, because we'd forget to include challenge, or camaraderie, or long-term replayability, or whatever. It's the usual problem with efficiency - we just can't list all the important factors. And nobody really tries. Instead, we can do what we always do: try different things (different games, different ways of playing, etc...), get a feel for what works for us, and gradually improve our playing experience, without needing efficiency criteria. Once again, this demonstrates that great improvements are possible without their being "efficiency gains".

Many people take a certain vicarious pleasure in seeing a project fail if it was over-efficient, over-marketed, perfectly-designed-to-exploit-common-features. Bland, targeted Hollywood movies are like this, as are some sports teams; the triumph of the quirky, of the spontaneous, of the unexpected underdog, is something we value and enjoy. By definition, a complicated system that measures the "allowable spontaneity and underdog triumph" is not going to give us this enjoyment. Efficiency can never capture our many anti-efficiency desires - meaning it can never capture our desires - and optimising it would lose us much of what we value.

 

Burke, society, and co-evolution

Let's get Burkean. One of Burke's key insights was that the power and effectiveness of a given society are not found only in its explicit rules and resources. A lot of the strength is in the implicit organisation of society - in institutional knowledge and traditions. Constitutional rules about freedom of expression are of limited use without a strong civil society that appreciates freedom of expression and pushes back at attempts to quash or undermine it. The legal system can't work efficiently without a culture of law-abidingness among most citizens. People have set up their lives, their debts, their interactions, in ways that best benefit themselves, given the social circumstances they find themselves in. Thus we should suspect that there is a certain "wisdom" in the way that society has organised itself; a certain resilience and adaptation. Countries like the US have so many laws we don't know how to count them; nevertheless, the country continues to function because we have reached an understanding as to which laws are enforced in which circumstances (so that the police investigate murder with more assiduity than suspected jaywalking, for instance). Without this understanding, neither the population nor the police could do anything, paralysed by uncertainty as to what was allowed and what wasn't. And this understanding is an implicit, decentralised object: it's written down nowhere, but is contained in people's knowledge and expectations across the country.

Sweep away all these social structures in the name of efficient change, and a lot of value is destroyed - perhaps permanently. Transform the teaching profession into a chase for box-ticking and test results, and the culture of good teaching is slowly eradicated, never to return even if the changes are reversed. Consider bankers, for instance. There have been legal changes over the last few decades, but the most important changes were cultural, transforming banking from a staid and dull profession into a high-risk casino (and this change was often justified in the name of economic efficiency).

The social capital of societies is being drained by change, and the faster the change (thus, the more strictly we pursue efficiency), the less time it has to reconstitute itself. Changing absolutely everything in the name of higher ideals (as happened in early communist Russia) is a recipe for disaster.

Having been Marxist/Adam Smithist before, let's also be social conservatives for a moment. Drives for efficiency, whether direct or indirect (through capitalistic competition), tend to undermine the standard structures of society. Even without the Burkean argument above, these structures provide real value to many people. Some people appreciate being in certain hierarchies, having society organised a particular way, and the stability of relationships within it. When you create change, some of these structures are destroyed, and the new structures almost never provide equal value - at least at first. Even if you disagree with the social conservative values here, they are genuine values held by genuine people, who genuinely suffer when these structures are destroyed. And we all share these values to some extent: humans are risk-averse, so if you exchanged the positions of the average billionaire and the average beggar, the lost value for the billionaire would dwarf the gain for the beggar. A proposition to randomise the position of people in society would never pass by majority vote.

Humans are complicated beings [citation needed], with complicated desires shaped by the society we find ourselves in. Our desires, our capital (of all kinds), our habits - all these have co-evolved with our social circumstances. Similarly, our formal and informal institutions have co-evolved with the technological, social, and legal facts of our society. As has often been demonstrated, if you take co-evolved traits and "improve" one of them, the result can be disastrous. But efficiency seeks to do just that. You can best make change by making it less efficient, by slowing it down, and letting society and institutions catch up and adapt to the transformations.

 

The case for increasing inefficiency

So far, we have seen strong arguments for avoiding an increase in efficiency; but this does not translate into a case for increased inefficiency.

But it seems that such a case can be made. First of all, we must avoid a misleading status quo bias. It is extraordinarily unlikely that we are currently at the "optimum level of efficiency". Thus, if efficiency is suspect, it's just as likely that we need to decrease it as to increase it.

But we can make five positive points in favour of increased inefficiency. The first is that increased inefficiency gives more scope for developing the cultural and social structures that Burke valued and that blunt the sharp edge of changes. Such structures can never evolve if everything one does is weighed and measured.

Secondly, there is the efficiency-resilience tradeoff (see the sketch after this list). Efficient systems tend to be brittle, with every effort bent towards optimising, and none left in reserve (as it is a cardinal sin to leave any resource under-utilised). Thus when disaster strikes, there is little left over to cope, and the carefully optimised, intricate machinery is at risk of collapsing all at once. A more inefficient system, on the other hand, has more reserves, more extras to draw upon, more room to adapt.

Thirdly, increased inefficiency can allow greater scope for moral compromises. Different systems of morality can differ strongly on what the best course of action is; that means that in an "efficient" society, the standard by which efficiency is measured is the target of an all-out war. Gain control of that measure of efficiency, and you have gained control of the entire moral framework. Less efficient societies allow more compromise, by leaving aside many issues around which there is no consensus: since we know that the status quo has a large inertia, the fight to control the direction of change is less critical. We generally see it as a positive thing that political parties lack the power to completely reorganise society every time they win an election. Similarly, a less efficient society might be a less unequal society, since gains in strongly efficient societies seem to be distributed much more unevenly than in less efficient ones.

Fourthly, inefficiency adds more friction to the system, and hence more stability. People value that stability, and a bit more friction in many domains - such as financial trading - is widely seen as desirable.

Finally, inefficiency allows more exploration, more focus on speculative ideas. In a world where everything must reach the same rate of return, and do so quickly, there is much less tolerance for variety or difference in approaches. Long-term R&D investments, for one, are made principally by governments and by monopolies secure in their positions. Blue-sky thinking and tinkering are luxuries that efficiency seldom tolerates.
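Here is the sketch promised in the second point above. One standard way to formalise the efficiency-resilience tradeoff (my framing, not the essay's) is the M/M/1 queue, where the mean time a job spends in the system is 1/(mu - lambda): as utilisation approaches 100%, delays explode.

```python
# Efficiency vs resilience in queueing terms (illustrative, not from the
# essay). For an M/M/1 queue with service rate mu and arrival rate lam,
# the mean time a job spends in the system is W = 1 / (mu - lam).
# As utilisation rho = lam / mu approaches 1 - no slack anywhere -
# waiting times diverge: the most "efficient" system is the most brittle.

mu = 1.0  # service rate: one job per unit time

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    lam = rho * mu
    wait = 1 / (mu - lam)
    print(f"utilisation {rho:4.0%}: mean time in system = {wait:6.1f}")

# utilisation  50%: mean time in system =    2.0
# utilisation  99%: mean time in system =  100.0
```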

 

Conclusion

I hope you've taken the time to read this. Enjoyed it. Maybe while taking a bath, or listening to soft music. Savoured it, or savoured the many mistakes within it. That it has added something to your day, to your wisdom and understanding. That you started it at one point, then grew bored, then returned later, or not. But, above all else, that you haven't zoomed through it, seeking key ideas, analysing them and correcting them in a spirit of... you know. That thing. That "e"-word.

 

Real conclusion

I learnt quite a few things in the course of writing this apostasy, which was the point. The most valuable insight: the worth of "efficiency" is critically dependent on what improvements get counted under that heading - and it's not always clear, at all. We have a general tendency to label far too many improvements as efficiency gains. If someone smart applies efficiency and gets better results, was the smartness or the efficiency the key? I also think the "exploration vs exploitation" tradeoff and the various problems with strict models and blind implementation are very valid, including the effect measurement can have on expertise.

I won't critique my own apostasy; I think others will learn more from the challenge of taking it apart themselves. As to whether I believe this argument - it's an apostasy, so... Of course I do. In some ways: I just found the bits in me that agreed with what I was writing, and gave them free rein for once. Though I had to dig very deep to find some of those bits (e.g. social conservatism).

EDIT: What, no-one's taken the essay apart yet? Please go ahead!

Comments

What, no-one's taken the essay apart yet? Please go ahead!

Ok, I'll take a shot at it.

As far as I can tell, this article should be titled "why overfitting is a bad idea". You are absolutely right in saying that, lacking perfect knowledge, we are unable to produce a perfectly optimal solution to any given problem, and that any attempts to do so are bound to end in failure due to unintended consequences. But that is obsession, not efficiency.

Efficiency is a ratio of your expenditure of resources (which include money, time, oxygen, etc.) to the number of your true goals which you are able to achieve. Since none of us have perfect knowledge of our true goals, any strategy we implement must, by necessity, take uncertainty into account. This means that we must consider all the virtues you mentioned -- stability, friction, conservation, etc. -- as instrumental goals, and include them in our calculations. If we fail to do so, we risk producing a solution that is vastly inefficient: instead of helping us reach our true goals, the solution will reach some other goals which we mistakenly believe to be valuable.

The problem with standardized tests that you mentioned is a classic example of such overfitting. Our goal is to produce smarter students. We measure progress toward this goal by using a questionnaire, and we build our educational system to maximize scores on the questionnaire. After a few iterations, we find that our students get perfect scores, yet they are actually dumber than they were before we embarked on the project.

You claim that efficiency is the culprit here, but in fact, our solution turned out to be grossly inefficient. We wanted to make our students as smart as possible as quickly (and cheaply) as possible, but we failed. The reason we failed is that we placed total trust in one questionnaire: we believed that this one test is 100% accurate at predicting future student performance, whereas in reality it is (for example) only 60% accurate. Thus, a more efficient solution should've included multiple educational strategies, evaluated over several generations, with the assistance of multiple metrics. Such a solution would've been slower, but efficiency is not synonymous with speed.

Cool, we're getting somewhere ^_^ And if I accused you of using the fully general efficiency counterargument, what would you say?

The fully general efficiency counterargument does not exactly apply here. It says:

If efficiency falls short, then we must estimate the extent to which it falls short, analyse the implementation, improve incentives, etc... Do you see what's going on there? The solution to a badly implemented system of measurement is to add extra complications to the system, to measure even more things, then add more constraints, more boxes to tick.

While obtaining more accurate metrics can sometimes be good, the quest for too much accuracy can lead to the same kind of overfitting as the one I described previously. The solution, once again, is not to keep striving for more accurate metrics, but to start taking uncertainty into account.

To use the educational example again, if you launched into educational reforms knowing absolutely nothing about your students' performance, then you would most likely waste a bunch of resources, because you're operating in the dark, and your efficiency is very low. You could build up some metrics, such as standardized tests, that will allow you to estimate the performance of each student; but because these metrics are merely estimates, if you focus solely on optimizing the metrics you will most likely waste a bunch of resources (as per my previous comment).

Yes, you could collect increasingly more detailed metrics. You could begin collecting information about each student's genetics, diet, real-time GPS coordinates, etc. etc. However, these metrics are not free. The time, money, and CPU cycles you are spending on them could be spent on something else, and that something else would most likely get you a better payoff. In fact, at the extreme end of the spectrum, you could end up in a situation where all of your resources are going toward collecting and analyzing terabytes of data; and none of your resources are actually going toward teaching the students. This is not efficiency, this is waste.

Remember, efficiency is not defined as "measuring everything about your problem domain as accurately as possible", but rather as something like, "solving as many of your true goals as possible using as few resources as possible, while operating under uncertainty regarding your true goals, your resources, and your performance".

Good ^_^

Did you notice that the section criticising efficiency fully general counterarguments... was itself a fully general counterargument?

Hmm, no, I don't see that. If anything, that section is more of a straw man. It cautions against excessive obsession with collecting data - which can be a real problem - but it assumes that collecting data is all that efficiency is about (as opposed to actually achieving your true goals).

The modern economy has been transformed by efficiency drives, doubling every 35 years or so. But it's clear that human happiness has not been doubling every 35 years or so.

I don't know, is it? You just told us that happiness is nearly impossible to measure, so how do you know whether it increased or decreased, and by how much?

stretching them beyond their domains of validity, and adding a few rhetorical tricks.

I don't think it's a steelman if you extend the argument in a way that you don't actually agree with.

Isn't that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can't be salvaged.

Once you've steelmanned, there should still be something that you disagree with. Otherwise you're not steelmanning, you're just making an argument you believe in.

Part of the point of steelmanning, as I understand it, is to see whether there is a bit that can't be salvaged. If you correct the unnecessary flaws and find that the strengthened argument is actually correct (and, ostensibly, change your mind), it seems appropriate to still call that process steelmanning. Or rather, even if it's not appropriate, people seem to keep using it like that anyway.

If you take a position on virtually any issue that's controversial or interesting, there will be weaknesses to your position. Actual weaknesses. I thought the purpose of steelmanning was to find and acknowledge those weaknesses, not merely give the appearance of acknowledging weaknesses. If that's not right, then I think we need a new word for the latter concept because that one seems more useful and truth seeking. If you're stretching things beyond the domains of validity and using tricks, it sounds awfully like you're setting up straw men, at the very least in your own mind. Seems more debate club than rationality.

A much better phrasing of what I was thinking.

But I think Kaj's approach has some merit as well - we should find a name for "extracting the best we can from opposing arguments".

Yes, it would be a suit of armour stuffed with straw. On the other hand, I find nothing to disagree with in the actual arguments presented for inefficiency. This article could stand on its own without the steelman framing. All of its arguments can also be found elsewhere with their authors standing behind them instead of beside them.

  • Taleb has pointed out the efficiency-resiliency tradeoff.

  • The efficiency of kidnapping people because they are made of atoms we can put to better use has frequently been frowned on here.

  • The danger of effective government of whatever stripe has been commented on widely (fictionally in Frank Herbert's stories of Jorj X. McKie of the Bureau of Sabotage), even if it isn't a mainstream idea.

  • People seem to acknowledge the practical impossibility of building an actual global utility function.

  • That it becomes a totalitarian morality admitting no revision has been pointed out e.g. by C.S. Lewis. I have in mind where he says that every self-professed new morality turns out to be merely an elevation of one of the many components of what he calls the Tao into the only principle.

In a humorous vein, I recall Mark Twain's story of the old woman whose health was failing:

So I said she must stop swearing and drinking, and smoking and eating for four days, and then she would be all right again. And it would have happened just so, I know it; but she said she could not stop swearing, and smoking, and drinking, because she had never done those things. So there it was. She had neglected her habits, and hadn't any. Now that they would have come good, there were none in stock. She had nothing to fall back on. She was a sinking vessel, with no freight in her to throw over [to] lighten ship withal.

Well, as a hypothetical apostasy then? And I'm pretty sure you can give arguments you don't agree with a steelman (see Yvain's steelmanning of Time Cube).

Hmmm... I'll have a go. One response is that the "fully general counterargument" is a true counterargument. You just used a clever rhetorical trick to stop us noticing that.

If what you are calling "efficiency" is not working for you, then you are - ahem - just not being very efficient! More revealingly, you have become fixated on the "forms" of efficiency (the metrics and tick boxes) and have lost track of the substance (adopting methods which take you closer to your true goals, rather than away from them). So you have steelmanned a criticism of formal efficiency, but not of actual efficiency.

So you have steelmanned a criticism of formal efficiency, but not of actual efficiency.

Now we're getting somewhere :-)

Yay, I get another chance to post one of my favorite quotes!

There is nothing so useless as doing efficiently what should not be done at all.

-- Peter Drucker

Hi,

I'm currently writing a guide on "how to use steelmanning". I've got a few strategies that I apply to force myself to find the best version of others' arguments, but I would like to know if you have some? (For example: "trying to explain the argument to someone else, to help convince myself that it's an argument that can be supported".)

N. M. nicomicaux@gmail.com

PS: very interesting post btw

I'd distill this into two valid admonitions, neither of which is really an objection to efficiency properly defined.

The first is "Beware efficiently achieving the wrong thing," covering the strongest argument and the measurement problem. I think this covers most objections to 'efficiency' in the wild; the protesters I once saw who wanted a company to less efficiently pursue profit probably really wanted it to more efficiently pursue worker happiness.

The second is "Beware cargo cults of efficiency." All that stuff about adding more and more measurements and throwing out RPDT is attacking a cargo-cult definition of efficiency.

The truth is that efficiency correctly defined (as by Bugmaster, "a ratio of your expenditure of resources (which include money, time, oxygen, etc.) to the number of your true goals which you are able to achieve") really has a valid fully general counterargument in its defense. That is, if you are goal-directed and resource-bound, you want to be as efficient as possible. Complaining about this is like... I don't know... complaining about the patent office's fully general counterargument to all the perpetual motion machines you send in?

I think there is a third dimension here, which I would distill as:

"if you are not a unified agent but a congress, you should worry about politics, compromise, and game theory, not just efficiency."

I actually like this post and agree with most of the points you make. I'm not talking about the meta points about steelmanning and rhetorical tricks.

The obvious and clearly stated bias helped me to better insights than most articles that claim true understanding of anything.

I'm not sure whether this is due to increased attention to weak arguments or a greater freedom to ignore weak arguments as they are probably not serious anyways.

Can it be both? Was that effect intentional?

I would read a "Steelmanning counterintuitive claim X" series.

Interesting. Glad it seems to have given some new understanding!

But please believe me, though a lot of the individual points are very valid, I could shred my central thesis entirely.

The essay would probably be improved by starting with some form of definition of inefficiency.

From what perspective would it be improved? Because there's a reason there's no definition in the essay...

For me it's very difficult to follow the text without asking myself: "What are you arguing at the moment? What's the position against which you are arguing?"

You're starting to see the weakest point of the whole essay ^_^


But that makes it hard to critique. "The strongest argument" was not a very strong argument at all, which put the rest of the article on very shaky ground.


In loving memory

The Enlightenment

circa 1650 -- 2014

The Enlightenment passed away in its home, reasoned discussion between curious minds, this morning after a successful suicide attempt. The philosophical tradition was approximately three and a half centuries old. The movement had a rich and varied career in government, science, technology, medicine, the arts, and indeed an overwhelmingly large majority of the advancements that have ever successfully made life on Earth better. Although only middle-aged for a body of ideas, the philosophy had become increasingly depressed and self-critical over its perceived shortfalls and pathologies over past decades, and sadly has finally succeeded in criticizing itself out of existence.

The Enlightenment is survived by religion, common sense, instinct, mysticism, The Way We've Always Done Things Round Here, and Shut Up And Learn Your Place. Flowers, cards, and other symbolic gestures that in fact don't help anyone can be sent to the address below...

[This comment is no longer endorsed by its author]