Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

 


A novice asked master Banzen: “What separates the monk from the master?”

Banzen replied: “Ten thousand mistakes!”

The novice, not understanding, sought to avoid all error. An abbot observed and brought the novice to Banzen for correction.

Banzen explained: “I have made ten thousand mistakes; Suku has made ten thousand mistakes; the patriarchs of Open Source have each made ten thousand mistakes.”

Asked the novice: “What of the old monk who labors in the cubicle next to mine? Surely he has made ten thousand mistakes.”

Banzen shook his head sadly. “Ten mistakes, a thousand times each.”

The Codeless Code

Prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters... Nobody has [a care-o-meter] capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.

Nate Soares

6gjm9y
Now also posted to Less Wrong. (It hadn't yet been when Luke quoted it above.)

The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, "I see that you possess half of courage. That is good. Few achieve that."

"Half?" Fred asked, too awed to be truly offended.

"Yes," said the Wolf, "You know how to heroically defy, but you do not know how to heroically submit. How to say to another, 'You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.' And there are those in your lives wiser than you, to whom you could say that."

"But what if they're wrong?" George said.

"If they are wrong, you die," the Wolf said plainly, "Horribly. And for nothing. That is why it is an act of courage."

  • HPMOR omake by Daniel Speyer.
4MarkusRamikin9y
Nice. Where did you find that? Either Uncle Google is failing me, or I am failing Uncle Google.
5Salivanth9y
It's a comment on one of Eliezer Yudkowsky's Facebook posts. I got permission to post it here, as I thought it was worth posting.
3Flipnash9y
It was a reply to a post on Eliezer Yudkowsky's facebook.
2[anonymous]9y
I honestly cannot see how the mere existence of people wiser than myself constitutes a valid reason to turn off my brain and obey blindly. The vast majority of all historical incidences of blind obedience have ended up being Bad Ideas.

I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander's judgement over his own.

Under normal circumstances, there is of course no reason why a subordinate shouldn't be encouraged to ask why they're doing something.

I'm not sure that's the real reason a soldier, or someone in a similar position, should obey their leader. In circumstances that rely on a group of individuals behaving coherently, it is often more important that they work together than that they work in the optimal way. That is, action is coordinated by assigning one person to make the decision. Even if this person is not the smartest or best informed in the situation, the results achieved by following orders are likely to be better than by each individual doing what they personally think is best.

In less pressing situations, it is of course reasonable to talk things out amongst a team and see if anyone has a better idea. However, even then it's common for there to be more than one good way to do something. It is usually better to let the designated leader pick an acceptable solution rather than spend a lot of time arguing about the best possible solution. And unless the chosen solution is truly awful (not just worse but actively wrong) it is usually better to go along with the leader's designated solution than to go off in a different direction.

7dspeyer9y
"It can get worse, though, can't it?" Fred said, "Isn't that sort of following how people wound up working for Grindlewald?" "I am talking to you, not to those people. Have you ever come close to doing evil through excess obedience?" the Wolf asked. "We've hardly ever obeyed at all," George said. The Wolf waited for the words to sink in. "But not every act of courage is right," Fred said, "Just because someone is wiser than us doesn't seem like a reason to obey them blindly." "If one who is wiser than you tells you to do something you think is wrong, what do you conclude?" the Wolf asked patiently. "That they made a mistake," George said, as if it were obvious. "Or?" the wolf said. There was silence. The Wolf's eyes bore into the twins. It was clearly prepared to wait until they found the answer or the castle collapsed. "Or it could... conceivably... mean we've made... some kind of mistake," Fred muttered at last. "And which seems more likely?" "Wisdom isn't everything," George rallied, "maybe we know something they don't, or they got careless --" "Good things to think about," the Wolf interrupted, "but are you capable of thinking about them?" "What do you mean?" Fred asked. "Can you take seriously the idea that you might be wrong? Can you even think of it without my help?" "We'll try," George said. "There's more options, though," Fred though aloud, "We don't have to decide on our own whether we're wrong or they are -- we could talk to them. Couldn't we?" "Sometimes you can," the Wolf said, "and the benefits are obvious. Can you see the costs?" "It takes time, that we sometimes don't have" George said. "It could give you all away -- if you're trying to sneak past somebody and you start whispering, I mean," Fred said. "And it makes extra work for the leader. Overwhelming work if there are many followers," the Wolf added. "So it's another tradeoff," George said. "Now you understand. But understanding now and in this place is easy. What is hard is
0[anonymous]9y
Unfortunately, the Courage Wolf's existence proof for "people wiser than you" is nonconstructive: he has failed to give evidence that any particular person is wiser, and thus should be trusted.
1dspeyer9y
How to recognize someone wiser than you is indeed left as an exercise for the reader. And, yes, there will always be uncertainty, but you handle uncertainty in tradeoffs all the time. Are you seriously claiming the Weasley twins are the wisest characters in HPMoR?
8[anonymous]9y
They already listen to Dumbledore and McGonagall, they're already wary of Quirrell, and frankly my actual wisdom rating for Harry (as opposed to raw intelligence that might eventually become wisdom with good training) is quite low. (You know that the only statements Eliezer himself actually endorses are those made about science and those made by Godric Gryffindor, right?)
7DanielLC9y
How do you figure? The more famous ones were Bad Ideas, but that's why they were famous.
3Emile9y
Do you have evidence to back that up? Seems to me that organisations with obedient members usually outperform those whose members question every decision; the exception possibly being those organisations that depend on their (non-leader) members being creative (e.g. software development), but those are a pretty recent development.
3[anonymous]9y
No, they are not a pretty recent development at all. The historical common-case is leaders taking credit for the good thinking of their underlings. And, frankly, your underestimation of the necessary intelligent thought to run most organizations is kinda... ugh.
1Emile9y
I agree that there are (probably a lot of) cases where creative thinking from rank-and-file members helps the organization as a whole; however, my claim is that obedience also helps the organisation in other ways (coordinated action, less time spent on discussion, fewer changes of direction), and cases where the first effect is stronger than the second were rare until recently. i.e. (content warning: speculation and simplification!) you may have had medieval construction companies/guilds where low-level workers were told to Just Obey Or Else, and when they had good ideas supervisors took credit, but it's likely that if you had switched their organization to a more "democratic" one like (some) modern organisations, the organization as a whole would have performed less well. I don't have any in-depth knowledge of the history of organization, I just think that "The vast majority of all historical incidences of blind obedience have ended up being Bad Ideas" is a nice-sounding slogan but not historically true. I specifically referred to non-leader members, i.e. rank-and-file. Which is, like, the opposite of what you seem to be reading into my comment.
0[anonymous]9y
No, I was referring to the rank-and-file as well. Then we should ask someone who does. Then why did we switch, and why are our organizations more efficient in correlation with being more democratic?
0Emile9y
More education and literacy; a more complex world (required paperwork for doing anything...); more knowledge work.
-1Azathoth1239y
Truth of claim not in evidence.
2[anonymous]9y
Claim at least partially in evidence. Methinks your prior doth protest too much.
0Azathoth1239y
Then why haven't worker cooperatives replaced corporations as the main economic form?
0[anonymous]9y
Because the correct trade-off between ability to raise expansion capital via selling stock and maintaining worker control has not yet been achieved. Most current worker coops, for instance, do not have any structure for selling nonvoting stock, so they face a lot of difficulty in raising capital to expand.
0Lumifer9y
How will you recognize the "correct trade-off"?
0Azathoth1239y
How would a worker controlled coop expand? Would the new workers be given the same voting rights as the original workers? If so you have to ensure that the new workers have the same vision for how the coop should be run. Also, what do you do if market conditions require a contraction?
0[anonymous]9y
These questions are all answered in the existing literature.
0Strange79y
What about Honda?

When I was 16, I wanted to follow in my grandfather's footsteps. I wanted to be a tradesman. I wanted to build things, and fix things, and make things with my own two hands. This was my passion, and I followed it for years. I took all the shop classes at school, and did all I could to absorb the knowledge and skill that came so easily to my granddad. Unfortunately, the handy gene skipped over me, and I became frustrated. But I remained determined to do whatever it took to become a tradesman.

One day, I brought home a sconce from woodshop that looked like a paramecium, and after a heavy sigh, my grandfather told me the truth. He explained that my life would be a lot more satisfying and productive if I got myself a different kind of toolbox. This was almost certainly the best advice I’ve ever received, but at the time, it was crushing. It felt contradictory to everything I knew about persistence, and the importance of “staying the course.” It felt like quitting. But here’s the “dirty truth,” Stephen. “Staying the course” only makes sense if you’re headed in a sensible direction. Because passion and persistence – while most often associated with success – are also essential ingredients

... (read more)
0Lumifer9y
Yeah, see this :-)

"While there are problems with what I have proposed, they should be compared to the existing alternatives, not to abstract utopias."

Jaron Lanier, Who Owns the Future (page number not provided by e-reader)

5cousin_it9y
Huh? It would be more fair to compare proposals to other proposals, and existing things to other existing things.
5tut9y
Yes, compare existing proposals to existing proposals, as opposed to showing a flaw in one proposal and claiming that you have proven that it's bad when your alternative also is less than flawless.
-3[anonymous]9y
That's just an argument for letting the status quo impose the Anchoring Effect on us.
6DanielLC9y
It's an argument against the Nirvana fallacy. It's not saying that we should accept the status quo. Quite the opposite. It's saying that we should reject the status quo as soon as we have a better alternative, rather than waiting for a perfect one.
1[anonymous]9y
This depends on whether you are dealing with processes subject to entropic decay (they break apart and "die" without effort-input) or entropic growth (they optimize under their own power). For the former case, the Nirvana fallacy remains a fallacy; for the latter case, you are in deep trouble if you try to go with the first "good enough" alternative rather than defining a unique best solution and then trying to hit it as closely as possible.
3Richard_Kennaway9y
Maybe it should. That's what Chesterton's Fence is.

The version of Windows following 8.1 will be Windows 10, not Windows 9. Apparently this is because Microsoft knows that a lot of software naively looks at the first digit of the version number, concluding that it must be Windows 95 or Windows 98 if it starts with 9.

Many think this is stupid. They say that Microsoft should call the next version Windows 9, and if somebody’s dumb code breaks, it’s their own fault.

People who think that way aren’t billionaires. Microsoft got where it is, in part, because they have enough business savvy to take responsibility for problems that are not their fault but that would be perceived as being their fault.

-John D. Cook

The version of Windows following 8.1 will be Windows 10, not Windows 9. Apparently this is because Microsoft knows that a lot of software naively looks at the first digit of the version number, concluding that it must be Windows 95 or Windows 98 if it starts with 9.

Except that Windows 95's actual version number is 4.0, and Windows 98's version number is 4.1.

It seems that Microsoft has been messing with version numbers in recent years, for some unknown (and, I would suppose, probably stupid) reason: that's why Xbox One follows Xbox 360, which follows Xbox, so that Xbox One is actually the third Xbox, the Xbox with 3 in the name is the second one, and the Xbox with no number in the name is the first one. Isn't it clear?

Maybe I can't understand the logic behind this because I'm not a billionaire, but I'm inclined to think this comes from the same geniuses who thought that the design of Windows 8 UI made sense.

8ShardPhoenix9y
The programs causing the problem are reading the version name string, not the version number. Examples: https://searchcode.com/?q=if%28version%2Cstartswith%28%22windows+9%22%29
2V_V9y
But then Microsoft could just have set the new version string to "Windows9" or "Windows_9" or "Windows-9" or "Windows.9" or "Windows nine", etc., without messing with the official product name. I don't buy this was the issue.

No, this is due to their own code. A shortcut in the standard developer tools for Windows (published by Microsoft) uses 'windows 9' as a shorthand for Windows 95 and Windows 98. This is a problem of their own making.
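For illustration, the naive pattern those searchcode results turn up amounts to something like this (a minimal Python sketch of the logic only; the code in the wild is in various languages and checks whatever OS-name string its platform API reports):

```python
def is_windows_9x(os_name: str) -> bool:
    # Naive check seen in the wild: anything whose name starts with
    # "windows 9" is assumed to be Windows 95 or Windows 98.
    return os_name.lower().startswith("windows 9")

print(is_windows_9x("Windows 95"))  # True
print(is_windows_9x("Windows 98"))  # True
print(is_windows_9x("Windows 9"))   # True -- the collision a hypothetical "Windows 9" would have hit
print(is_windows_9x("Windows 10"))  # False
```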

4roystgnr9y
Microsoft got where it is, in part, by relying on the exact opposite user psychology. "What the guy is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS."
1johnlawrenceaspden9y
Crikey, how does the dumb software react to running on Windows 1?
6Luke_A_Somers9y
I am rather doubtful that a noticeable number of programs are actually capable of running on both Windows 1 and Windows 10.
0ChristianKl9y
I think the core reason is marketing. Windows 10 sounds more revolutionary than switching from 8 to 9.
0A1987dM9y
Why not “Windows Nine”? :-)

Lord Vetinari, as supreme ruler of Ankh-Morpork, could in theory summon the Archchancellor of Unseen University to his presence and, indeed, have him executed if he failed to obey.

On the other hand Mustrum Ridcully, as head of the college of wizards, had made it clear in polite but firm ways that he could turn him into a small amphibian and, indeed, start jumping around the room on a pogo stick.

Alcohol bridged the diplomatic gap nicely. Sometimes Lord Vetinari invited the Archchancellor to the palace for a convivial drink. And of course the Archchancellor went, because it would be bad manners not to. And everyone understood the position, and everyone was on their best behaviour, and thus civil unrest and slime on the carpet were averted.

-- Interesting Times, Terry Pratchett

I want to say "live and let live" about non-scientific views. But, then I read about measles outbreaks in countries where vaccines are free.

Zach Weinersmith (Twitter)

Related:

Rather than panicking about the single patient known to have Ebola in the US, protect yourself against a virus that kills up to 50,000 Americans every year. It's the flu, and simply getting the shot dramatically reduces your chances of becoming ill.

Erin Brodwin Business Insider

-1Lumifer9y
That article about the flu "forgets" to mention a rather important fact: the effectiveness of the flu vaccine is only about 60%. In particular, with this effectiveness there will be no herd immunity even if you vaccinate 100% of the population.

So? A 60% reduction in the chances of getting the flu is still orders of magnitude better than a 100% reduction in the chances of getting ebola. Also, herd immunity isn't all-or-nothing. I'd expect giving everyone a 60% effective flu vaccine would still reduce the probability of getting the flu by significantly more than 60%.

1A1987dM9y
I hear that herd immunity only really works when the percentage of people vaccinated is in the high 90s, but IANAD.
2DanielLC9y
According to the Wikipedia page on herd immunity, it seems that it generally has to be in the 80s (percent). But my point is that it's somewhat of a false dichotomy. Herd immunity is a sliding scale. Someone chose an arbitrary point to say that it happens or it doesn't happen. But there still is an effect at any size. IANAD, but I would expect a 60% reduction would still be enough for a significant amount of the disease to be prevented in the non-immune population. In fact, I wouldn't be surprised if it was higher. If you vaccinate 90% of the population, then herd immunity can't protect more than the remaining 10%.
9Lumifer9y
You can treat herd immunity as a sliding scale, but you can treat it as a hard threshold as well. In the hard threshold sense it means that if you infect a random individual in the immune herd, the disease does not spread. It might infect a few other people, but it will not spread throughout the entire (non-immunized) herd, it will die out locally without any need for a quarantine. Mathematically, you need a model that describes how the disease spreads in a given population. Plug in the numbers and calculate the expected number of people infected by a sick person. If it's greater than 1, the disease will spread, if it's less than 1, the disease will die out locally and the herd is immune.
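For illustration, plugging numbers into that threshold model looks like this (my own minimal sketch, assuming an all-or-nothing vaccine and a well-mixed population; the R0 values below are rough, commonly cited figures, not taken from the article):

```python
def effective_r(r0: float, coverage: float, efficacy: float) -> float:
    # Expected secondary infections per case when a fraction `coverage` of a
    # well-mixed population received a vaccine that fully protects a fraction
    # `efficacy` of its recipients.
    return r0 * (1.0 - coverage * efficacy)

def critical_coverage(r0: float, efficacy: float) -> float:
    # Coverage needed to push the effective R below 1 (the herd immunity threshold).
    return (1.0 - 1.0 / r0) / efficacy

print(effective_r(1.2, 1.0, 0.6))    # flu-like R0: 0.48, below 1
print(critical_coverage(1.2, 0.6))   # ~0.28: achievable
print(critical_coverage(12.0, 0.6))  # ~1.53: impossible even at 100% coverage
```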
4TheMajor9y
The spreading of diseases sounds like it would be modeled quite well using Percolation Theory, although on the applications page there is mention but no explanation of epidemic spread. The interesting thing about percolation theory is that in that model both DanielLC and Lumifer would be right: there is a hard cutoff above which there is zero* chance of spreading, and below that cutoff the chance of spreading slowly increases. So if this model is accurate there is both a hard cutoff point where the general population no longer has to worry as well as global benefits from partial vaccination (the reason for this is that people can be ordered geographically, so many people will only get a chance to infect people that were already infected. Therefore treating each new person as an independent source, as in Lumifer's expected newly infected number of people model, will give wrong answers). *Of course the chance is only zero within the model, the actual chance of an epidemic spread (or anything, for that matter) cannot be 0.
3othercriteria9y
I think percolation theory concerns itself with a different question: is there a path from starting point to the "edge" of the graph, as the size of the graph is taken to infinity. It is easy to see that it is possible to hit infinity while infecting an arbitrarily small fraction of the population. But there are crazy universality and duality results for random graphs, so there's probably some way to map an epidemic model to a percolation model without losing anything important?
3TheMajor9y
The main question of percolation theory, whether there exists a path from a fixed origin to the "edge" of the graph, is equivalently a statement about the size of the largest connected cluster in a random graph. This can be intuitively seen as the statement: 'If there is no path to the edge, then the origin (and any place that you can reach from the origin, traveling along paths) must be surrounded by a non-crossable boundary'. So without such a path your origin lies in an isolated island. By the randomness of the graph this statement applies to any origin, and the speed with which the probability that a path to the edge exists decreases as the size of the graph increases is a measure (not in the technical sense) of the size of the connected component around your origin. I am under the impression that the statements '(almost) everybody gets infected' and 'the largest connected cluster of diseased people is of the size of the total population' are good substitutes for each other.
0othercriteria9y
In something like the Erdős-Rényi random graph, I agree that there is an asymptotic equivalence between the existence of a giant component and paths from a randomly selected points being able to reach the "edge". On something like an n x n grid with edges just to left/right neighbors, the "edge" is reachable from any starting point, but all the connected components occupy just a 1/n fraction of the vertices. As n gets large, this fraction goes to 0. Since, at least as a reductio, the details of graph structure (and not just its edge fraction) matters and because percolation theory doesn't capture the idea of time dynamics that are important in understanding epidemics, it's probably better to start from a more appropriate model. Maybe look at Limit theorems for a random graph epidemic model (Andersson, 1998)?
2Douglas_Knight9y
The statement about percolation is true quite generally, not just for Erdős-Rényi random graphs, but also for the square grid. Above the critical threshold, the giant component is a positive proportion of the graph, and below the critical threshold, all components are finite.
0othercriteria9y
The example I'm thinking about is a non-random graph on the square grid where west/east neighbors are connected and north/south neighbors aren't. Its density is asymptotically right at the critical threshold and could be pushed over by adding additional west/east non-neighbor edges. The connected components are neither finite nor giant.
0Douglas_Knight9y
If all EW edges exist, you're really in a 1d situation. Models at criticality are interesting, but are they relevant to epidemiology? They are relevant to creating a magnet because we can control the temperature and we succeed or fail while passing through the phase transition, so detail may matter. But for epidemiology, we know which direction we want to push the parameter and we just want to push it as hard as possible.
0Azathoth1239y
Not quite; there are costs associated with pushing the parameter. We want to know at what point we hit diminishing returns.
5IlyaShpitser9y
How do you know there is no phase transition?
4A1987dM9y
And indeed the table you mention does show ranges rather than points. But even the low ends of those ranges are far above 60%.
0A1987dM9y
Retracted after reading Kyre's comment that what applies to measles doesn't necessarily apply to flu.
7Kyre9y
I believe this is incorrect. The required proportion of the population that needs to be immune to get a herd immunity effect depends on how infectious the pathogen is. Measles is really infectious with an R0 (number of secondary infections caused by a typical infectious case in a fully susceptible population) of over 10, so you need 90 or 95% vaccination coverage to stop it spreading - and that's why it didn't take much of a drop in vaccination before we saw new outbreaks. R0 estimates for seasonal influenza are around 1.1 or 1.2. Vaccinating 100% of the population with a vaccine with 60% efficacy would give a very large herd immunity effect (toy SIR model I just ran says starting with 40% immune reduces attack rate from 35% to less than 2% for R0 1.2). (Typo edit)
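A minimal version of that kind of toy SIR calculation (my own sketch, not Kyre's code; crude Euler integration with recovery rate 1 and a small seed of infections) gives numbers in the same ballpark:

```python
def sir_attack_rate(r0: float, immune_frac: float, i0: float = 1e-4,
                    dt: float = 0.01, steps: int = 200_000) -> float:
    # Fraction of the whole population ever infected, with `immune_frac`
    # of the population immune before the epidemic starts.
    s, i = 1.0 - immune_frac - i0, i0
    for _ in range(steps):
        new_infections = r0 * s * i * dt
        s -= new_infections
        i += new_infections - i * dt  # recoveries at rate 1
    return 1.0 - immune_frac - s

print(sir_attack_rate(1.2, 0.0))  # ~0.31, close to the ~35% quoted above
print(sir_attack_rate(1.2, 0.4))  # ~0.0004, well under 2% with 40% immune
```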
-3bramflakes9y
I feel the Ebola article makes a false comparison. We have highly competent disease control measures that keep Influenza's death toll bounded around the 50k order of magnitude per year. With Ebola, the curve still looks exponential rather than logistic - if the trend continues we'll have a 6-figure bodycount by January. A fairer comparison would be Ebola to 1918 Spanish Flu. (Oh and that isn't even taking into account that the officials have been feeding the media absolute horseshit about the "single patient" with Ebola)

Downvoted for mindless panic.

There are no measures to speak of to control the flu. It goes through the world every year and we just live with it because it's rarely fatal.

The Ebola curve is not exponential in the countries where appropriate measures were taken, Nigeria and Senegal: http://www.usatoday.com/story/news/nation/2014/09/30/ebola-over-in-nigeria/16473339/ Clearly the US can do at least as well.

While Ebola might mutate to become airborne and spread like flu, and there is a real risk of that, there is little indication of it having happened. Until then the comparison with the Spanish Flu is silly. It's not nearly as contagious.

Your linked post in the underground medic is pretty bad. The patient contracted Ebola on Sep 15, most people become contagious 8-10 days later, so the flight passengers on Sep 20 are very likely OK. There is no indication that the official story is grossly misleading. There are bound to be a few more cases showing up in the next week or so, just as there were with SARS, but with the aggressive approach taken now the odds of it spreading wide are negligible, given that Nigeria managed to contain a similar incident.

My guess is that the total number of cases with the Dallas vector will be under a dozen or so, with <40% fatalities. I guess we'll see.

3johnlawrenceaspden9y
Upvoted for the firm prediction. Confidence level?
4shminux9y
I would say 90% or so.
0shminux9y
... And it looks like I was right, if unduly pessimistic. Total new cases: 2, total new fatalities: 0. I expected at least some of the patient 0's relatives to get infected, and I did not expect the hospital's protection measures to be so bad. It looks like the strain they got there is not particularly infectious, which saved their asses.
2gsgs9y
The numbers of Ebola cases have not been exponential since mid-September; instead they have stayed almost constant at ~900 new cases per week since Sep. 14. This should have been clear to the WHO and researchers at least since mid-October. Still they publicly repeated the "exponential" forecasts, based on papers using old data. Ban Ki-moon (on 2014/10/09) and Chan (on 2014/10/14) and Aylward said it. The WHO still puts forward its containment plan based on 5000-10000 new cases in the first week of December; they haven't corrected it yet. According to Fukuda on 2014/10/23, the third meeting of the International Health Regulations Emergency Committee regarding the 2014 Ebola outbreak in West Africa (held 2014/10/22) stated that there continued to be an exponential increase of cases in Guinea, Liberia, and Sierra Leone.
2James_Miller9y
I'm far from an expert myself but unless, as you say, the experts are feeding us via the media "absolute horseshit" the expected number of U.S. deaths from Ebola is way below 50K.
3Richard_Kennaway9y
What countermeasures is that number conditional on being taken?
4James_Miller9y
What we seem to be doing but with significantly more countermeasures if the number of U.S. victims increases. Obama would suffer a massive political hit if > 1000 Americans die from Ebola and I trust that this is a sufficient condition to motivate the executive branch if things start to look like they could get out of control.
2Lumifer9y
Motivation may be necessary but it's not sufficient. The Federal Government is not exactly a shining example of competency.
-2soreff9y
Will the CDC handle Ebola like FEMA handled Katrina?

"You know, esoteric, non-intuitive truths have a certain appeal – once initiated, you’re no longer one of the rubes. Of course, the simplest and most common way of producing an esoteric truth is to just make it up."

West Hunter

2shminux9y
If it's so simple... mind making one up?

To stay young requires unceasing cultivation of the ability to unlearn old falsehoods

-- Robert Heinlein (http://tmaas.blogspot.co.uk/2008/10/robert-heinlein-quotes.html)

"Put simply, the truth about all those good decisions you plan to make sometime in the future, when things are easier, is that you probably won't make them once that future rolls around and things are tough again."

Sendhil Mullainathan and Eldar Shafir, Scarcity, p. 215

To summarize Twitter and my Facebook feed this morning: “The Ebola virus proves everything I already believed about politics.” You might find this surprising. The Ebola virus is not running for office. It does not have a policy platform, or any campaign white papers on burning issues. It doesn’t even vote. So how could it neatly validate all our preconceived positions on government spending, immigration policy, and the proper role of the state in our health care system? Stranger still: How could it validate them so beautifully on both left and right?

Megan McArdle

“Nobody supposes that the knowledge that belongs to a good cook is confined to what is or may be written down in a cookery book.” - Michael Oakeshott, "Rationalism in Politics"

"What we assume to be 'normal consciousness' is comparatively rare, it's like the light in the refrigerator: when you look in, there you are ON but what's happening when you don't look in?"

Keith Johnstone, Impro - Improvisation and the Theatre

The words out of your mouth will literally be ignored, misheard, or even contorted to the opposite of what they mean, if that’s what it takes to preserve the listener’s misconception

Scott Aaronson on why quantum computers don't speed up computations by parallelism, a popular misconception.

The misconception isn't exactly that quantum computers speed up computations by parallelism. They kinda do. The trouble is that what they do isn't anything so simple as "try all the possibilities and report on whichever one works" -- and the real difference between that and what they can actually do is in the reporting rather than the trying.

Of course that means that useful quantum algorithms don't look like "try all the possibilities", but they can still be viewed as working by parallelism. For instance, Grover's search algorithm starts off with the system in a superposition that's symmetrical between all the possibilities, and each step changes all those amplitudes in a way that favours the one we're looking for.

For the avoidance of doubt, I'm not in any way disagreeing with Scott Aaronson here: The naive conception of quantum computation as "just like parallel processing, but the other processors are in other universes" is too naive and leads people to terribly overoptimistic expectations of what quantum computers can do. I just think "quantum computers don't speed up computations by parallelism" is maybe too simple in the other direction.

[EDITED to remove a spurious "not"]
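As a concrete illustration of that amplitude picture, here is a minimal classical simulation of Grover iterations on a toy search space (my own sketch, assuming numpy; it just tracks the amplitudes directly and is not a claim about how real quantum hardware is programmed):

```python
import numpy as np

def grover_success_probability(n_items: int = 64, marked: int = 17, iterations: int = 6) -> float:
    # Start in the uniform superposition over all items.
    amps = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(iterations):
        amps[marked] *= -1.0             # oracle: flip the sign of the marked amplitude
        amps = 2.0 * amps.mean() - amps  # diffusion: reflect every amplitude about the mean
    return amps[marked] ** 2             # probability of measuring the marked item

# About (pi/4) * sqrt(64) ≈ 6 iterations pushes the success probability close to 1.
print(grover_success_probability())  # ≈ 0.997
```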

1IlyaShpitser9y
I agree that "parallelism but in other universes" is a weird phrasing. What happens with quantum computation is cancellation due to having negative probabilities. The closest classical analogue seems to me to be dynamic programming, not parallel programming -- you have a seemingly large search space that in fact can be made to reduce into a smaller search space by e.g. cleverly caching things. In other words, this is about how the math of the search space works out. If your parallelism relies on invoking MWI, then it's not "real" parallelism because MWI is observationally indistinguishable from other stories where there aren't parallel worlds.
2Azathoth1239y
Negative (and complex) amplitude. The probability is the squared modulus of the amplitude and is always positive.
1johnlawrenceaspden9y
I just don't think <-> I just think, or is this one of those American/British differences? Also, nice recursion in the grandparent.
3gjm9y
No, it's one of those right/wrong differences. I changed my mind about how to structure the sentence -- from "I don't think X is quite right" to "I think X is not quite right" -- and failed to remove a word I should have removed. (I seem to be having trouble with negatives at the moment: while typing the last sentence, my fingers attempted to add "n't" to both "should" and "have"!)
1gjm9y
Wait, American/British? I think we live within 10 miles of one another. Admittedly, I was born in the US, but I haven't lived there since I was about 4.
1johnlawrenceaspden9y
Ahh, the mysterious 'g'. Hi there. We really should have lunch sometime!
1gjm9y
Yup, 'tis I. (No, wait, I'm two letters of the alphabet off.) Yes, we should. At weekday lunchtimes I'm near the Science Park; how about you?
0johnlawrenceaspden9y
Consulting for the engineering department at the moment, but my time's my own, and I'm intrigued enough to put myself out. You choose place and time, and I'll try to be there. It may even be that we have better ways of communicating than blog comments! I am lesswrong@aspden.com, 07943 155029.
1Luke_A_Somers9y
Inserting a 'not' where it shouldn't be is not an American/British difference.
1johnlawrenceaspden9y
But is it not possible that whether it should or shouldn't be there is a matter of the dialect of the speaker?
4gjm9y
In general, of course it is. (I think "couldn't care less" / "could care less" is an example, though my Inner Pedant gets very twitchy at the latter.) But I think it's unusual to have such big differences in idiom, and I suspect they generally arise from something that was originally an outright mistake (as I think "could care less" was).
3Luke_A_Somers9y
And in particular, such a twisted usage does not fall neatly across the America/Britain divide. Especially in this particular case where it was pretty clearly an editing error.
-1[anonymous]9y
So Data can't set his phaser to NP-hard? :)

Still, it was possible that he could close in and thus block the Frenchman's blade.

No. Would he consider such a move if he did not have three ounces of fifteen-percent-alcohol purple passion in his bloodstream? No. Forget it.

Philip Jose Farmer's character, "Richard Francis Burton," The Magic Labyrinth

The chief trick to making good mistakes is not to hide them -- especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. The fundamental reaction to any mistake ought to be this: "Well, I won't do that again!" Natural selection doesn't actually think this thought; it just wipes out the goofers before they can reproduce; natural selection won't do that again, at least not as often. Animals that can learn -- learn not to make that noise, touch that wire, eat that food -- have something with a similar selective force in their brains. We human beings carry matters to a much more swift and efficient level. We can actually think that thought, reflecting on what we have just done: "Well, I won't do that again!" And when we reflect, we confront directly the problem that must be solved by any mistake-maker: what, exactly, is that? What was it about what I just did that got me into all this trouble? The trick is to take advantage of the particular details of the mess you've made, so that your next attempt

... (read more)
5Lumifer9y
Think he's a bit too enthusiastic about that X-D Making more grand mistakes in addition to my usual number doesn't look appealing to me :-/
3Stabilizer9y
I think he's implicitly restricting himself to philosophy. A "grand mistake" in philosophy has little ill effect.
9Azathoth1239y
Um, they've been known to result in up to a quarter of the world's population living under totalitarian dictatorships.
0Stabilizer9y
Fair enough. Good examples: Hegel --> Marx --> Soviet Union/China. Hegel --> Husserl --> Heidegger <---> Nazism.
2Lumifer9y
I don't know the context of the quote, but going just by the text quoted it doesn't look like this. That's a pretty severe put-down of philosophy :-D
1Emile9y
I didn't read it that way - when I read "seek out opportunities to make grand mistakes", the things I imagine are more like travel to foreign countries, try new things you're bad at, talk to people way outside your usual circle, etc.
4johnlawrenceaspden9y
Not disagreeing, but "The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves)" is weird. Why is the natural...anger? Also, is that even true for everyone? I make mistakes all the time and don't feel that, so I'm thinking he means "to publicly taking a strong position and then being made to look like a fool", which I certainly do feel. But maybe not?
2gjm9y
If it's not true for you then it isn't true for everyone. But FWIW it's somewhat true for me (though "anger" is a strong word). I get cross at how unreliable my brain is.

The winner worldview is that you have responsibility for your own life and it is irrelevant who is at fault if the people at fault can't or won't fix the problem. I've noticed over the course of my life that winners ignore questions of blame and fault and look for solutions they can personally influence. Losers blame others for their problems and expect that to produce results.

Scott Adams musing on what that woman in the Manhattan harassment video could do.

This actually clashes with the idea of heroic responsibility, a popular local notion. I guess it depends on what your values are.

6DeterminateJacobian9y
Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify all actions which intend to change others' behavior as "blame" and thus doomed to fail, just because trying to change others' behavior doesn't usually succeed for them. What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way to signal she is refusing to let the harassers win. Or if the tradeoff isn't worth it to her personally, leave as Adams suggests. She isn't making it Scott Adams's problem, she's making it the problem of anybody who actually wants it to also be their problem. That's how cooperation works, and people can be good or bad at encouraging cooperation, in completely measurable ways. Assigning irremediable blame, or refusing to encourage change at all are both losing solutions.
4somnicule9y
I don't exactly see how it clashes with heroic responsibility? "When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and blaming gravity."
0shminux9y
Because it might seem to you that you cannot change it, but if you have Eliezer's do the impossible attitude, then maybe you can.
0Azathoth1239y
I can't tell if you're misinterpreting him or if he really meant something that stupid. The problem with "doing the impossible" is that it amounts to an injunction to use all available and potentially available resources to address the problem. Of course, it's impossible to do this for every problem.
0shminux9y
I don't think anyone implied "every problem". Only the one you think is really worth the trouble. Like FAI for Eliezer (or the AI-box toy example), or the NSA spying for Snowden. The risk, of course, is that the problem might be too hard and you fail, after potentially wasting a lot of resources, including your life.
2Nornagest9y
I think I buy this line of reasoning in general, but I don't think Adams is applying it correctly in this case. If group A is doing something that makes you unhappy because group B is rewarding them for it, then it is no more "winner behavior" to go after group B than group A: in both cases you're trying to get others to fix your problems for you, by adding a negative incentive in one case and by removing a positive incentive in another. I can make sense of this in a few ways: maybe Adams thinks at some level that B has agency as a group but A doesn't. (This is, clearly, wrong.) Or maybe he thinks that you're just more likely to convince members of B than members of A, which at least isn't obviously wrong but still requires assumptions not in evidence.
0ChristianKl9y
I think taking responsibility for everything, whether or not you caused it, is exactly what heroic responsibility is about. Apart from that, Scott gets a lot in the article wrong. In particular Scott argues: That's a naive view. It's probably wrong. To the extent that Eliezer argues "Do the impossible" he doesn't argue for doing things that literally have a 0% chance of success. TDT discourages doing things with a 0% chance of success. Eliezer doesn't argue virtue ethics where it matters that you try regardless of whether you succeed. Not stopping with a naive view and actually working on the problem is something that Eliezer advocates and that's useful in cases like this. Even if it leads to questions that are even more politically incorrect than the ones Scott is asking.

if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for t

... (read more)

My greatest inspiration is a low bank balance.

Ludwig Bemelmans

3Richard_Kennaway9y
A similar thought from Heinlein: Source. I have heard both my father and my brother, professional musicians, mention the tremendous difference between professionals and amateurs. There are their differing levels of skill, of course, but the more fundamental difference is the seriousness that a professional brings to the work. There's nothing like having to put food on the table and a roof over your head, to give yourself that seriousness and get the work done, no matter what.

The humans aren't doing what the math says. The humans must be broken.

SMBC on the Ultimatum Game

2[anonymous]9y
Specifically, the human economists.
4James_Miller9y
But spherical cows of uniform density are so much easier to model.
9Lumifer9y
Scientific nirvana is spherical cows floating in vacuum under a streetlight :-D

“And therein lies the problem,” scowled the master. “Yesterday I was a fool, the week before an idiot, and last month an imbecile. Don’t show me code I might have written yesterday. Show me code as I will write it tomorrow.”

Qi at The Codeless Code

When you get to a fork in the road, take it.

(I will keep doing this. I have no shame.)

"... beware of false dichotomies. Though it's fun to reduce a complex issue to a war between two slogans, two camps, or two schools of thought, it is rarely a path to understanding. Few good ideas can be insightfully captured in a single word ending with -ism, and most of our ideas are so crude that we can make more progress by analyzing and refining them than by pitting them against each other in a winner-take-all contest."

  • Steven Pinker, on page 345 of The Sense of Style.
427chaos9y
Practically everyone is wary of false dichotomies. The trick is recognizing them. This quote doesn't help much with that.
3Tyrrell_McAllister9y
Practically everyone can be relied upon to go from "That's a false dichotomy" to "Therefore, I should be wary of it." However, being wary of false dichotomies means thinking, "That's a dichotomy. Therefore, the probability that it is false is sufficient to justify my thinking it through carefully and analytically." That is not something that practically everyone can be relied upon to do in general.
027chaos9y
I don't think the quote significantly increases the probability someone will have that thought. I think practically everyone here already has that habit of wariness. Maybe I'm wrong, typical mind fallacy, but identifying false dichotomies has always been rather automatic for me and I thought that was true for everyone (except when other biases are involved as well).

To be conscious that you are ignorant is a great step to knowledge.

Benjamin Disraeli.

But philosophers share the general human weakness for explanations of what is incomprehensible in terms suited for what is familiar and well understood, though entirely different.

Originally said by Thomas Nagel (I got it from Hofstadter and Dennett here )

This is a quote from memory from one of my professors in grad school:

Last quarble, the shaklefaxes ulugled the flurxurs. The flurxurs needed ulugled because they were mofoxiliating, which caused amaliaxas in the hurble-flurble. The shaklefaxes domonoxed a wokuflok who ulugles flurxurs, because wokuflok nuxioses less than iliox nuxioses.

  1. When did the shaklefaxes ulugle the flurxurs?
  2. Why did the shaklefaxes ulugle the flurxurs?
  3. Who did they get to ulugle the flurxurs?
  4. If you were the shaklefaxes, would you have your ulugled flurxurs? Why or why not?
  5. Wou
... (read more)

Physicists, in contrast with philosophers, are interested in determining observable consequences of the hypothesis that we are a simulation.

http://arxiv.org/abs/1210.1847 , Constraints on the Universe as a Numerical Simulation

8A1987dM9y
The LW software thinks the comma is part of the URL. Try escaping it with a backslash. Also, limits of Lorentz invariance violations from the ultra-high-energy cosmic ray spectrum are much weaker if you take into account the possibility that some of them are heavier nuclei rather than protons, as various lines of evidence suggest. There are very few solid conclusions we can draw from the experimental data we have. (This is what I am working on, BTW!)

"Information always underrepresents reality."

Jaron Lanier, Who Owns the Future? (page number not provided by e-reader)

6[anonymous]9y
What does this mean?
4Lumifer9y
Reality is always more complex than what you know of it.
4devas9y
The map is smaller than the territory? I think?
4johnlawrenceaspden9y
I bet there are big maps of small territories somewhere.
7devas9y
Physically? Maybe. information-wise? I heavily doubt it. If the map is bigger than the territory, why not go live in the map? :-/
3johnlawrenceaspden9y
Physically's easy enough, but even information-wise, I had a guide to programming the Z80 that wouldn't have fit in the addressable memory of a Z80, let alone the processor. Will that do? If not, we should probably agree definitions before debating.
3Strange79y
Would it have fit into less space than the set of possible programs for the Z80?
-2johnlawrenceaspden9y
That is a great point! I am grudgingly prepared to concede that sets are smaller than their power sets.
0khafra9y
-- Steven Kaas

Holmes: "What's the matter? You're not looking quite yourself. This Brixton Road affair has upset you."

Watson: "To tell the truth, it has," I said. "I ought to be more case-hardened after my Afghan experiences. I saw my own comrades hacked to pieces in Maiwand without losing my nerve."

Holmes: "I can understand. There is a mystery about this which stimulates the imagination; where there is no imagination there is no horror ."

  • From Conan Doyle's "A Study in Scarlet" (bold added by me for emphasis)

Chesterton's fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood. The quotation is from Chesterton’s 1929 book The Thing, in the chapter entitled "The Drift from Domesticity":

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road.

... (read more)
6fubarobfusco9y
I've seen Chesterton's quote used or misused in ways that assume that an extant fence must have some use that is both ① still existent, and ② beneficial; and that it can only be cleared away if that use is overbalanced by some greater purpose. But some fences were created to serve interests that no longer exist: Hadrian's Wall, for one. The fact that someone centuries ago built a fence to keep the northern barbarians out of Roman Britain does not mean that it presently serves that purpose. Someone who observed Hadrian's Wall without knowledge of the Roman Empire, and thus the wall's original purpose, might correctly conclude that it serves no current military purpose to England. For that matter, some fences exist to serve invidious purposes. To say "I don't see the use of this" is often a euphemism for "I see the harm this does, and it does not appear to achieve any counterbalancing benefit. Indeed, its purpose appears to have always been to cause harm, and so it should be cleared away expeditiously."

One big problem with Chesterton's Fence is that since you have to understand the reason for something before getting rid of it, if it happens not to have had a reason, you'll never be permitted to get rid of it.

9fubarobfusco9y
Good point. Some properties of a system are accidental.

"We don't know why this wall is here, but we know that it is made of gray stone. We don't know why its builders selected gray stone. Therefore, we must never allow its color to be changed. When it needs repair we must make sure to use gray stone."

"But gray stone is now rare in our country and must be imported at great expense from Dubiously Allied Country. Can't we use local tan stone that is cheap?"

"Maybe gray stone suppresses zombie hordes from rising from the ground around the wall. We don't know, so we must not change it!"

"Maybe they just used gray stone because it used to be cheap, but the local supplies are now depleted. We should use cheap stone, as the builders did, not gray stone, which was an accidental property and not a deliberate design."

"Are you calling yourself an expert on stone economics and on zombie hordes, too!?"

"No, I'd just like to keep the wall up without spending 80% of our defense budget on importing stone from Dubiously Allied Country. I'm worried they're using all the money we send them to build scary battleships."

"The builders cared not for scary battleships! They cared for gray stone!"

"But it's too expensive!"

"But zombies!"

"Superstition!"

"Irresponsible radicalism!"

"Aaargh ... just because we don't have the builders here to answer every question about their design doesn't mean that we can't draw our own inferences and decide when to change things that don't make sense any more."

"Are you suggesting that the national defense can be designed by human reason alone, without the received wisdom of tradition? That sort of thinking led to the Reign of Terror!"
4Jack_LaSota9y
That, and for certain kinds of fences, if there is an obvious benefit to taking one down, it's better to just take it down and see what breaks, then maybe replace it if it wasn't worth it, than to try and figure out what the fence is for without the ability to experiment.
0wadavis9y
Devil's advocating that some things are without reason, and that that is an exception to the rule, is a fairly weak straw man. Not having a reason is a simplification that does not hold up: incompetence, apathy, out-of-date thinking, or because grey was the factory default colour palette (credit to fubarobfusco), are all reasons. It is a mark of expertise in your field to recognize these reasonless reasons. Seriously, this happens all the time! Why did that guy driving beside me swerve wildly: is he nodding off, texting, or are there children playing around that blind corner? Why did this specification call for an impossible-to-source part: because the drafter is using European software with European part libraries in North America, or because the design has a tight tolerance and the minor differences between parts matter?
0Jiro9y
What Chesterton actually said is that he wants to know something's use, and if you read the whole quote it's clear from context that he really does mean what one would consider as a use in the ordinary sense. Incompetence and apathy don't count. "Not having a reason" is a summary; summaries by necessity gloss over details.
4shminux9y
Right, this is indeed a misuse. The intended meaning is obviously that you ought to figure out the original reason for the fence and whether it is still valid before making changes. It's a balance between reckless slash-and-burn and lost purposes. This is basic hygiene in, say, software development, where old undocumented code is everywhere.
4fubarobfusco9y
Yep. On the other hand, in well-tested software you can make a branch, delete a source file you think might be unused, and see if all the binaries still build and the tests still pass. If they do, you don't need to know the original reason for that source file existing; you've shown that nothing in the current build depends on it. This is a bit of a Chinese Room example, though — even though you don't know that the deleted file no longer served any purpose, the tests know it.
0shminux9y
Yes, if you solve the Chesterton fence of figuring out why certain tests are in the suite to begin with. Certainly an easier task than with the actual code, but still a task. I recall removing failed (and poorly documented) unit and integration tests I myself put in a couple of years earlier without quite recalling why I thought it was a valid test case.
-1Azathoth1239y
Unfortunately, this doesn't work outside software. And even in software most of it isn't well tested.
2Lumifer9y
Sure it does -- that's how a lot of biological research works. Take some rats, delete a gene, or introduce a nutritional deficiency, etc. and see how the rats turn out.
3VAuroch9y
I agree that the quote is vague, but I think it's pretty clear how he intended it to be parsed: Until you understand why something was put there in the past, you shouldn't remove it, because you don't sufficiently understand the potential consequences. In the Hadrian's Wall example, while it's true that the naive wall-removing reformer reaches a correct conclusion, they don't have sufficient information to justify confidence in that conclusion. Yes, it's obviously useless for military purposes in the modern day, but if that's true, why hasn't anyone else removed it? Until you understand the answer to that question (and yes, sometimes it's "because they are stupid"), it would be unwise to remove the wall. And indeed, here, the answer is "it's preserved for its historical value", and so it should be kept.
1PeerGynt9y
At the risk of generalizing from fictional evidence: This line of reasoning falls apart when it turns out that the true reason for the wall is to keep Ice Zombies out of your kingdom. Chesterton would surely have seen the need be damn sure that the true purpose is to keep the wildlings out, before agreeing to reduce the defense at the wall.
-3Azathoth1239y
Um, people generally don't build fences to gratuitously cause harm.
6Jiro9y
That's either trivial, or false. It's trivial if you define "gratuitously cause harm" such that wanting someone else to be harmed always benefits oneself either directly or by satisfying a preference, and that counts as non-gratuitous. It's false if you go by most modern Westerners' standard of harm. There was no reason to limit Jews to ghettos in the Middle Ages except to cause harm (in sense 2).
9Vaniver9y
Er, this looks like a great example of not looking things up. Having everyone in a market dominant minority live in a walled part of town is great when the uneducated rabble decides it's time to kill them all and take their things, because you can just shut the gates and man the walls. Consider the Jewish ghettoes in Morocco:
-5Jiro9y
7Nornagest9y
The medieval allegations against Jews were so persistent and so profoundly nasty that they constitute a genre of their own; we still use the phrase "blood libel". It seems plausible that some of the people responsible for the ghetto laws believed them. They were entirely wrong, of course, but by the same token it may well turn out that Chesterton's fence was put there to keep out chupacabras. That still counts as knowing the reason for it.
0Jiro9y
That falls under case 1. It is always possible to answer (given sufficient knowledge) "why did X do Y". Y can then be called a reason, so in a trivial sense, every action is done for a reason. Normally, "did they do it for a reason" means asking if they did it for a reason that is not just based on hatred or cognitive bias. Were blacks forced to use segregated drinking fountains for a "reason" within the meaning of Chesterton's fence?
2Nornagest9y
No, I don't think it does. We can consider that particular cases of what we now see as harm may have been inspired by bias or ignorance or mistaken premises without thereby concluding that every case must have similar inspirations. Sometimes people really are just spiteful or sadistic. This just isn't one of those times. It seems clear to me, though, that Chesterton doesn't require the fence to have originally been built for a good reason. Pure malice doesn't strike me as a likely reason unless it's been built up as part of an ideology (and that usually takes more than just malice), but cognitive bias does; how many times have you heard someone say "it seemed like a good idea at the time"?
3gjm9y
Has been posted before, more than once.
3BenSix9y
It strikes me that one might simply presume the worst of whoever put up the fence. It was a farmer, for example, with a malicious desire to keep hill-walkers from enjoying themselves. I would extend the principle of Chesterton’s fence, then, to Chesterton’s farm: one should take care to assess the possible uses that it might have served for the whole institution around it as well as the motives of the man.
2Richard_Kennaway9y
It has appeared before, twice. Maybe it should have a Wiki article here.
5Gunnar_Zarncke9y
Appears every two years... when the old quotes are too far down in the search results, I guess. Done: http://wiki.lesswrong.com/wiki/Chesterton%27s_Fence

Germany’s plans in the event of a two front war [WW I] were the results of years of study on the part of great soldiers, the German General Staff. That those plans failed was not due to any unsoundness on the part of the plans, but rather due to the fact that the plans could not be carried out by the field armies.

An official Army War College publication, 1923

While reverse stupidity isn't intelligence, learning how others rationalize failure can help us recognize our own mistakes.

Edited to reflect hydkyll's comment.

How do you know it's a German Army War College publication? Reasons for my doubt:

  • "Ellis Bata" doesn't sound at all like a German name.

  • There was no War College in Germany in 1923. There were some remains of the Prussian Military Academy, but the Treaty of Versailles forbade work being done there. The academy wasn't reactivated until 1935.

  • The academy in Prussia isn't usually called "Army War College". However, there are such academies in Japan, India and the US.

2James_Miller9y
The link is from Strategy Page. I have listened to a lot of their podcasts and greatly respect them.
3gjm9y
But the link doesn't say it was from a German Army War College publication. It just says "In an official Army War College publication". All hydkyll's reasons for thinking it likely to be from another country seem strong to me.
3James_Miller9y
You are right, I added "German" for clarity because I assumed it was true given the context then forgot I had done this. Sorry.
7shminux9y
This is a common failure mode, where the risk analysis is ignored completely. Falling in love with a perfect plan happens all the time in industry. Premortem analysis was not a thing back then, and is exceedingly rare still.
-1ChristianKl9y
The context in which the sentence stands is that around that time there was a widespread belief that the German army had counted on being supported by other German institutions, and that those institutions failed the army rather than supporting it. This is commonly known as the stab-in-the-back myth. "Myth" because the winners of WWII wrote our history books. There is nothing inherently irrational about that sentiment, even though it might have been wrong. It's not about blaming the troops. If something seems so stupid that it doesn't make sense to you, it might be that the problem is on your own end.

I read the quote to mean that it's silly to claim that a plan is perfect when it's actually unworkable.

7James_Miller9y
This is my interpretation, similar to a teacher saying he gave a great lecture that his students were not smart enough to understand.
1ChristianKl9y
Given German thought at the time I find that unlikely. The author could have written: "We lost the war because Jews, Social Democrats and Communists backstabbed us, and not because we didn't have a good plan to fight on two fronts at once." He isn't that direct, but it's still the most reasonable reading for someone writing that sentence in 1923 at a military academy in Germany.
1NancyLebovitz9y
I don't think I said what I meant, which is that the quote is a good example of irrational thinking.
9Lumifer9y
ChristianKl's point is that this quote is a good example of coded language (aka dogwhistle) and while it looks irrational on the surface, it's likely that it means "That those plans failed was not due to any unsoundness on the part of the plans, but rather due to the fact that we were betrayed".
1johnlawrenceaspden9y
Or it could be read ironically. It would be hard for anyone to disagree with it without looking bad, allowing the writer to say what he really thought (as in Atheism Conquered).
5taelor9y
Of note, Alfred von Schlieffen, the architect of the original deployment plan for war against France, was on record as recommending a negotiated peace in the event that the German Army fail to quickly draw the French into a decisive battle. Obviously, this recommendation was not followed. Also of note, Schlieffen's plan was explicitly for a one-front war; the bit with the Russians was hastily tacked on by Schlieffen's successors at the General Staff.
9DanArmak9y
No plans were made for a war even one year long (although highly placed individuals had their doubts and are now widely quoted about it). No German (or other) plans which existed at the start of WW1 were relevant to the way the war ended many years later. Conversely, whatever accusations were made about betrayal in the later years of the war were clearly irrelevant to the way those plans played out in 1914 when all Germans were united behind the war effort, including Socialists.
3chaosmage9y
While you're right, this all happened after Bismarck and the pre-WWI German government had put a lot of effort into avoiding a two-front war because they did not share the General Staff's optimism about being able to handle it. So this constitutes failing to admit losing a very high stakes bet, and does seem inherently irrational.
3James_Miller9y
My impression is that the German military was never optimistic concerning winning vs England, France, and Russia. Those that claimed WWI was deliberately initiated by Germany, however, had to falsely claim that the German military was optimistic.
2NancyLebovitz9y
Is it plausible that the German politicians ignored the German military?
2James_Miller9y
It's theoretically plausible, but from my understanding of WWI once the Russians mobilized the Germans justifiably believed that they either had to fight a two front war or allow the Russians to get into a position that would have made it extremely easy for Russia+France to conquer Germany.
3Luke_A_Somers9y
Right. The 'Blank Check' was the major German diplomatic screwup. Once the Austro-Hungarian Empire issued its ultimatum, they were utterly stuck.
1James_Miller9y
Agreed, although further German diplomatic errors contributed to England going against them. What they should have done is offer to let England take possession of the German fleet in return for England not fighting Germany and protecting Germany's trade routes.
1Luke_A_Somers9y
Ummmmm. That seems rather drastic, and would go over like something that doesn't go over.
3Protagoras9y
Indeed. A more plausible alternative strategy for Germany would be to forget the invading Belgium plan, fight defensively on the western front, and concentrate their efforts against Russia at the beginning. Britain didn't enter the war until the violation of Belgian neutrality. Admittedly, over time French diplomats might have found some other way to get Britain into the war, but Britain was at least initially unenthusiastic about getting involved, so I think Miller is on the right track in thinking Germany's best hope was to look for ways to keep Britain out indefinitely.
1RolfAndreassen9y
Eh, with perfect hindsight, maybe. The thing about Russia is, it has often been possible to inflict vast defeats on its armies in the field; but how do you knock it out of a war? Sure, in the Great War it did happen eventually - but the Germans weren't planning on multiple years of war that would stretch societies past their breaking point. (For that matter, in 1917 Germany was itself feeling the strain; it's called the "Turnip Winter" for a reason.)

There were vast slaughters and defeats on the Eastern Front, true; but the German armies were never anywhere near Moscow - not even after the draconian peace signed at Brest-Litovsk. The German staff presumably didn't think there was any chance of getting a reasonably quick decision in Russia. Do note, when a different German leader made the opposite assumption, "it is only a question of kicking in the door, and the whole rotten structure will come tumbling down"... that didn't go so well either; and he didn't even have a Western front to speak of.

It seems to me that Germany's "problems" in 1914 just didn't have a military solution; I put problems in scare quotes because they did have the excellent peaceful solution of keeping your mouth shut and growing the economy. It's not as though France was going to start anything.
1James_Miller9y
Not by itself, but France was very willing to support Russian aggression against the central powers.

The characteristic feature of all ethics is to consider human life as a game that can be won or lost and to teach man the means of winning.

Simone de Beauvoir, The Ethics of Ambiguity, Part I (trans. by Bernard Frechtman).

Cf. Rationality is Systematized Winning and Rationality and Winning.

What if the polls prove to have no bias? Our model shows Republicans as about 75 percent likely to win a Senate majority. This may seem confusing: Doesn't the official version of FiveThirtyEight's model have Republicans as about 60 percent favorites instead? Yes, but some of the 40 percent chance it gives Democrats reflects the possibility that the polls will have a Republican bias. If the polls were guaranteed to be unbiased, that would make Republicans more certain of winning.

Nate Silver
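
(A toy illustration of Silver's point, not FiveThirtyEight's actual model: allowing for the possibility that every poll shares a systematic error fattens both tails of the outcome distribution, so a modest favorite becomes less certain to win; condition on the polls being unbiased and the probability climbs back up, which is the 60-percent-versus-75-percent gap he describes. The margins and standard deviations below are invented for illustration.)

```python
# Toy Monte Carlo sketch (made-up numbers, not FiveThirtyEight's model):
# uncertainty about a shared poll bias widens the outcome distribution,
# so removing it makes a modest favorite more certain to win.
import random

def win_probability(poll_margin, bias_sd, noise_sd, trials=100_000):
    """Estimate P(favorite wins) when polls show them ahead by poll_margin points."""
    wins = 0
    for _ in range(trials):
        bias = random.gauss(0, bias_sd)      # systematic error shared by the polls
        noise = random.gauss(0, noise_sd)    # ordinary sampling/turnout noise
        if poll_margin - bias + noise > 0:   # true margin on election day
            wins += 1
    return wins / trials

print(win_probability(2.0, bias_sd=3.0, noise_sd=3.0))  # bias uncertainty included: roughly 0.68
print(win_probability(2.0, bias_sd=0.0, noise_sd=3.0))  # polls assumed unbiased:     roughly 0.75
```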

0AshwinV9y
Nate Silver has a chapter in his book called Less and Less and Less wrong..... (or something very similar). PS. I haven't read it, but just happened to flip through the contents once long ago...

"If we take everything into account — not only what the ancients knew, but all of what we know today that they didn't know — then I think that we must frankly admit that we do not know. But, in admitting this, we have probably found the open channel."

Richard Feynman, "The Value of Science," public address at the National Academy of Sciences (Autumn 1955); published in What Do You Care What Other People Think (1988); republished in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (1999) edited by Jeffrey Robbins.

5Richard_Kennaway9y
I found the "open channel" metaphor obscure from just the quote, and found some context. The open channel is a contrast to the blind alley of seizing to a single belief that may be wrong. I noticed that later in the passage, he says: This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
4Vaniver9y
Indeed, but it does agree with the argument for the importance of not getting AI wrong in a way that does chain the future.
1Aiyen9y
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming a particular theory must be true, or a particular action must be optimal. Presumably good advice for the AI as well, at least as it is "growing up" (recursively self-improving).

Thankfully, they have ways of verifying historical facts so this [getting facts wrong] doesn't happen too much. One of them is Bayes' Theorem, which uses mathematical formulas to determine the probability that an event actually occurred. Ironically, the method is even useful in the case of Bayes' Theorem itself. While most people attribute it to Thomas Bayes (1701 - 1761), there are a significant number who claim it was discovered independently of Bayes - and some time before him - by a Nicholas Saunderson. This gives researchers the unique opportunity to use Bayes' Theorem to determine who came up with Bayes' Theorem. I love science.

John Cadley, Funny You Should Say That - Toastmaster magazine
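
(For anyone who wants to see the joke taken literally, here is the odds form of Bayes' theorem applied to the attribution question. The prior odds and likelihood ratio are invented purely for illustration; they are not the figures historians actually use.)

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# Numbers are invented for illustration only.

def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior_odds = 1 / 4        # prior odds that Saunderson, not Bayes, found the theorem first
likelihood_ratio = 6.0    # how much more expected the documentary evidence is if he did
print(update_odds(prior_odds, likelihood_ratio))   # 1.5, i.e. odds of 3:2 in favour of Saunderson
```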

Slytherin, the hat had almost put him in, and his similarity to Slytherin's heir Riddle himself had commented on. But he was beginning to think this wasn't because he had "un-Gryffindor" qualities that fit only in Slytherin, but because the two houses - normally pictured as opposites - were in some fundamental ways quite similar.

Ravenclaws in battle, he had no doubt, would coolly plan the sacrifice of distant strangers to achieve an important objective, though that cold logic could collapse in the face of sacrificing family instead. Hufflepuffs w

...
6elharo9y
I understand the sentiment and why it's quoted. In fanboy mode though, I think Gryffindor and Ravenclaw are reversed here. I.e. a Gryffindor might sacrifice themself, but would not sacrifice a friend or loved one. They would insist that there must be a better way, and strive to find it. In fiction (as opposed to real life) they might even be right. The Ravenclaw is the one who does the math, and sacrifices the one to save the many, even if the one is dear to them. More realistically, the Ravenclaw is the effective altruist who sees all human life as equally valuable, and will spend their money where it can do the most good, even if that's in a far away place and their money helps only people they will never meet. A Ravenclaw says the green children being killed by our blue soldiers are just as deserving of life as our own blue children; and a Ravenclaw will say this even when he or she personally feels far more attached to blue children. The Ravenclaw is the one who does not reject the obvious implications of clear logic, just because they are unpopular at rallies to support the brave blue soldiers.

Nobody panics when things go "according to plan." Even if the plan is horrifying! If, tomorrow, I tell the press that, like, a gang banger will get shot, or a truckload of soldiers will be blown up, nobody panics, because it's all "part of the plan". But when I say that one little old mayor will die, well then everyone loses their minds!

-- Joker, The Dark Knight

[T]here are several references to previous flights; the acceptance and success of these flights are taken as evidence of safety. But erosion and blowby are not what the des

...

One of the things about the online debate over e-piracy that particularly galled me was the blithe assumption by some of my opponents that the human race is a pack of slavering would-be thieves held (barely) in check by the fear of prison sentences.

Oh, hogwash.

Sure, sure - if presented with a real "Devil's bargain," most people will at least be tempted. Eternal life. . . a million dollars found lying in the woods. . .

Heh. Many fine stories have been written on the subject! But how many people, in the real world, are going to be tempted to steal

...

How many people, in the real world, are going to be tempted to steal a few bucks?

Quite a lot, in my experience. I've seen so many well-paid people fired for fiddling their expenses over trivial amounts. Eric Flint, as befits a fiction author, makes a rhetorically compelling case though!

3ChristianKl9y
Even more people take papers or pens home from their workplace and don't get punished for it.

Quite right, too.

Being able to take paper and pens home from the workplace to work is clearly useful and beneficial to the business. It's plainly not worth a business's time to track such things punctiliously unless its employees are engaging in large-scale pilfering (e.g., selling packs of printer paper) because the losses are so small. It's plainly not worth an employee's time to track them either for the same reason. (And similarly not worth an employee's time worrying about whether s/he has brought papers or pens into work from home and left them there.)

The optimal policy is clearly for no one to worry about these things except in cases of large-scale pilfering.

(In large businesses it may be worth having a formal rule that just says "no taking things home from the office" and then ignoring small violations, because that makes it feasible to fight back in cases of large-scale pilfering without needing a load of lawyering over what counts as large-scale. Even then, the purpose of that rule should be to prevent serious violations and no one should feel at all guilty about not keeping track of what paper and pens are whose. I suspect the actual local optimum in this vicin...

This post is right on the money. Transaction costs are real and often wind up being deceptively higher than you anticipate.

2VAuroch9y
Including legal concerns, the local optimum is probably officially stating that responses will be proportional to the seriousness of the 'theft', with a stated possible maximum. This essentially dog-whistles that small items are free to take, without giving an explicit pass. A better optimum might be what some tech company did (I thought Twitter, but can't find my source) when it changed its policy on expense accounts for travel/food/etc. to 'use this toward the best interests of the company', with significant positive results. But some of the incentives there (in-house travel-agent arrangements are grotesquely inefficient) are missing here.
5gjm9y
I'm curious: why the downvote for the parent comment? It seems obviously not deserving of a downvote. ... Oh look, someone appears to be downvoting all VAuroch's comments. Dammit, this needs to stop.
2VAuroch9y
It's not nearly as bad as it used to be (I was one of Eugine_Nier's many targets), but yeah, it's frustrating.
4Larks9y
How is this a rationality quote? I can see people thinking this is a good argument, especially if they politically agree with the author, but it doesn't seem to be about rationality, or to demonstrate an unusually great deal of rationality.

It would definitely be a rationality quote if it went on to quote the part where Eric Flint decided to test his hypothesis by putting some of his books online, for free, and watching his sales numbers.

3DanielLC9y
Does he say what the results were anywhere?
9dspeyer9y
Huge success. Sales jumped up in ways that are hard to explain as anything other than the free library's effect.
3dspeyer9y
It expresses two ideas:

  • Reduction to incentives is such a useful hammer that it's tempting to think of the world as homo economicus nails. Like all simplified models, that can be useful, but it can also be dangerously wrong.

  • It isn't very much information to say that people have a price. The real information lies in what that price is. It may be true to say "people are dishonest", but if you want to win, you need to specify which people and how dishonest.
0[anonymous]9y

If you kick a ball, about the most interesting way you can analyze the result is in terms of the mechanical laws of force and motion. The coefficients of inertia, gravity, and friction are sufficient to determine its reaction to your kick and the ball's final resting place, even if you can 'bend it like Beckham'. But if you kick a large dog, such a mechanical analysis of vectors and resultant forces may not prove as salient as the reaction of the dog as a whole. Analyzing individual muscles biomechanically likewise yields an incomplete picture of human movement experience.

Thomas W. Myers in Anatomy Trains - Page 3

[This comment is no longer endorsed by its author]

"What you can do, or dream you can do, begin it! / Boldness has genius, power and magic in it."

-- John Anster in a "very free translation" of Faust from 1835. (http://www.goethesociety.org/pages/quotescom.html)

Time is precious, but truth is more precious than time.

Benjamin Disraeli.

9johnlawrenceaspden9y
In what units?
0kpreid9y
Choice of units does not change relative magnitudes.
0johnlawrenceaspden9y
quite..