Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
731 comments

The fear people have about the idea of adherence to protocol is rigidity. They imagine mindless automatons, heads down in a checklist, incapable of looking out their windshield and coping with the real world in front of them. But what you find, when a checklist is well made, is exactly the opposite. The checklist gets the dumb stuff out of the way, the routines your brain shouldn’t have to occupy itself with (Are the elevator controls set? Did the patient get her antibiotics on time? Did the managers sell all their shares? Is everyone on the same page here?), and lets it rise above to focus on the hard stuff (Where should we land?).

Here are the details of one of the sharpest checklists I’ve seen, a checklist for engine failure during flight in a single-engine Cessna airplane—the US Airways situation, only with a solo pilot. It is slimmed down to six key steps not to miss for restarting the engine, steps like making sure the fuel shutoff valve is in the OPEN position and putting the backup fuel pump switch ON. But step one on the list is the most fascinating. It is simply: FLY THE AIRPLANE. Because pilots sometimes become so desperate trying to restart their engine, so crushed by the cognitive overload of thinking through what could have gone wrong, they forget this most basic task. FLY THE AIRPLANE. This isn’t rigidity. This is making sure everyone has their best shot at survival.

-- Atul Gawande, The Checklist Manifesto

I concur in the general case. But I would suggest the people complaining work in computers. I'm a Unix sysadmin; my job description is to automate myself out of existence. Checklist=shell script=JOB DONE, NEXT TASK TO ELIMINATE.

It turns out, thankfully, that work expands to fill the sysadmins available. Because even in the future, nothing works. I fully expect to be able to work to 100 if I want to.

When trying to characterize human beings as computational systems, the difference between “person” and “person with pencil and paper” is vast.

-- Procrastination and the Extended Will (2009)

Am I missing something? Why is this quote so popular? Is there something more to it than "you can do harder sums with a pencil and paper than you can in your head"? Or, I guess "writing stuff down is sometimes useful".
Pencil and paper is far more reliable than your native memory, and also gives you a way to work on more than seven or so objects at once. Either one would expand your capabilities significantly. Taken together they're huge, at least when you're working with things that natural selection hasn't optimized you for (i.e. yes for abstract math; not so much for facial recognition).
Right - but did anyone not know that?

Facts which seem obvious in retrospect are often less salient than they appear, outside of their native contexts. If I'd been asked to describe humans as computational systems before reading the ancestor, pen and paper probably wouldn't be one of the things I'd have taken into account.

Yes. The paper is about the importance of environmental scaffolding on behavior. One of the topics it touches on is akrasia in college students, and it hypothesizes that this is because they lost their usual scaffolding - the routine of their homes, their parents, etc. The main point is that models of the human mind need to take into account the extent to which humans rely on external objects for computation. Paper and pencil are an extreme example of this. The quote itself has further implications. In my opinion, this is the single most important technological development. As far as I'm concerned, the "Singularity" began when humans began using things other than their brains to store and process information. That was the beginning of the intelligence explosion, that was the first time we started doing something qualitatively different. Everyone realizes that writing stuff down is useful, but since we do it all the time, not everyone realizes what a big deal it is. The important insight is that to write is to make the piece of paper a component of your memory and processing power.

I've got to start listening to those quiet, nagging doubts.


This phrase was explicitly in my mind back when I was generalizing the "notice confusion" skill.

When you were what?

In 2002, Wizards of the Coast put out Star Wars: The Trading Card Game designed by Richard Garfield.

As Richard modeled the game after a miniatures game, it made use of many six-sided dice. In combat, cards' damage was designated by how many six-sided dice they rolled. Wizards chose to stop producing the game due to poor sales. One of the contributing factors given through market research was that gamers seemed to dislike six-sided dice in their trading card game.

Here's the kicker. When you dug deeper into the comments they equated dice with "lack of skill." But the game rolled huge amounts of dice. That greatly increased the consistency. (What I mean by this is that if you rolled a million dice, your chance of averaging 3.5 is much higher than if you rolled ten.) Players, though, equated lots of dice rolling with the game being "more random" even though that contradicts the actual math.


What I mean by this is that if you rolled a million dice, your chance of averaging 3.5 is much higher than if you rolled ten.

The chance of averaging exactly 3.5 would be a hell of a lot smaller. The chance of averaging between 3.45 and 3.55 would be larger, though.

Your chance of averaging 3.5 to two significant figures seems quite high indeed, though.
Unless you're rolling an impractical number of dice for every attack, having your attacks do random damage (and not 22-24 like in MMORPGs, but 1X-6X) is incredibly random. Even if you are rolling a ridiculous number of dice, the game can still be decided by one roll leaving a creature on the board or killing it by one or two points of damage. What math says that rolling dice doesn't make the game more random? Maybe he means the game is overall less random, but I don't see any argument for that, or any reference to evidence for that claim. If the reason for the game's failure was that people thought it lacked skill, adding randomness is not a decision to defend, even if people were slightly overestimating the randomness.

Having to roll dice in a card game is kind of a slap in the face, too. In other card games you draw your cards, then make the most of them. There's zero randomness to worry about except right when you draw your card or your opponent draws theirs (though you are often happily ignorant of whether they play a card from their hand or one they just drew, except in certain circumstances). You can count cards and play based on what is left in your deck, or what you know is no longer in your deck.

Also, unlike miniature games, card games pretty much never start pre-deployed. You start with nothing on the board. If your turn-one card kills his turn-one card because of a dice roll, then he has nothing on the board and you have a creature, giving you some level of control over the board (this depends on the game, but it's often quite high). In a miniature game, if you kill more of his guys on turn one because of dice rolls, you still have an army, though a smaller one.

Why is this quote upvoted?
The more precise statement of "math says rolling more dice makes things less random" is that if you roll ten six-sided dice and add up the answer, the result will be less random (on its scale) than if you merely roll one six-sided die. Even more precisely: the outcome of 10d6 is 68.7% likely to lie in the range [30,40], while the outcome of 1d6 is only 33.3% likely to lie in the corresponding range [3,4].

I think the quoted portion of the article addresses exactly this point: people were scared of rolling many dice because this meant lots of randomness, but the math says that the opposite effect occurs.

As to your other points (starting with "kind of a slap in the face"), those are addressed in the article, but not the quoted part. In summary: both rolling dice and drawing cards are random, but there are a bunch of reasons why the randomness of drawing cards isn't as frustrating. (It can be frustrating too, though.)
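Those percentages can be checked exactly by convolving the single-die distribution with itself; here's a minimal sketch (the function name is mine):

```python
def sum_distribution(n_dice, sides=6):
    """Exact probability distribution of the sum of n_dice fair dice,
    computed by repeated convolution with one die."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

d10 = sum_distribution(10)
p_10d6 = sum(p for s, p in d10.items() if 30 <= s <= 40)  # 10d6 in [30,40]
p_1d6 = 2 / 6                                             # 1d6 in [3,4]
print(round(p_10d6, 3), round(p_1d6, 3))  # → 0.687 0.333
```

The sum of ten dice has the same mean per die (3.5) but a much narrower spread relative to its range, which is exactly the "less random on its scale" claim.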
Maybe because of this part:
Rolling 10 dice instead of one makes the game less random. Rolling dice often instead of rarely makes the game more random. This game rolls dice for every attack, and not that many. The dude said people complained about lots of dice rolling, not rolling lots of dice. Yeah, obviously if you roll 10 dice it's less random than rolling one, but what are the chances that card game enthusiasts, people "geeky" enough to play the Star Wars TCG, don't understand that basic part of probability? It's far more likely that people were annoyed at lots of dice rolling, not the amount of dice you roll each time. Which matches the reported complaints of the players. Not that I'd expect an accurate report of the players' positions when making excuses for why rolling dice in a card game is a bad idea.

When the axe came into the woods, many of the trees said, "At least the handle is one of us."

Turkish proverb

This analogy, this passage from the finite to infinite, is beset with pitfalls. How did Euler avoid them? He was a genius, some people will answer, and of course that is no explanation at all. Euler had shrewd reasons for trusting his discovery. We can understand his reasons with a little common sense, without any miraculous insight specific to genius.

  • G. Polya, Mathematics and Plausible Reasoning Vol. 1
See also the appendix “Mathematical Formalities And Style” in Probability Theory by E.T. Jaynes.

Once there was a miser, who to save money would eat nothing but oatmeal. And what's more, he would make a great big batch of it at the start of every week, and put it in a drawer, and when he wanted a meal he would slice off a piece and eat it cold; thus he saved on firewood. Now, by the end of the week, the oatmeal would be somewhat moldy and not very appetising; and so to make himself eat it, the miser would take out a bottle of good whiskey, and pour himself a glass, and say "All right, Olai, eat your oatmeal and when you're done, you can have a dram." Then he would eat his moldy oatmeal, and when he was done he'd laugh and pour the whiskey back in the bottle, and say "Hah! And you believed that? There's one born every minute, to be sure!" And thus he had a great savings in whiskey as well.

-- Norwegian folktale.

I don't understand this rationality quote. Is it about fighting akrasia? Self-hacking to effectively save money? It clearly describes a method that wouldn't actually work, and it could work as humour, but what does it mean as a rationality tale?

It's a cautionary tale about Norwegian food.

It explains lutefisk.

Quote from Garrison Keillor's book Lake Wobegon Days: Every Advent we entered the purgatory of lutefisk, a repulsive gelatinous fishlike dish that tasted of soap and gave off an odor that would gag a goat. We did this in honor of Norwegian ancestors, much as if survivors of a famine might celebrate their deliverance by feasting on elm bark. I always felt the cold creeps as Advent approached, knowing that this dread delicacy would be put before me and I'd be told, "Just have a little." Eating a little was like vomiting a little, just as bad as a lot.

Quote from Garrison Keillor's book Pontoon: Lutefisk is cod that has been dried in a lye solution. It looks like the desiccated cadavers of squirrels run over by trucks, but after it is soaked and reconstituted and the lye is washed out and it's cooked, it looks more fish-related, though with lutefisk, the window of success is small. It can be tasty, but the statistics aren't on your side. It is the hereditary delicacy of Swedes and Norwegians who serve it around the holidays, in memory of their ancestors, who ate it because they were poor. Most lutefisk is not edible by normal people. It is reminiscent o

...
Obviously, that's why they were all above average! No, seriously, lutefisk is peasant food. Rich urban types eat smalahovve.

Betcha it'd work. I'm going to set a piece of candy in front of me, work for half an hour, and then put it back, at least once a day for a week.

I sometimes find that telling my Inner Lazy that it can decide—after I've done the first one—between whether to continue a series of tasks or to stop and be Lazy gets me to do the whole series of tasks. Despite having noticed explicitly that in practice this 'decision delay strategy' leads to the whole series getting done, it still works, and it rather seems like tricking my Inner Lazy into handing the reins over to my Inner Agent.

Accountability check!

Did you do it? How'd it go?

Did it once, binge-ate the candy a few hours later, bought more candy, binge-ate it again. Trying again in two weeks (or going to the doctor if still prone to binging).

Oh, bother. I wish I'd seen this earlier.

In the context of LW, I took it as an amusing critique of the whole idea of rewarding yourself for behaviours you want to do more of.

It's either a cautionary tale about the dangers of deceiving yourself, or a humorous look at the impossibility of actually doing so.

I took it to be about the hidden complexity of wishes: people often say they want to have more money left at the end of the month when what they actually mean is that they want to have more money left at the end of the month without making themselves miserable in the process, and the easiest solution to the former needn't be at all a solution to the latter.
It could be used as an effective "How to create an Ugh Field and undermine all future self-discipline attempts" instruction manual. It isn't a rationality tale. It is confusing that 40 people evidently consider it to be one. (But only a little bit confusing. I usually expect non-rationalist quotes that would be accepted as jokes or inspirational quotes elsewhere to get around 10 upvotes in this thread regardless of merit. That means I'm surprised about the degree of positive reception.)
I don't think you are correct. The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward, and the rest is a kabuki play he puts on for less-important impulses, to temporarily allow him to restrain them in service of his larger goal. The end pleasure of savings will provide strong positive reinforcement. This could probably be empirically tested, to see if it is true and would work as a technique. I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar. Do they learn disappointment, or does the greater pleasure of money outweigh the candy? This is predicated on the idea that they would prefer the money, of course - you would need to tinker with amounts before the experiment might give useful results.

The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward,

Also, don't forget his pleasure at successfully tricking himself. ;-)

Myself, I'd just spend the dollar on candy.
That is not the same thing as the quote. Empirically testing your candy and dollars reward switch would tell us next to nothing about the typical efficacy of the dubious self deception of the miser.
You are telling me I am wrong, but it is not helpful to me unless you explain why I am wrong. I thought it made sense. As far as I could tell, the original parable has a miser with two desires: the desire for delicious booze and the desire to save money. The latter desire is by far the more important one to him, so he "fools" his desire for booze by promising himself a booze reward, and then reneging on himself each time. In my interpretation, this still results in an overall positive effect for self-discipline, because the happiness of saving money is so much more important to the miser than the disappointment of missing the booze reward. The truth of whether this would actually work could be seen in an experiment. I tried to think of one with two rewards that satisfy different desires, and tried to think of a way to slightly disappoint the desire for sugar while strongly rewarding the impulse for money, after the completion of the task. Maybe I should specify that people should be hungry before the task, and tested in the future when they are hungry, to see if they are still willing to complete the task?
That's one way it could play out. It feels like this thinking also allows for it to work, because one might feel good about what got done by means of the trick, which would positively reinforce being tricked. I think the matter isn't clear cut.
It's interesting to view this story from source-code-swap Prisoner's Dilemma / Timeless Decision Theory perspective. This can be a perfect epigraph in an article dedicated to it.
Ben Pace:
I thought the way he deceived his conscious mind, and never learned, was interesting.

Now, now, perfectly symmetrical violence never solved anything.

--Professor Farnsworth, Futurama.

The threat of massive perfectly symmetrical violence, on the other hand...

Such a threat can also be effective for asymmetrical violence -- no matter which way the asymmetry goes.

He took literally five seconds for something I'd spent two weeks on, which I guess is what being an expert means.

-- Graduate student of our group, recognising a level above his own in a weekly progress report

Now I'm curious about the context...

It wasn't very interesting - some issue of how to make one piece of software talk to the code you'd just written and then store the output somewhere else. Not physics, just infrastructure. But the recognition of the levels was interesting, I thought. Although I do believe "literally five seconds" is likely an exaggeration.

It's a horrible feeling when you don't understand why you did something.

-- Dennis Monokroussos

It's probably a much more accurate feeling than the opposite one, though...

If I understand why I did something, I want to believe ...
That is an interesting observation. For my part I do not experience horror in those circumstances, merely curiosity and uncertainty.

I think it may depend a lot on how well the action fits into your schema for reasonable behavior.

I have mild OCD. Its manifestations are usually unnoticeable to other people, and generally don't interfere with the ordinary function of my life, but occasionally lead to my engaging in behaviors that no ordinary person would consider worthwhile. The single most extreme manifestation, which still stands out in my memory, was a time when I was playing a video game, and saved my game file, then, doubting my own memory that I had saved it, did it again... and again... and again... until I had saved at least seven times, each time convinced that I couldn't yet be sure I had saved it "enough."

Afterwards, I was horrified at my own actions, because what I had just done was too obviously crazy to just handwave away.

I used to do that a lot. I still have to fight the urge to save repeatedly when nothing has changed. My obsessive compulsions are mostly mental, though, so it has had so little impact on my interactions with others that I don't think it counts as a disorder.
For me it fits my schema of reasonable behavior but also into my schema of "things other people may not like doing for which I don't consider them irrational". Of course, I would rarely consider using a dollar as a bookmark. That would require stopping reading the book once I started it.
It depends on the context, in particular, whether the situation is one where you "must" have a good reason for your actions. Your reaction is appropriate for most ordinary situations; his is appropriate for the context he's talking about (doing a different movement than the one you intended in a chess game) and other high stakes situations (blurting an answer you know is wrong in an examination, saying/doing something awkward on a date, making a risky movement driving your car…)

his is appropriate for the context he's talking about (doing a different movement than the one you intended in a chess game) and other high stakes situations (blurting an answer you know is wrong in an examination, saying/doing something awkward on a date, making a risky movement driving your car…)

I experience horrible feelings when I humiliate myself or put myself at risk. This phenomenon seems to occur independently of whether I have a good causal model for why I did those things.

OTOH, a good causal model may sometimes enable you to take action so as to not do that thing again.

He wasn't certain what he expected to find, which, in his experience, was generally a good enough reason to investigate something.

Harry Potter and the Confirmed Critical, Chapter 6

Can you give a link to this story? It is surprisingly difficult to find.
It is the second book in the series Harry Potter and the Natural 20.
If you put the quote into quotation marks and search Google, it's the fifth hit.
Thank you. This was a 'duh!' moment; I hadn't realized it was the 2nd book of the Natural 20.

Some say imprisoning three women in my home for a decade makes me a monster, I say it doesn’t, and of course the truth is somewhere in the middle.

Ariel Castro (according to The Onion)

"So let's split the difference and say I should have stopped at two."
Is this just supposed to be a demonstration of irrationality? Can some one unpack this?
A demonstration of the gray fallacy. The opinions of Ariel Castro and those of the rest of society are not equidistant from the truth, and we don't find the truth by finding a middle ground between his claims and those of everybody else.
I don't know how this happened. My comment was supposed to be a reply to:

Ah. I read that one as a reference to the tendency to let tribal affiliation trump realistic evaluation of outcomes.

Far too many people are looking for the right person, instead of trying to be the right person.

-Gloria Steinem

I read that as "looking for the right person to fall in love with". Then the sense is "be the right person for someone else". But that achieves a different goal entirely, since it doesn't make the other person right for you.

There are many cases where you want a different person who is right for the task.

Name three!

Romantic partners (inherently), trading and working partners (allowing you to specialize in your comparative advantage), deputies and office-holders (allowing you to deputize), soldiers (allowing you to send someone else to their death to win the war).

I assume the original intent of the quote was about romantic partners, where it means, "Instead of searching so hard, make sure to prioritize being awesome for its own sake." I was trying to repurpose it to express that action is better than preparing for something to fall into place more generally, and I think it's appealed to people.
I originally read it as being about politics. We keep thinking that somewhere there's a candidate worth voting for, and then things will be ok, but instead we should be trying to become the worthy candidates, even if only for local office. Or perhaps toward improving the world generally. Instead of deciding whether to pay Yudkowsky or Bostrom to work on existential risk, we should try applying our own talents. Similar to "[T]he phrase 'Someone ought to do something' was not, by itself, a helpful one. People who used it never added the rider 'and that someone is me'." Skimming Gloria Steinem's biography, I am more confident in this reading.
How isn't "looking for" or "searching hard" action?
You still have to be the right person to be the right person in a team....?
But you don't have to be perfect to be the right person in a team, and you don't have to be "the" right person to be an asset to a team. People with low self-confidence plus low social confidence (plus possibly moralistic ideas about self-reliance) will try to self-improve through their own efforts rather than seeking help, regardless of how much less effective it is, believing they're not worth someone else's attention yet, or being afraid of owing someone, or whatever; quotes like Steinem's reinforce that. ...Maybe. I don't have any actual sources, so I could be totally wrong. Still, I'm not sure I like the focus on "being" rather than doing things.
Who said anything about being perfect? And if you're an asset, you sound pretty much like the right person to me. To me the clause "be the right person" sounds very much active/action-based.
Completely putting teamwork aside, most major contributions to humanity were achieved by standing on the shoulders of those who came before.

A man who says he is willing to meet you halfway is usually a poor judge of distance.


This could be studied empirically.

Difficult. The "distance" is metaphorical, and this probably doesn't apply when there's an easy, unambiguous, generally accepted metric. Without that, how do we do the study? Still, if you have a way, it could be interesting.

In a famous study, spouses were asked, “How large was your personal contribution to keeping the place tidy, in percentages?” They also answered similar questions about “taking out the garbage,” “initiating social engagements,” etc. Would the self-estimated contributions add up to 100%, or more, or less? As expected, the self-assessed contributions added up to more than 100%.

-Daniel Kahneman, Thinking, Fast and Slow

On the other hand, the book doesn't give a citation, and searching for the exact text of the question turns up only that passage. Not sure what to make of that.

Ross & Sicoly (1979). Egocentric Biases in Availability and Attribution.

In the study, the spouses actually estimated their contributions by making a slash mark on a line segment which had endpoints labelled "primarily wife" and "primarily husband". The experimenters set it up this way, rather than asking for numerical percentages, for ethical reasons. In pilot testing using percentages, they "found that subjects were able to remember the percentages they recorded and that postquestionnaire comparisons of percentages provided a strong source of conflict between the spouses." (p. 325)

If there is no easy, unambiguous generally accepted metric, that would seem to imply that everyone is a poor judge of distance - making the quote trivially true.
Or thinks he's got better leverage than you.

Subsidizing the markers of status doesn’t produce the character traits that result in that status; it undermines them.

Reynolds' law

Status markers frequently indicate unusual access to resources as well as or even instead of character traits. Subsidizing status markers dilutes them by making them less common. How would you tell which factor is more important in the dilution of a status marker?
I can't parse your post, but that may be partly because I don't understand how subsidizing status markers would produce character traits to begin with.

Eugine_Nier's comment has the suppressed premise that status usually results from character traits (alone, or primarily). NancyLebovitz's response contradicts this suppressed premise.

If you get rich by being exceptionally virtuous, then redistributing the wealth will make it less obvious who is virtuous.

But if you get rich by having a rich dad, then redistributing the wealth will merely make it less obvious who had a rich dad.

I think the point is that it wouldn't. You can have character traits, e.g. conscientiousness, that result in status markers, e.g. having saved a lot of money. If you make it easier for people to get the specific status marker, e.g. through welfare, the causal arrow doesn't go in reverse and increase conscientiousness. You could expect it to have no effect, e.g. if conscientiousness and other traits are innate and entirely determined by age 4. (That's kind of my default.) Or, in a slightly more complicated world where conscientiousness can vary depending on environment, i.e. there are a bunch of causal arrows bouncing around in confusing ways, "diluting" the status marker by making it easier to acquire might reduce the incentive to have the underlying trait, and make people less conscientious over time. I've heard the argument that this happens to people on welfare, although I'm tempted to say "correlation, not causation": who ends up on welfare in the first place already depends on conscientiousness.

At least in the US, saving money can disqualify you from welfare.

When my best friend was on welfare, they would take what she had earned at her part-time job the last month and subtract half that amount from her welfare. So there was still an incentive to work, albeit less. I don't know to what degree she had to submit her budget or expenses to them (i.e. that they would actually know if she was saving money), but in general they seemed to make it as hard as possible to actually stay on Welfare.
That's about income, not savings.
I don't know what the policy was on savings, i.e. to what degree, if at all, they would reduce her monthly amount if she submitted her budget each month and was spending less. I get the impression that it's kind of a basic fixed rate for, e.g., an adult not in school with one child... and that it's realistically not enough to save, even if you spend nothing on discretionary purchases or fun. She got around $900 a month, of which $550 alone went towards her part of our rent. If she'd, for example, made $500 per paycheck (25 hours a week at Canadian minimum wage), that would make $1000 a month, so they'd take $500 off her welfare payment, for a monthly total income of $1400... which is enough to save at least a small amount per month, given our shared living expenses. In the US welfare system, would they cancel your welfare if you were able to save $200 a month of this total? They did keep cancelling the welfare for unrelated reasons. (Example: her parents had had an education fund for her of about $10,000, but they'd spent it all on her wedding, and they sent her a letter saying her welfare was cancelled until she could submit documents proving this. Not a warning: cancelled. She missed a month or two before submitting the documents, and eventually gave up and just worked more hours.)
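The clawback arithmetic above can be sketched in a few lines (the function name and the floor-at-zero assumption are mine; the 50% exemption rate and the dollar figures come from the comment):

```python
def net_monthly_income(base_welfare, earned, clawback_rate=0.5):
    """Monthly income when welfare is reduced by clawback_rate
    for every dollar earned, but never reduced below zero."""
    welfare = max(0.0, base_welfare - clawback_rate * earned)
    return welfare + earned

# $900 base welfare, $1000/month earned at a part-time job:
print(net_monthly_income(900, 1000))  # → 1400.0
```

Because the clawback is only 50%, each earned dollar still raises total income by 50 cents, which is the "still an incentive to work, albeit less" point made earlier in the thread.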
NancyLebovitz: Short version: it varies quite a bit by state, but some major benefits in a fair number of states have a personal asset limit of two or three thousand dollars.
Thanks! So it looks like there's a limit, but at least someone thinks it's a bad idea and some states are changing it... According to this, the asset limit to qualify for Ontario Works (welfare) is $572 for a single adult and $1,550 for a lone parent. So, worse than in the US... (But it was $2500 for a single adult in 1981...) The 50% earning exemption is new from 2003, though. Wow, I have learned things today!
I've never provided any information about savings when applying for welfare. What organization has that policy?
See also: Credential Inflation

Rin: "Even I make mistakes once in a while."

Shirou (thinking): ...This is hard. Would it be good for her if I correct her and point out that she makes mistakes often, not just once in a while?

Fate/stay night

He just needs to get Saber to say it. Saber often tells people, in a bluntly matter-of-fact way, that they're making a mistake. Rin knows this. If Shirou said it, though, she'd think it was some kind of dominance thing and get mad. (Maybe I'm over-analyzing this.)
Slightly off-topic, but I keep seeing Fate/Stay night referenced on here, is it particularly 'rationalist' or do people just like it as entertainment?
It's not an especially rational piece of work as such, although it has its moments, but it is one of the more detailed examinations of heroic responsibility and the associated cultural expectations in fiction (if you can get past the sometimes shaky translation). Your mileage might vary, but I see echoes of it whenever Eliezer writes about saving the world.
It has some elements that stand out in terms of rationalist virtue, and many others which don't. I found it to be very much a mixed bag, but the things it did well, I thought it did exceptionally well.
It's not so much rationalist as... Eliezer-ish. See my review in the media thread: []

From Jacques Vallee, Messengers of Deception...

'Then he posed a question that, obvious as it seems, had not really occurred to me: “What makes you think that UFOs are a scientific problem?”

I replied with something to the effect that a problem was only scientific in the way it was approached, but he would have none of that, and he began lecturing me. First, he said, science had certain rules. For example, it has to assume that the phenomenon it is observing is natural in origin rather than artificial and possibly biased. Now the UFO phenomenon could be controlled by alien beings. “If it is,” added the Major, “then the study of it doesn’t belong to science. It belongs to Intelligence.” Meaning counterespionage. And that, he pointed out, was his domain.

“Now, in the field of counterespionage, the rules are completely different.” He drew a simple diagram in my notebook. “You are a scientist. In science there is no concept of the ‘price’ of information. Suppose I gave you 95 per cent of the data concerning a phenomenon. You’re happy because you know 95 per cent of the phenomenon. Not so in intelligence. If I get 95 per cent of the data, I know that this is the ‘cheap’ part of the inf... (read more)

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

  • “Silver Blaze” (Sir Arthur Conan Doyle)
If UFOs are controlled by a non-human intelligence, assuming they'll behave like human schemes is as pointless as assuming they'll behave like natural phenomena. But of course the premise is false and the Major's approach is correct.
A creature that can build a spaceship is probably closer to one that can build a plane than it is to a rock. At least, you have to start somewhere.

If Tetris has taught me anything it's that errors pile up and accomplishments disappear.


It's ridiculous to think that video games influence children. After all, if Pac-Man had affected children born in the eighties, we'd all be running around in dark rooms, eating strange pills, and listening to repetitive electronic music.

-- Paraphrase of joke by Marcus Brigstocke

To be fair, there are quite a few people nowadays who listen to electronic music, take drugs that are pills, and spend a lot of time in dark rooms.

That's the joke.

It's funny, but you really shouldn't be learning life lessons from Tetris.

If Tetris has taught me anything, it's the history of the Soviet Union.

We can reformulate Tetris as follows: challenges keep appearing (at a fixed rate), and must be solved at the same rate; we cannot let too many unsolved challenges pile up, or we will be overwhelmed and lose the game.
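This reformulation can be sketched as a toy queue model. All the parameters here are hypothetical, chosen only to illustrate the fixed-rate framing:

```python
def play(arrival_rate, solve_rate, capacity=20, ticks=1000):
    """Toy model of the reformulation above: challenges appear at a
    fixed rate each tick, up to solve_rate of them are cleared, and
    the game is lost once the unsolved backlog overflows."""
    backlog = 0.0
    for t in range(ticks):
        backlog = max(backlog + arrival_rate - solve_rate, 0)
        if backlog > capacity:
            return t          # overwhelmed: game over at this tick
    return None               # kept pace for the whole run

print(play(arrival_rate=1.0, solve_rate=1.0))  # → None (keeping pace)
print(play(arrival_rate=1.5, solve_rate=1.0))  # → 40 (falls behind, overwhelmed)
```

The model makes the comment's point mechanical: survival depends only on whether the solve rate matches the arrival rate, since the backlog absorbs any temporary shortfall only up to its capacity.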

So Tetris is really an anti-procrastination learning tool? Hmmm, wonder why that doesn't sound right….

But the challenge rate is not fixed. It increases at higher levels. So the lesson seems rather hollow: At some point, if you are successful at solving challenges, the rate at which new ones appear becomes too high for you.
Just like life. The reward for succeeding at a challenge is always a new, bigger challenge.
At which point you die, for lack of intelligence. Actually a fairly good metaphor for x-risk, surprisingly. Of course, it's a lot easier to make a Tetris-optimizer than a Friendly AI...
I thought Tetris had been proven to always eventually produce an unclearable block sequence.
Only if there is a possibility of a sufficiently large run of S and Z pieces. In many implementations there is not. []
It was either that or risk some people playing without stop until their bodies died in the real world.
...thus becoming useful object lessons to the rest of the species, and reducing our average susceptibility to reward systems with low variability. Not quite seeing the problem here.
And today's challenges can be used to remedy yesterday's failures.
How is that a rationality quote?
-8Eliezer Yudkowsky9y

If your parents made you practice the flute for 10,000 hours, and it wasn't your thing, you aren't an expert. You're a victim.

The most important skill involved in success is knowing how and when to switch to a game with better odds for you.

Scott Adams


"Quitters never win, winners never quit, but those who never win AND never quit are idiots"

From the same website [], another LessWrongian wisdom:
This is an incredibly important life skill.

Karl Popper used to begin his lecture course on the philosophy of science by asking the students simply to 'observe'. Then he would wait in silence for one of them to ask what they were supposed to observe. [...] So he would explain to them that scientific observation is impossible without pre-existing knowledge about what to look at, what to look for, how to look, and how to interpret what one sees. And he would explain that, therefore, theory has to come first. It has to be conjectured, not derived.

David Deutsch, The Beginning of Infinity

Did Karl Popper populate his class with particularly unimaginative students? If someone asked me to "observe", I'd fill an entire notebook with observations in less than an hour -- and that's even without getting up from my chair.

And, while you were writing, someone would provide the wanted answer ;)

I'm pretty sure I had this very exercise in a creative-writing class somewhere in school.
That's an interesting prediction. Have you tried it? Can you predict what you'd do after filling the notebook? In my imagination, I'd probably wind up in one of two states:

  • Feeling tricked and asking myself "What was the point of that?"
  • Feeling accomplished and waiting for the next instruction.
I have never tried it myself in a structured setting, such as a classroom; but I do sometimes notice things, and then ask myself, "What is going on here? Why does this thing behave in the way that it does?". Sometimes I think about it for a while, figure out what sounds like a good answer, then go on with my day. Sometimes I shrug and forget about it. Sometimes -- very rarely -- I'm interested enough to launch a more thorough investigation. I imagine that if I set myself an actual goal to "observe" stuff, I'd notice a lot more stuff, and spend much more time on investigating it. You say that, in such a situation, you could end up "feeling tricked", but this assumes that the teacher who told you to "observe" is being dishonest: he's not interested in your observations, he's just interested in pushing his favorite philosophy onto you. This may or may not be the case with Karl Popper, but observations are valuable (and, IMO, fun) regardless.
Hmm, this point seems more Kuhnian than Popperian. Maybe Deutsch got the two confused.
Another view. []

But, Senjougahara, can I set a condition too? A condition, or, well, something like a promise. Don't ever pretend you can see something that you can't, or that you can't see something that you can. If our viewpoints are inconsistent, let's talk it over. Promise me.


In Bakemonogatari, the main characters often encounter spirits that only interact with specific people under specific conditions, although the effects they have are real (and would manifest to another's eyes as inexplicable paranormal phenomena). As such it's more a request about shoring up inconsistencies in sense perception, than it is about inconsistencies in belief.
That, and I'm getting the distinct impression their world is a non-Euclidean mess.

Finding a good formulation for a problem is often most of the work of solving it... Problem formulation and problem solution are mutually-recursive processes.

David Chapman

5Eliezer Yudkowsky9y
See also: "Figuring out what should be your top priority" vs. "Actually working on your current best guess".

The opposite intellectual sin to wanting to derive everything from fundamental physics is holism which makes too much of the fact that everything is ultimately connected to everything else. Sure, but scientific progress is made by finding where the connections are weak enough to allow separate theories.

-- John McCarthy

"But think how small he is," said the Black Panther, who would have spoiled Mowgli if he had had his own way. "How can his little head carry all thy long talk?"

"Is there anything in the jungle too little to be killed? No. That is why I teach him these things, and that is why I hit him, very softly, when he forgets."

Rudyard Kipling, The Jungle Book

And anyone that’s been involved in philanthropy eventually comes to that point. When you try to help, you try to give things, you start to have the consequences. There’s an author Bob Lupton, who really nails it when he says that when he gave something the first time, there was gratitude; and when he gave something a second time to that same community, there was anticipation; the third time, there was expectation; the fourth time, there was entitlement; and the fifth time, there was dependency. That is what we’ve all experienced when we’ve wanted to do good. Something changes the more we just give hand-out after hand-out. Something that is designed to be a help actually causes harm.

Peter Greer

The other way to look at that is the other agent doing basic induction.
It is. That doesn't mean the results are good.

Life isn't about finding yourself. Life is about creating yourself.

George Bernard Shaw

I agree with the thought, but I find the attribution implausible. "Finding yourself" sounds like modern pop-psych, not a phrase that GBS would ever have written. Google doesn't turn up a source.

Google nGram [] suggests that "finding yourself" wasn't a phrase that was really in use before the 1960s, albeit with a short uptick in 1940. Given that you need some time for criticism and Shaw died in 1950, I think it's quite clear that this quote is too modern for him. Although maybe post-modern is a more fitting word? The timeframe seems to correspond with the rise of post-modern thought. If you suddenly start deconstructing everything you need to find yourself again ;)
I think you are right that it is difficult to find the exact source. I came upon this quotation in the book Up where the author quoted Bernard Shaw. Google gave me [], but no article or play was indicated as a source of this quote.
"Life is about creating yourself" still might be problematic because the emphasis is still on what sort of person you are.
As opposed to what? I would guess maybe a better concept is what you're able to get done...
I think the implied contrast is between "creating yourself" and "what you do" or the less pretty but more precise "doing your actions." The first implies a smaller, more rigid set than the last, which is perhaps not the correct way to perceive life.

The best solution to a problem is usually the easiest one.

-- GLaDOS from Portal 2

If you cast out all the easy strategies that don't actually work as non-'solutions', then sure, in what remains among the set of solutions, the best is often the easiest, though not easy. I can think of much harder ways to save the world and I'm not trying any of them.

If you define best as easiest.
If best is defined as easiest, then the "usually" within the quote is entirely superfluous. "If" statements are logically exception-less, and the Law of Conserved Conversation (that I've just made up) means that "usually" implies exceptions. Otherwise it would be excluded from the quote. So I say, pedantically, "duh, but you're missing the point a bit, aren't you, mate?" I like to think of the principle as a kind of Occam's razor for action: don't take elaborate actions to produce some solution that is otherwise trivially easy to produce.
You may want to read something about pragmatics, starting with e.g. the section on conversational implicatures in Chapter 1 [] of CGEL []. (Your made-up law sounds related to these [].)
Huh. The Maxim of Relation does sound very much like what I was trying to go for.
I see it as more of a "rather than sorting projects by revenue, make sure to sort them by profit," combined with "in cases where revenue is concave and cost linear, which happen frequently, the lowest cost project is probably going to be the highest profit."
That plus "beware inflated revenue estimates, especially for have-it-all type plans". Cost estimates are often much more accurate.
Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.

I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.

Stephen Jay Gould

I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.

A proactive interest in the latter would seem to lead to extensive instrumental interest in the former. Finding things (such as convolutions in brains or genes) that are indicative of potentially valuable talent is the kind of thing that helps make efficient use of it.

There are surprisingly few MRI machines or DNA sequencers in cotton fields and sweatshops. Paraphrasing the original quote from Stephen Jay Gould: The problem is not how good we are at detecting talent; it's where we even bother to look for it.

You need neither MRI machines nor DNA sequencers to detect intelligence. IQ test perform much better at detecting intelligence.

Yes; at this point with only 3 SNPs linked to intelligence, it's a joke to say that 'poor people aren't being sequenced and this is why we aren't detecting hidden gems'.

Yes, but that wasn't the point of my post; I was replying to: An MRI machine was an example of a device that could detect convolutions in brains; a DNA sequencer was an example of a device that could detect genes. My point generalized to "it doesn't matter how good you are at testing for a trait, if you don't apply the test." If we look at IQ tests instead, then (again) it doesn't matter how accurately a properly-administered IQ test detects intelligence, if you don't bother properly administering IQ tests to people in cotton fields, sweatshops, or other places where you don't feel like looking because they aren't "under the lamppost", as it were.
In a country like China there's quite a bit of testing in school. I think it's quite plausible that there are people who went through the Chinese school system working in Chinese sweatshops and cotton fields.
Is the IQ test properly designed and administered, or does the test-as-given have hidden correlations with things other than IQ?

I suspect, actually, that Gould would not view "find the geniuses and get them out of the fields" as a reasonable solution to the problem he poses. What he wants is for there to be no stoop labour in the first place, whether for geniuses or the terminally mediocre. The geniuses are just a way to illustrate the problem.

That's a hard problem, with no reasonable way to measure it in a large population in sight, or even the direction of the relationship taken into account. Ideally you'd take a bunch of kids and look at their brains and then see how they grew up and see whether you could find anything that altered the distribution in similar cases - but... Well, you see the problem? It's a sort of twiddling-your-thumbs style of studying, rather than addressing more immediate problems that might do something at a reasonable price/timeline.
There was only one Ramanujan; and we are all well-aware of Gould's views on intelligence here, I presume.
In what reference class?

I chose Ramanujan as my example because mathematics is extremely meritocratic, as proven by how he went from poor/middle-class Indian on the verge of starving to England on the strength of his correspondence & papers. If there really were countless such people, we would see many many examples of starving farmers banging out some impressive proofs and achieving levels of fame somewhat comparable to Einstein; hence the reference class of peasant-Einsteins must be very small since we see so few people using sheer brainpower to become famous like Ramanujan.

(Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s - or 4 standard deviations away, even in a population of billions we still would only expect a small handful of Ramanujans - consistent with the evidence. Gould, of course, being a Marxist who denies any intelligence, would not agree.)

from poor/middle-class Indian

It is worth pointing out that Ramanujan, while poor, was still a Brahmin.

And not just that, but he had more education than the poorest Indians, and probably more than the second poorest. And got his hands on a math textbook, which was probably pretty low probability.

My bet is that there aren't a lot of geniuses doing stoop labor, especially in traditional peasant situations, but there are some who would have been geniuses if they'd had enough food when young and some education.

Even the poorest Indians (or Chinese, for that matter) will sacrifice to put their children through school. Ramanujan's initial education does not seem to have been too extraordinary, before his gifts became manifest (he scored first in exams, and that was how he was able to go to a well-regarded high school; pg25). Actually, we know how he got his initial textbooks, which was in a way which emphasizes his poverty; pg26-27: So just as well he was being lent and awarded all his books, because certainly at age 11 as a poor Indian it's hard to see how he could afford expensive rare math or English books... A rather tautological comment: yes, if we removed all the factors preventing people from being X, then presumably more people would be X...
Is the distribution for mathematicians in general stochastic with respect to IQ and a wealthy upbringing / proximity to cultural centres that reward such learning? That might give you signs of whether wealth / culture is a third correlate. Otherwise, one way or the other, I'm not sure one person shifts the prob any appreciable distance.
It really depends on what 'prob' you're talking about. For example, the mean of some variable can be shifted an arbitrary amount by a single person if they are arbitrarily large, which is why "robust statistics []" shuns the mean in favor of things like the median, and of course a single counter-example disproves a universal claim. When you are talking about lists of geniuses where the relevant group of geniuses might be 10 or 20 people, 1 person may be fairly meaningful because the group is so small.
Being a Brahmin does not put rice on the table. Again, he was on the brink of starving, he says; this screens off any group considerations - we know he was very poor.
It screens off any wealth considerations, with the exception of his education (which is mildly relevant). It has a big impact on the question of average IQ and ancestry, though. Brahmin average IQ is probably north of 100,* and so a first-rank mathematician coming from a Brahmin family of any wealth level is not as surprising as a first-rank mathematician coming from a Dalit family. So we still need to explain the absence (as far as I know) of first-rate Dalit mathematicians. Gould argues that they're there, and we're missing them; the hereditarian argues that they're not there. One way to distinguish between the two is to evaluate the counterfactual statement "if they were there, they wouldn't be missed," and while Ramanujan is evidence for that statement it's weakened because of the potential impact of caste prejudice / barriers. (It seems like the example of China might be better; it seems that young clever people have had the opportunity to escape sweatshops and cotton fields and enter the imperial service / university system for quite some time. Again, though, this is confounded by Han IQ being probably slightly north of 100, and so may not generalize beyond Northeast Asia and Europe.) *Unfortunately, there is very little solid research on Indian IQ by caste.
You'd need to examine the IQ of the poorer Brahmins, though, before you could say it's not surprising; otherwise if the poor Brahmins have the same IQs as equally poor Dalits, then it ought to be equally surprising. But Ramanujan is evidence against the Great Filters of nationality and poverty, which ought to be much bigger filters against possible Einsteins than caste. Yes, but I'm not very familiar with the background of major Chinese figures (eg. I just looked him up now and while I had assumed Confucius was a minor aristocrat, apparently he was actually the son of an army officer and "is said to have worked as a shepherd, cowherd, clerk, and a book-keeper." []); plus, you'd want to look at the post-Tang major Chinese figures, but that will exclude most major Chinese figures period like all the major philosophers - looking up the Chinese philosophy table in Murray's Human Accomplishment, like the first 10 are all pre-examination (and Murray comments of one of them, " it was Zhu Xi who was responsible for making Mencius as well known as he is today, by including Mencius’s work as part of “The Four Books” that became the central texts for both primary education and the civil service examinations").
He's literally as much evidence against those filters as he is evidence against hypothetical very low prevalence of poor innate geniuses.

I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.

Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation.

A gross exaggeration; execution was never in the cards for a poisoned apple which was never eaten.

Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy.

Likewise. Goedel didn't go crazy until long after he was famous, and so your conjugate is in no way showing 'privilege'.

Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality.

Likewise. You have some strange Whiggish conception of history where all periods were ones where gays would be lynched; Turing would not have been lynched anymore than President Buchanan would have, because so many upper-class Englishmen were notorious practicing gays and their boarding schools Sodoms and Gomorrahs. To remember the context of Turing's homosexuality conviction, this was in the same period where highly-placed gay Englishman after gay Englishman was turning out to be Soviet moles (see the Cambridge Five and how the bisex... (read more)

Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition. Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there), that would itself imply a context containing more censoring factors for any potential mathematician. To become a mathematician, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and that being a mathematician at that level is a viable way to support your family. On your specific objections to my conjugates...I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position. Hardly a gross exaggeration. Goedel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton's conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.
I'm perfectly happy to accept the existence of oppression, but I see no need to make up ways in which the oppression might be even more awful than one had previously thought. Isn't it enough that peasants live shorter lives, are deprived of stuff, can be abused by the wealthy, etc? Why do we need to make up additional ways in which they might be opppressed? Gould comes off here as engaging in a horns effect: not only is oppression bad in the obvious concrete well-verified ways, it's the Worst Thing In The World and so it's also oppressing Einsteins! Not what Gould hyperbolically claimed. He didn't say that 'at the margin, there may be someone who was slightly better than your average mathematician but who failed to get tenure thanks to some lingering disadvantages from his childhood'. He claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India. It absolutely is. Don't confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean - with a population of ~1.3 billion people, that's just proving the point. The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand. Really? Then I'm sure you could name three examples. Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity. Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?
Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India. Not all of them, I don't think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you'll be given a chance to prove yourself, gives a shit, has a way of getting you tested... Just going off Google, here: People being incarcerated for unsuccessful attempts to poison someone: [] [] [] Person being killed for suspected unsuccessful attempt to poison someone: [] I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn't completely come across. He was politic enough to overcome Vast Cultural Differences enough to get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.
And this part seems entirely plausible. American slaves had no opportunity to become famous mathematicians unless they escaped, or chanced to have an implausibly benevolent Dumbledore of an owner. Gould makes a much stronger claim, and I attach little probability to the part about the present day. But even there, you're ignoring one or two good points about the actions of famous mathematicians. Demanding citations for 'trying to kill people can ruin your life' seems frankly bizarre.
The specific oppressions you led off with: yes. I thought we were talking about Oppenheimer and Cambridge? It looks like if Oppenheimer hadn't had rich parents who lobbied on his behalf, he might have gotten probation instead of not. Given his instability, that might have pushed him into a self-destructive spiral, or maybe he just would have progressed a little slower through the system. So, yes, jumping from "the university is unhappy" to "the state hangs you" is a gross exaggeration. (Universities are used to graduate students being under a ton of stress, and so do cut them slack; the response to Oppenheimer of "we think you need to go on vacation, for everyone's safety" was 'normal'.)
"Oppenheimer wasn't privileged, he was only treated slightly better than the average Cambridge student."I'm sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.
I don't see the relevance, because to me "Einsteins in sweatshops" means "Einsteins that don't make it to ", for some Cambridge equivalent. If Ramanujan had died three years earlier, and thus not completed his PhD, he would still be in the history books. I mean, take Galois as an example: repeatedly imprisoned for political radicalism under a monarchy, and dies in a duel at age 20. Certainly someone ruined by circumstances--and yet we still know about him and his mathematical work. In general, these counterfactuals are useful for exhibiting your theory but not proving your theory. Either we have the same background assumptions- and so the counterfactuals look reasonable to both of us- or we disagree on background assumptions, and the counterfactual is only weakly useful at identifying where the disagreement is.
I don't think Epicurus was a slave. He did admit slaves to his school though, which is not something that was typical for his time. Perhaps you are referring to the Stoic, Epictetus [], who definitely was a slave (although, white-collar).
Whups, you're right. Some of the Greek philosophers' names are so easy to confuse (I still confuse Xenophanes and Xenophon). Well, Epictetus was still important, if not as important as Epicurus.
I think a better term might be 'meritocratic', and not 'democratic'. Unless mathematicians vote on mathematics?
Well, it is also democratic in the sense that what convinces the mathematical community is what matters, and there's no 'President of Mathematics' or 'Academie de la Mathematique' laying down the rules, but yes, 'meritocratic' is closer to what I meant.
Well, “democratic” strongly suggests a majority vote, and it's not like something that convinces 54% of the mathematicians who read it ‘wins’.
pg169-171, Kanigel's 1991 The Man Who Knew Infinity: Personally, having finished reading the book, I think Kanigel is wrong to think there is so much contingency here. He paints a vivid picture of why Ramanujan had failed out of school, lost his scholarships, and had difficulties publishing, and why two Cambridge mathematicians might mostly ignore his letter: Ramanujan's stubborn refusal to study non-mathematical topics and refusal to provide reasonably rigorous proofs. His life could have been much easier if he had been less eccentric and prideful. That despite all his self-inflicted problems he was brought to Cambridge anyway is a testimony to how talent will out.
I haven't heard that before. Do you have a source?

From his letter to G.H. Hardy:

I am already a half starving man. To preserve my brains I want food and this is my first consideration. Any sympathetic letter from you will be helpful to me here to get a scholarship either from the university or from the government.

Googling the text finds it quoted a bunch of places.

Wow, thanks!

Besides his letter to Hardy, Wikipedia cites The Man Who Knew Infinity (on Libgen; it also quotes the 'half starving' passage), where the cited section reads:

Describing the obsession with college degrees among ambitious young Indians around this time, an English writer, Herbert Compton, noted how "the loaves and fishes fall far short of the multitude, and the result is the creation of armies of hungry 'hopefuls'--the name is a literal translation of the vernacular generic term omedwar used in describing them--who pass their lives in absolute idleness, waiting on the skirts of chance, or gravitate to courses entirely opposed to those which education intended." Ramanujan, it might have seemed in 1908, was just such an omedwar. Out of school, without a job, he hung around the house in Kumbakonam.

Times were hard. One day back at Pachaiyappa's, the wind had blown off Ramanujan's cap as he boarded the electric train for school, and Ramanujan's Sanskrit teacher, who insisted that boys wear their traditional tufts covered, asked him to step back out to the market and buy one. Ramanujan apologized that he lacked even the few annas it cost. (His classmates, who'd observed his

... (read more)
I can't parse '271" feet', is this an OCR issue? If you loosen the belt by two yards, it can obviously reach at least a yard above the surface, because you can just go from ____ to __|__. And I recall that the actual answer is considerably more than that.
Given that the symbol " is the symbol for inches, and ' is the symbol for feet, I would suspect that there has been a mistyping in the quote. I think that what was meant to be there was 72" or 72.1" (inches), which is exactly two yards, or one-tenth of an inch over (one yard = three feet). That would produce the desired result of a nearly one-foot increase in the radius of the belt: adding 72 inches to the circumference of the belt would produce an increase of 11.46 inches (72 inches / (2 * pi)) in the radius of the belt, which in this case is the height above the ground.
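The arithmetic above can be sketched in a couple of lines (a minimal illustration; the 72-inch figure is the commenter's conjecture, not anything from the original puzzle):

```python
# Lengthening a circular belt by L raises its radius -- here, its height
# above the surface -- by L / (2 * pi), independent of the sphere's size.
from math import pi

def height_gain(added_length):
    """Increase in radius from adding `added_length` to the circumference."""
    return added_length / (2 * pi)

print(height_gain(72))  # adding 72 inches of belt -> ~11.46 inches of height
```

Note that the sphere's radius never enters the formula, which is what makes the puzzle counterintuitive.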
Eliezer Yudkowsky:
Was extremely democratic. Do we know this is still true?

"The Collapse of the Soviet Union and the Productivity of American Mathematicians" comes to mind as an interesting recent natural experiment where the floodgate of Russian mathematical talent was unleashed after the collapse of the USSR and many of them successfully rose in America despite academic math being a zero-sum game; consistent with meritocracy.

At the outlier level, I think so -- see e.g. Perelman. At the normal professor-of-mathematics level, probably not.
Okay, maybe there aren't other examples quite as good as him, but a few of these people surely come close.

Yes, but I'm not sure all of the populations working in cotton fields and sweatshops had such a low average IQ. (And Gould just said “people”, not “innumerable people” or something like that.)
Most of those people either seem to come from middle-class or better backgrounds, fall well below Einstein, or both (I mean, Eliezer Yudkowsky?)
Doesn't your observation that most successful autodidacts come from financially stable backgrounds SUPPORT the hypothesis that intelligent individuals from low-income backgrounds are prevented from becoming successful? With the facts you've highlighted, two conclusions may be drawn: either most poor people are stupid, or the aforementioned "starving farmers" don't have the time or the resources to educate themselves or "[bang] out some impressive proofs," on account of the whole "I'm starving and need to grow some food" thing. I don't see how such people would be able to afford books to learn from or time to spend reading them.
No, it doesn't; see my other comment. I was criticizing the list as a bizarre selection which did not include anyone remotely like Einstein. How did Ramanujan afford books? The answer to the autodidact point is to point out that once one has proven one's Einstein-level talent, one is integrated into the meritocratic system and no longer considered an autodidact.
Did you mean innumerate people?
I meant ‘lots of people’, not ‘people who cannot do arithmetic’. looks word up EDIT: Huh, looks like that was the right word after all.
Sorry, then. Your phrasing sounded wrong to me, but I was wrong.
Will you update your post after looking the word up confirms that it means what you thought it did?
I was going to but I forgot to. Thank you.
Isn't the average IQ 100 by definition?
Yes - but whose average?
Presumably the people who write the IQ test, based on whatever population sample they use to calibrate it. Is the point that the average IQ in India is 70-80, as opposed to the average in the US? (This could be technically true on an IQ test written in the US, without being meaningful, or it could be actually true because of nutrition or whatever). What data does the number 70-80 actually come from?
Presumably from the list of national IQ estimates in IQ and the Wealth of Nations.
It would naively seem that an IQ of 160 or more is 5 SDs above a mean of 85, but 4 SDs above a mean of 100, so the rarity would be 1/3,483,046 vs. 1/31,560, for a huge ratio of about 110 times the prevalence of extreme genius between the populations. Except that this is not how it works when the IQ-100 population has been selected from the other and subsequently has lower variance. Nor is it how the Flynn effect worked. Because, of course, the standard deviation is not going to remain constant.
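The naive calculation being criticized can be sketched as follows, assuming normal distributions with SD 15 (the usual IQ convention); depending on the normal table used, the figures land near but not exactly on the numbers quoted above:

```python
# Naive tail-probability comparison for IQ >= 160 under two population means,
# assuming normality with SD 15 -- exactly the assumption the comment disputes.
from math import erfc, sqrt

def tail(z):
    """P(Z >= z) for a standard normal, via the complementary error function."""
    return erfc(z / sqrt(2)) / 2

z_85 = (160 - 85) / 15    # 5 SDs above a mean of 85
z_100 = (160 - 100) / 15  # 4 SDs above a mean of 100

p85, p100 = tail(z_85), tail(z_100)
print(f"1 in {1/p85:,.0f} vs 1 in {1/p100:,.0f}, ratio ~{p100/p85:.0f}x")
```

The comment's point is that this whole sketch breaks down if the two populations don't share the same SD.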
You presume too much, the only thing I remember about Gould's views is that they are controversial.

Old man: Gotcha! So you do collect answers after all!

Eye: But of course! Everybody does! You need answers to base decisions on. Decisions that lead to actions. We wouldn't do much of anything, if we were always indecisive!

All I am saying is that I see no point in treasuring them! That's all!

Once you see that an answer is not serving its question properly anymore, it should be tossed away. It's just their natural life cycle. They usually kick and scream, raising one hell of a ruckus when we ask them to leave. Especially when they have been with us for a long time.

You see, too many actions have been based on those answers. Too much work and energy invested in them. They feel so important, so full of themselves. They will answer to no one. Not even to their initial question!

What's the point if a wrong answer will stop you from returning to the right question. Although sometimes people have no questions to return to... which is usually why they defend them, with such strong conviction.

That's exactly why I am extra cautious with all these big ol' answers that have been lying around, long before we came along. They bully their way into our collection without being invited by any questio

... (read more)
This is good, although when I read the comic I find myself interpreting Eye as valuing curiosity for curiosity's sake alone, in direct opposition to valuing truth, which I can't really get behind and which leads to me siding with the old man.

So, in a business setting, you’ve got to provide value to your customers so that they pay for the goods and services that you’re providing. Philanthropy is unfortunate in that the people that your customer base is made of oftentimes are the people that are writing the checks to support you. The people that are writing the donation checks are what keep organizations in business oftentimes. The people that are receiving the services, then, are oftentimes not paying for the services, and therefore their voice is not heard. And so within the nonprofit space, we’ve created a system where he/she who tells the best story is the one that’s rewarded. There’s an incentive to push down the stories that are not of positive impact. There’s the incentive to pretend that there are no negative things that happen, there’s the incentive to make sure that our failures are never made public, and there’s the disconnect between who’s paying for the service and who’s receiving the services. When you disconnect those two aspects, you do not have accountability that acts in the best interest of the people who are receiving what we are all trying to do, which is just to help in places of great need.

Peter Greer

Rewarding those who tell great stories is hardly limited to non-profits. Hollywood of course does this, as well it should. Fund raising for new ventures does this a lot; raising money for many sorts of investment at the retail level is largely an effort of telling good stories not particularly supported by statistical fact. Which isn't to say that this is not a problem for non-profits, but rather that non-profits might do well to see how other industries deal with this phenomenon.
At least in investing the people listening to the stories eventually find out whether their investment went sour.
The problem is doubtless exacerbated when those paying for the service and those receiving it live in different time periods.

The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.

-- Tillaume, The Alloy of Law

— Robert Fripp

But, unlike other species, we also know how not to know. We employ this unique ability to suppress our knowledge not just of mortality, but of everything we find uncomfortable, until our survival strategy becomes a threat to our survival.

[...] There is no virtue in sustaining a set of beliefs, regardless of the evidence. There is no virtue in either following other people unquestioningly or in cultivating a loyal and unquestioning band of followers.

While you can be definitively wrong, you cannot be definitively right. The best anyone can do is constantly to review the evidence and to keep improving and updating their knowledge. Journalism which attempts this is worth reading. Journalism which does not is a waste of time.

Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes' Theorem. Note: This means that you cannot be definitively wrong, not that you can be definitively right.
True, but possibly dangerously close to "There is no virtue in following other people or in cultivating followers".

To the layman, the philosopher, or the classical physicist, a statement of the form "this particle doesn't have a well-defined position" (or momentum, or x-component of spin angular momentum, or whatever) sounds vague, incompetent, or (worst of all) profound. It is none of these. But its precise meaning is, I think, almost impossible to convey to anyone who has not studied quantum mechanics in some depth.

I haven't studied quantum mechanics in any depth at all. The meaning I, as a layman, derive from this statement is: in the formal QM system a particle has no property labelled "position". There is perhaps an emergent property called position, but it is not fundamental and is not always well defined, just like there are no ice-cream atoms. Is this wrong?

Yes, it's wrong. In the QM formalism position is a fundamental property. However, the way physical properties work is very different from classical mechanics (CM). In CM, a property is basically a function that maps physical states to real numbers. So the x-component of momentum, for instance, is a function that takes a state as input and spits out a number as output, and that number is the value of the property for that state. Same state, same number, always. This is what it means for a property to have a well-defined value for every state.

In QM, physical properties are more complicated -- they're linear operators, if you want a mathematically exact treatment. But here's an attempt at an intuitive explanation: There are some special quantum states (called eigenstates) for which physical properties behave pretty much like they do in CM. If the particle is in one of those states, then the property takes the state as input and basically just spits out a number. Whenever the particle is in that state, you get the same number. For those states, the property does have a well-defined value.

But the problem in QM is that those are not the only states there are. There are other states as w... (read more)
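The eigenstate idea in the explanation above can be illustrated with a toy 2x2 example (a hedged sketch in plain Python, using the familiar spin-z operator; no physics library assumed):

```python
# Toy illustration: an operator acting on an eigenstate just rescales it
# (the property has a definite value); acting on a superposition, it does not.
from math import sqrt

def matvec(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

Z = [[1, 0], [0, -1]]           # spin-z operator (in units of hbar/2)

up = [1, 0]                     # eigenstate: Z|up> = +1 * |up>
print(matvec(Z, up))            # [1, 0] -- same state, eigenvalue +1

mixed = [1 / sqrt(2), 1 / sqrt(2)]  # equal superposition of up and down
print(matvec(Z, mixed))         # not a multiple of `mixed`: no definite
                                # spin-z value for this state
```

The second output has its components' signs flipped relative to each other, so no single number can serve as "the value" of spin-z there, which is what "not well-defined" means in this formalism.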

Thanks for the detailed explanation! Now I have more fun words to remember without actually understanding :-) Seriously, thanks for taking the time to explain that.

Why spend a dollar on a bookmark? ... Why not use the dollar as a bookmark?

-Steven Spielberg

Dollars are floppy. It's nice to have a relatively rigid bookmark. I've used tissues and such as bookmarks in the past but they're unsatisfactory. Of course, that was back when I still read books in dead tree format.


I'm reminded of a picture I saw on Facebook of a doorstop still in its original packaging used as a doorstop.

My bookmark is prettier than the dollar.

But when it's being used, you don't see it!

My bookmark is made of two pieces of fridge-magnet material. It can be closed around a few pages and the magnetism holds it in place, preventing it from falling out.

Plus dollars in my country are exclusively coins, the smallest note is $5.

exposure to objects common to the domain of business (e.g., boardroom tables and briefcases) increased the cognitive accessibility of the construct of competition (Study 1), the likelihood that an ambiguous social interaction would be perceived as less cooperative (Study 2), and the amount of money that participants proposed to retain for themselves in the “Ultimatum Game” (Studies 3 and 4).

-Abstract, Material priming: The influence of mundane physical objects on situational construal and competitive behavioral choice (via Yvain)

It will fall out. Apart from that, money isn't particularly clean and (especially if considering US currency) not particularly pretty either. I expect people to find a bookmark far more aesthetically pleasing than a note. How is this a rationality quote? It is rationality-neutral at best.
"Because the dollar is dirty" is one of those pained, stretched explanations people come up with to explain why they do what they do, not the actual reason (even in some small part) the bookmark was invented and became popular.
The question wasn't "Why was the bookmark invented?". If it was, I might have, for example, tried to determine the first time someone used a bookmark (or when it became popular). Then I could have told you precisely how many dollars in present value that dollar would have been worth. That is, moving the goalposts in this way has made your quote worse, not better. Not even in some small part? That's absurd. Can you not empathise in even a small part with the aesthetic aversion many people have to contaminating things with used currency?
Are you sure you didn't just go ahead and basically make up these people who don't want money to touch their book because it's dirty?
No. I've seen such people. When I look at the mirror, for example. Notice that the standard was explicitly set to: The observation that this kind of absurd claim is positively received and even supported by similarly ridiculous petty sniping is disheartening.
I've known at least a couple people who found it yucky to handle cash right before a meal for that same reason.
Said Achmiz:
I definitely wash my hands after handling money and before eating.
The answer may very well be, "because I find this bookmark that I bought at a dollar store a lot more aesthetically pleasing than the raw dollar bill". You may as well ask, "Why spend $20 on a book? Why not just save the $20?"
I get all kinds of entertainment out of reading a $20 bill.
I do neither. I use any piece of sufficiently stiff paper I happen to have around (bookmarks purchased by someone else, playing cards, used train tickets, whatever).
I tear out a blank page from the nearest notebook of sufficient size, and fold it as necessary.
Or just fold the corner of the page over.

While I respect your right to do so, I find such a concept aesthetically horrifying.

I never understood that... I remember when I was in elementary school there was a sign in the library that said something like "Don't dog-ear your books... you wouldn't like it if someone folded your ear over, so don't do it to your book." What?
That's not particularly uncomfortable.

You're suffering from the typical ear fallacy. Some people have much stiffer cartilage, or something; I don't find it uncomfortable, but I've met people who're caused actual pain by it.

With library books, I think the concern is more about wear-and-tear on shared property. Some of us leakily generalize this to "folding page corners is bad", even for non-shared books. When it's your own book, you can do whatever you want. Personally I find folded page corners less effective than bookmarks for quickly finding my place, especially if I've folded many other page corners, which makes the currently-folded one less visually obvious. But perhaps I'd learn to be better at that if I used it regularly.
It's a permanent mark that easily leads to tearing.
I made one when I was bored, long ago when my grandmother still ran her store and my uncle still ran his immigration law firm on the third floor, and when I was obsessed with knot theory, out of computer paper, tape, and a lot of hard pencil. I still use it, and it cost me next to nothing. EDIT: If requested (however unlikely) I will happily deliver a picture, and either a push or a bouillon cube (your choice). EDIT THE SECOND: it was requested!
Yes please! :-)
Done! Do you want a bouillon cube or a push? Think wisely.
What kind of push []?
This kind! []
I feel like I want the last few minutes of my life back.
That leaves a permanent crease, which I dislike. (Likewise, I prefer to use pencils -- preferably soft pencils -- rather than pens to take notes.)
It would seem that most of the responders are hopelessly literal....
I find it hard to come up with a deeper meaning for the original statement, so yeah. Besides, it's not hard to come up with a deeper meaning behind what the responders are saying; in pointing out that an object specifically designed as a bookmark makes a better bookmark than a dollar bill, they're making a statement about more than just dollar bills and bookmarks, but about specialization in general.
"We don't automatically reflect on most things we do, even when spending money. Even lifelong practices can be shown as absurd with a moment's consideration from the right angle. In fact, we're so irrational that we'll pay a dollar for a bookmark!"

A decision with an aesthetic benefit is not irrational. You are misusing "irrational".

(Or was this sarcasm?)

Reworded so people don't get caught up in that particular phrasing. (Also, please read the comment tree and note that I'm just trying to answer Jiro's implied question.)
I don't see why everyone is disagreeing with you. I definitely notice that people have a tendency to buy things labeled for some sort of purpose, where if they thought for a few minutes they could find a way to fulfill that same purpose without spending money. Unfortunately, I can't think of any examples off the top of my head.
That's clearly the intent - except maybe for that last bit - but it's kinda a poor example, I have to admit.
While I agree that people often make decisions without thinking them out, I think you are underestimating aesthetics. Aesthetics have psychological effects, and often people find better design structure aesthetically pleasing.
Your quote is both literally and connotatively poor. If Spielberg had asked "Why spend two dollars on a bookmark? ... Why not use a dollar as a bookmark?" then there would at least have been some moral along the lines of efficient practicality. Even then it would be borderline.
A dollar is much more fungible than a bookmark. After you're done reading your book, you can not only use the dollar to hold your place in other books, you can spend it on other things.
It is indeed a considerably more fungible one dollar.
It takes time and effort (admittedly not much of it, but usually even a little of it makes a difference psychologically) to spend $1 on a bookmark. (I would have phrased it as “Why bother spending ...”.)
Why use a bookmark that's worth a whole dollar? I use scrap paper, or a sticky note if falling out is a risk (it almost always isn't.)

I just think it's good to be confident. If I'm not on my team why should anybody else be?

-Robert Downey Jr.

I think it's good to be well-calibrated.


It is usually best to be socially confident while making well-calibrated predictions of success. The two are only slightly related and Downey is definitely talking about the social kind of confidence.

Good point. I'm still not sure I like his framing of social interactions as getting people on "your" team (which I may be partly biased in by the source of the quote), but the objection in my initial post isn't a good one.
I think it's best to be well-calibrated, use that to choose your team as one that's going to succeed, and then to be confident.
Maybe I'm misunderstanding the quote, but this seems to wither if you have something to protect. If I'm having surgery, I don't really want the team of expert surgeons listening to my suggestions. I shouldn't be on my team because I'm not qualified. Highly qualified people should be so that my team will win (and I get to live).

Well, I think the thrust of the quote had more to do with being confident in your own projects. But I'll try to do an answer to your point because I think it's important to recognise the limitations of domain specialists - some of whom just aren't very good at their jobs.

If you're not on your team of expert surgeons, you're gonna be screwed if they're not actually as expert as you might think they were. There's a bit in What Do You Care What Other People Think? where Feynman is talking about his first wife's hospitalisation, and how he had done some reading around the area and come up with the idea that it might be TB, and didn't push for the idea because he thought that the doctors knew what they were doing.

Then, sometime later, the bump began to change. It got bigger—or maybe it was smaller—and she got a fever. The fever got worse, so the family doctor decided Arlene should go to the hospital. She was told she had typhoid fever. Right away, as I still do today, I looked up the disease in medical books and read all about it. When I went to see Arlene in the hospital, she was in quarantine—we had to put on special gowns when we entered her room, and so on. The doctor was there,

... (read more)
Expert surgeons tend to think that more problems should be solved via surgery than doctors who aren't surgeons do. Before getting surgery you should always talk with a doctor who knows something about the kind of illness you are having who isn't a surgeon. After the operation is done, doctors will ask you if everything is alright with you. If you try to understand what the operation involved, you will give your doctor answers that are likely to be more informative than if you just try to place all responsibility onto another person. Especially if you feel something that's not normal for the type of operation that you got, it's important to be confident that you perceive something that's worth bringing to the attention of your doctor. Having had big operations (one with 8 weeks of hospitalisation and one with 3 weeks) myself, I think not taking enough responsibility for myself in those contexts was one of the worst decisions I made in my life. But then I was young and stupid about how the world works at the time.
Only if you're not the one with the responsibility to do something to protect it. I don't know the context of the quote, other than apparently being from an interview (with the actor, not any character he has played), but I read it as being about your own efforts to accomplish something. In such matters, you are the first person on your team, and you won't get any others on board by telling them you're not sure this is a good idea. Once you've made the decision that you are going to go for it, you have to then go for it, not sit around wondering if it's the right decision. If you're not acting on a decision, you didn't make it.
That may be a better wording of what I was trying to say here.
This works as a rationalization growing from the conclusion that others should be "on your team". If on well-calibrated assessment you yourself are not "on your team", others probably shouldn't be either, in which case projecting confidence amounts to deceit.
(Unless I don't understand what you are saying) I reject whatever definition 'deceit' is given such that the above claim is true. Behaving in a socially confident manner is different in nature to lying.
I was using "confidence" in a more specific sense, as in "overconfidence", that is implying that you know what you are doing, in the case where you actually don't. "Socially confident manner" might in contrast (for example, among many other things) involve willingness to state your state of uncertainty, as opposed to hiding it (including behind overconfidence).
This seems reasonable. Misleading about probabilities is deceptive. To be fair on Robert Downey, it doesn't seem likely that that is the usage he was making in the quote.
Jehovah's Witnesses (or insert your cult of choice) who secretly don't believe in what they're selling, army recruiters who have secretly come to know and reject the horrors of war, insurance salesmen who sell useless policies: All these (and many others) can be deceitful even without telling you their respective lies explicitly, just by using their social capital / community standing / aura of authority to signal their allegiance to their tribe, lending it credence in a deceitful (dishonest because not in tune with their well-calibrated assessment) manner. The similarity to lying comes from social cues (such as exuding confidence in one's role) and 'explicit' lies being forms of communication both.
It is possible to deceive others while using social confidence signals. Such signals are instrumentally useful or even vital for this and many other purposes. But this is not the same thing as the confidence being deceitful.
A somewhat similar sentiment: []
Why shouldn't they be? The idea that if you don't rate yourself highly no one should is just an excuse for shitty instincts. Obviously it's a useful piece of nonsense to tell yourself. People are more likely to come to your side if you are confident. But the explicit reasoning is reprehensible. (Not that any explicit reasoning probably went in; it's such a common idea that it is repeated without thought. It's almost a universal applause light.) This is more of an irrationality quote: a bit of paper-thin justification for a shitty but common sentiment which it's useful to adopt rather than notice.

He who knows nothing is closer to the truth than he whose mind is filled with falsehoods and errors.

-Thomas Jefferson

One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.

How about "If you know nothing and are willing to learn, you're closer to the truth than someone who's attached to falsehoods"? Even then, I suppose you'd need to throw in something about the speed of learning.
It would seem that the difference of opinion here originates in the definition of further. Someone who knows nothing is further (in the information-theoretic sense) from the truth than someone who believes a falsehood, assuming that the falsehood has at least some basis in reality (even if only an accidental relation), because they must flip more bits of their belief (or lack thereof) to arrive at something resembling truth. On the other hand, in the limited, human, psychological sense, they are closer, because they have no attachments to relinquish, and they will not object to having their state of ignorance lifted from them, as one who believes in falsehoods might object to having their state of delusion destroyed.
Right, I'd take it as a statement on how humans actually think, not how a perfect rationalist thinks. Or maybe how most humans think since humans can be unattached to their beliefs.
To me "filled with falsehoods and errors" translates into more falsehoods than "some". Though I agree it's not a very good quote within the context of LW.
-LessWrong Community
Maybe it's just where my mind was when I read it but I interpreted the quote as meaning something more like: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence."
In what units does one measure distance from the truth, and in what manner?
Bits of Shannon entropy.
That's half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?
If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- measuring the Shannon entropy of that belief is a simple matter of observing the outcome and taking the binary logarithm of your prediction or its converse, depending on what came true. Where S is the score: if A, then S = log2(X); if ¬A, then S = log2(1 - X). The lower the magnitude of the resulting negative real, the better you fared.
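That scoring rule can be written out as a minimal sketch (the function name is illustrative, not from any library):

```python
# Binary log score of a stated confidence, given the observed outcome:
# log2 of the probability you assigned to what actually happened.
# Closer to 0 is better; it is 0 only for full confidence in the truth.
from math import log2

def log_score(confidence, outcome):
    """Score a prediction P(A) = `confidence` against the boolean `outcome`."""
    return log2(confidence if outcome else 1 - confidence)

print(log_score(0.9, True))   # confident and right: a small penalty
print(log_score(0.9, False))  # confident and wrong: a much larger penalty
```

The asymmetry between the two printed values is what makes overconfidence costly under this rule.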
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple: under conditions of dubious ethics and physical possibility, you turn their internal world model into a formal Bayesian network, and for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle. It's impossible in practice, but it's only like a four-line formal definition.
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth? Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
I am not very certain that humans actually can have an internal belief model that isn't isomorphic to some Bayesian network. As for anyone who proclaims to be absolutely certain: I suspect that they are in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don't think people just miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. Errors appear only when the strings get long, the difference is small, and the strings are quite specially crafted. And with the scenarios, a more detailed scenario looks more plausibly like the product of some deliberate reasoning; plus, the existence of one detailed scenario is information about the existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome but about everything happening precisely as the scenario specifies). On top of that, the meaning of the word "probable" in everyday context is somewhat different - a proper study should ask people to actually make bets. All around, it's not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions. Edit: actually, I just read the Wikipedia article on the conjunction fallacy. When asked "how many people out of 100", nobody gave a wrong answer, which immediately implies that the understanding of "probable" (or some other cause) was the issue, not some general failure to apply conjunctions.
There have been studies that asked people to make bets. Here's an example []. It makes no difference -- subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
I've just read the example beyond its abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winnings were very tiny, and the sample sizes were small, so the difference was only marginally significant), approximately half of the questions were answered correctly, and the high prevalence of the "conjunction fallacy" was attained by counting at least one error over many questions.
How is it a "robust phenomenon" if it is negated by using strings of larger length difference in the head-tail example or by asking people to answer in the N out of 100 format? I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, for which the feedback they receive from the world is fairly noisy. And consequently they learn that fairly badly, or mislearn it all-together due to how more detailed accounts are more frequently the correct ones in their "training dataset" (which consists of detailed correct accounts of actual facts and fuzzy speculations). edit: Let's say, the notion that people are just generally not accounting for conjunction is sort of like Newtonian mechanics. In a hard science - physics - Newtonian mechanics was done for as a fundamental account of reality once conditions were found where it did not work. Didn't matter any how "robust" it was. In a soft science - psychology - an approximate notion persists in spite of this, as if it should be decided by some sort of game of tug between experiments in favour and against that notion. If we were doing physics like this, we would never have moved beyond Newtonian mechanics.
Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn't rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion. The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word "probability" involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by "probability", it's that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by "probability", why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by "probability"? In response to your Newtonian physics example, it's simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is th
There's only room for making it easier when the word "probable" is not synonymous with "larger N out of 100". So I maintain that an alternate understanding of the word "probable" (and perhaps also an invalid idea of what one should bet on) is relevant. Edit: to clarify, I can easily imagine an alternate cultural context where "blerg" is always, universally, invariably a shorthand for "N out of 100". In such a context, asking about "N out of 100" or about "blerg" should produce nearly identical results. Also, in your study, about half of the questions were answered correctly. I guess that's fair enough, albeit it's not clear how that works on Linda-like examples. In my opinion, it's just that through their lives people are exposed to a training dataset which consists of (1) detailed accounts of real events and (2) speculative guesses, and (1) is much more commonly correct than (2) even though (1) is more conjunctive. So people get mis-trained by a biased training set. A very wide class of learning AIs would get mis-trained by this sort of thing too. The point is that you can't pull the representativeness trick with e.g. R vs RGGRRGRRRGG. All the research I have ever seen used strings with a small percentage difference in their lengths. I am assuming that the research is strongly biased towards studying the un-obvious, while it is fairly obvious that R is more probable than RGGRRGRRRGG, and frankly we do not expect to find anyone who thinks that RGGRRGRRRGG is more probable than R.
Maybe a misunderstanding about the word is relevant, but it clearly isn't entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject's judgment about what the experimenter means by "probable". The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. This implies that at least 40% of the failures were the result of misunderstanding, which leaves only 60% for fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some people are plain stupid (10% are the dumbest 1 in 10, i.e. have an IQ of 80 or less), leaving easily less than 50% for the actual conjunction fallacy. Why so? If the word "probable" is fairly ill defined (as is the whole concept of probability), then it will or will not acquire a specific meaning depending on the context. Then representativeness works in the opposite direction from what's commonly assumed in the dice example. Speaking of which, "is" is sometimes used to describe traits for identification purposes: e.g. "in general, an alligator is shorter and less aggressive than a crocodile" is more correct than "in general, an alligator is shorter than a crocodile". If you were to compile traits for finding Linda, you'd pick the most descriptive answer. People know they need to do something with what they are told; they don't necessarily understand correctly what they need to do.
Poor brain design. Honestly, I could do way better if you gave me a millenium.
You know, at some point, whoever's still alive when that becomes not-a-joke needs to actually test this. Because I'm just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain? It's not unheard of for people to bet their lives on some belief of theirs.
That doesn't show that they're absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying. The real issue with this claim is that people don't actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you're considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternatively, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it treats probability (and certainty) as a thing in the mind, rather than outside it. In this case, I would see no contradiction in declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it's impossible for their god to contradict himself, or something like that). Further, I would exclude methods of changing someone's mind without using evidence (surgery or cosmic rays). I can't quite put it into words, but it seems like the fact that such a method isn't evidence and instead changes probabilities directly means that it doesn't so much affect beliefs as replace them.
Disagree. This would be a statement about their imagination, not about reality. Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context [] , where others are considering their beliefs. ETA: An example: While I haven't actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can't be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist. Yeah, fair enough.
You are correct. I am making my statements on the basis that probability is in the mind [], and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as "absolutely certain" seems to refer to a mental state, I would give it the definition of the latter.
(I suspect that we don't actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it's my standard response to thought experiments based on people's ability to imagine things.) I'm not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I'm claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities from their internal belief system; [] while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs, which they don't have explicit access to and aren't even stored as probabilities (?), over onto explicitly stated probabilities. Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. The fact that it is not is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people's statement that they were absolutely, 100% confident of their belief was incorrect.
I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur. However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category - even if they are not intentionally lying, on some level they are not absolutely certain. I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.
First, fundamentalism is a matter of theology, not of intensity of faith. Second, what would these people do if their God appeared before them and flat out told them they're wrong? :-D
Fixed, thanks. Their verbal response would be that this would be impossible. (I agree that such a situation would likely lead to them actually changing their beliefs.)
At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what's impossible and what's not. Oh, and step back, exploding heads can be messy :-)
This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.
Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN'T-HEAR-YOU is a rather common debate tactic :-)
Don't you observe people doing that to reality, rather than updating their beliefs?
That too. Though reality, of course, has ways of making sure its point of view prevails :-)
Reality has shown itself to be fairly ineffective in the short term (all of human history).
8-0 In my experience reality is very very effective. In the long term AND in the short term.
Counterexamples: Religion (Essentially all of them that make claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
You are confused. I am not saying that false claims about reality cannot persist -- I am saying that reality always wins. When you die you don't actually go to heaven -- that's Reality 1, Religion 0. Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.
Wait, isn't that pretty much tautological, given the definition of 'reality'?
What's your definition of reality?
I can't get a very general definition while still being useful, but reality is what determines if a belief is true or false. I thought you were saying that reality has a pattern of convincing people of true beliefs, not that reality is indifferent to belief.
You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That's why reality always wins.
Most of my definition of 'true consequences' matches my definition of 'reality'.
Sort of. Particularly in the case of belief in an afterlife, there isn't a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different - or have a different meaning - from what they really are.
In those cases reality can take more drastic [] measures. Edit: Here [] is the quote I should have linked to.
Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one's own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.
See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.
What's the goal of rationalism as a movement?
No idea. I don't even think rationalism is a movement (in the usual sociological meaning). Ask some of the founders.
The founders don't get to decide whether or not it is a movement, or what goal it does or doesn't have. It turns out that many founders in this case are also influential agents, but the influential agents I've talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).
This is not an accurate representation of mainstream theology. Most theologists believe, for example, that it is impossible for God to do evil. See William Lane Craig's commentary [].
First, you mean Christian theology; there are a lot more theologies around. Second, I don't know what "mainstream" theology is -- is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians? Third, the question of limits on the Judeo-Christian God is a very, very old theological issue which has not been resolved to everyone's satisfaction, and no resolution is expected. Fourth, William Lane Craig basically evades the problem by defining good as "what God is". God can still do anything He wants, and whatever He does automatically gets defined as "good".
Clearly they would consider this entity a false God/Satan.
This is starting to veer into free-will territory, but I don't think God would have much problem convincing these people that He is the Real Deal. Wouldn't be much of a god otherwise :-)
That's vacuously true, of course. Which makes your original question meaningless as stated.
It wasn't so much meaningless as it was rhetorical.
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I'm a simulation or I've gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it's just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn't exist? I admit that I'm not certain about that (heh!), which is part of the reason I'm curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
So you're effectively saying that your prior is zero and will not be budged by ANY evidence. Hmm... smells of heresy to me... :-D
I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set. If you "cannot imagine under any circumstances" your imagination is deficient.
I am not arguing that it is not an empty set. Consider it akin to the intersection of the set of natural numbers, and the set of infinities; the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity, and if one does reach infinity, that is a good sign of an error somewhere in the calculation. Similarly, one should not be absolutely certain if they are updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain. What definition of absolute certainty would you propose?
So you are proposing a definition that nothing can satisfy. That doesn't seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I'll agree with you. However if we want to talk about what people call "absolute certainty" it would be nice to have some agreed-on terms to use in discussing it. Saying "oh, there just ain't no such animal" doesn't lead anywhere. As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of "absolute certainty" for which purpose and in which context?
You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly, and in fact are not close, to the point where they can mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake - it is a mental state, after all - but rather that it cannot be reached using proper rational thought. What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, and then write down "infinity" after the equals sign, and thereafter goes about believing that the sum of a certain series is infinite. The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some process other than the "correct" one, as the mathematician's brain has to somehow output "infinity" from the finite inputs it has been given, we can generate absolute certainty from finite evidence - it simply isn't correct. It doesn't correspond to something which is either impossible or inevitable in the real world, just as the inept mathematician's infinity does not correspond to a real infinity. Rather, they both correspond to beliefs about the real world. While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn't so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absol
:-) As in, like, every single human being... Yep. Provided you limit "proper rational thought" to Bayesian updating of probabilities, this is correct. Well, as long as your prior isn't 1, that is. I'd say that if you don't require internal consistency from your beliefs, then yes, you can have a subjectively certain belief which nothing can shake. If you're not bothered by contradictions, well then, doublethink is like Barbie -- everything is possible with it.
Well, yes. That is the point. Nothing is absolutely certain.
Why does a deficient imagination disqualify a brain from being certain?
Vice versa. Deficient imagination allows a brain to be certain.
... ergo there exist human brains that are certain. If people exist that are absolutely certain of something, I want to believe that they exist.
So... a brain is allowed to be certain because it can't tell it's wrong?
Tangent: Does that work? []
Nope. "I'm certain that X is true now" is different from "I am certain that X is true and will be true forever and ever". I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
In fact, unless you're insane, you probably already believe that tomorrow will not be Friday! (That belief is underspecified- "today" is a notion that varies independently, it doesn't point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
Not exactly that but yes, there is the reference issue which makes this example less than totally convincing. The main point still stands, though -- certainty of a belief and its time-invariance are different things.
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind. Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
I don't have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.
Define "absolute certainty". In the brain-in-the-vat scenario which is not impossible I cannot be certain of anything at all. So what?
So you're not absolutely certain. The probability you assign to "Today is Friday" is, oh, nine nines, not 1.
Nope. I assign it the probability of 1. On the other hand, you think I'm mistaken about that. On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it's not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer said it was Saturday, and your watch said Saturday, and the next thirty people you asked said it was Saturday... would you still believe it's Friday? If you ever come to think it's Saturday after assigning probability 1 to the statement "Today is Friday," then you can't be doing anything even vaguely rational - no amount of Bayesian updating will allow you to update away from probability 1. If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
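To make the arithmetic explicit: Bayes' rule maps a prior of 1 back to 1 no matter what the evidence says, while even a nine-nines prior eventually collapses. A minimal sketch (the 1000:1 likelihoods are made up for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

p_friday = 0.999999999  # nine nines
p_certain = 1.0

# Each observation (empty office, computer, watch, people saying "Saturday")
# is taken to be 1000x likelier if it is Saturday than if it is Friday.
for _ in range(4):
    p_friday = bayes_update(p_friday, 0.001, 1.0)
    p_certain = bayes_update(p_certain, 0.001, 1.0)

print(p_friday)   # has collapsed to roughly 0.001
print(p_certain)  # still exactly 1.0
```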
While I'm not certain, I'm fairly confident that most people's minds don't assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities from how people update their beliefs; if there is any situation that would lead you to conclude that it's not Friday, that suffices to show that your mind's internal probability for "Today is Friday" is not 1. Most of the time, when people talk about probabilities or state the probabilities they assign to something, they're talking about loose verbal estimates created by their conscious minds. There are various techniques for trying to make these match up with the evidence the person has, but in the end they're still just guesses at what's going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
Taking a (modified) page from Randaly's book, I would define absolute certainty as "so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false". Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).

We can easily forgive a child who is afraid of the dark; the real tragedy of life is when men are afraid of the light.

often misattributed to Plato

The tired and thirsty prospector threw himself down at the edge of the watering hole and started to drink. But then he looked around and saw skulls and bones everywhere. "Uh-oh," he thought. "This watering hole is reserved for skeletons."

Jack Handey

So good even dead people want to drink it.

(Reference. [])
To be fair, if you see a watering hole surrounded by skeletons, it probably means the water's toxic.
That's the joke.
Ah. I thought it was something like "I won't drink from this because it's reserved for skeletons (and will therefore die and perpetuate the cycle)," which was just bizarre enough to be a joke.

Nobody can believe nothing. When a man says he believes nothing, two things are true: first, that there is something in which he desperately, perhaps dearly, wishes not to believe; and second that there is some unspoken thing in which he secretly believes, perhaps even unknown to himself.

John C Wright

Is there a name for the fallacy of claiming to be an expert on the specific contents of other people's subconsciouses?
This sounds like it implies that both things must be true. It seems to me that either would be sufficient to justify someone saying they believe nothing.

Start by doing what’s necessary; then do what’s possible; and suddenly you are doing the impossible.

St. Francis of Assisi (allegedly)

A luxury, once sampled, becomes a necessity. Pace yourself.

Andrew Tobias, My Vast Fortune

...Each minute bursts in the burning room,
The great globe reels in the solar fire,
Spinning the trivial and unique away.
(How all things flash! How all things flare!)
What am I now that I was then?
May memory restore again and again
The smallest color of the smallest day:
Time is the school in which we learn,
Time is the fire in which we burn.

--Delmore Schwartz, "Calmly We Walk Through This April's Day"; quoted by Mike Darwin on the GRG ML

I like it when I hear philosophy in rap songs (or any kind of music, really) that I can actually fully agree with:

I never had belief in Christ, cus in the pictures he was white

Same color as the judge that gave my hood repeated life

Sentences for little shit, church I wasn't feeling it

Why the preacher tell us everything gon be alright?

Knew what it was for, still I felt that it was wrong

Till I heard Chef call himself God in the song

And it all made sense, cus you can't do shit

But look inside the mirror once it all goes wrong

You fix your own problems, tame yo

...
It's quite sad that Tupac Shakur is the focus of so many conspiracy theories, because he was quite the sceptic about wasting your time on this stuff when there was real work to do making the world better.
I always thought it was interesting that Tupac got all the conspiracy theories while Biggie got none, despite the fact that Biggie released an album called Ready to Die, died, then two weeks later released an album called Life After Death. It's probably because Tupac's music appeals more to hippie types who are into this kind of stuff.

Whatever alleged "truth" is proven by results to be but an empty fiction, let it be unceremoniously flung into the outer darkness, among the dead gods, dead empires, dead philosophies, and other useless lumber and wreckage!

Anton LaVey, The Satanic Bible, The Book of Satan II

Isn't it better to examine a falsehood to discover why it was so popular and appealing before throwing it away?

Then, to continue the metaphor, we should study it by telescope from afar, not as a present and influential entity in our own sphere of existence, but rather a distant body, informative but impotent, the object of curiosity rather than devotion.

— Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, p. 16
Only if they won't let you throw it away.

Wicked people exist. Nothing avails except to set them apart from innocent people. And many people, neither wicked nor innocent, but watchful, dissembling, and calculating of their chances, ponder our reaction to wickedness as a clue to what they might profitably do.

James Wilson

Said Achmiz:
Counter-quote.
Only loosely. The insightful part of the grandparent quote is the third sentence, which complements the moral-greyness issue quite well.
Said Achmiz:
I think it is only slightly insightful, at best. It's a gross simplification of how most people experience, and actually (under-the-hood) perform, moral calculations, and it simplifies away most of the interesting stuff.

Historically, most hackers have been not only men, but men of a sort of Mannie O’Kelly-Davis “git ’er done” variety, and that’s beginning to change now, so new norms of behavior must be adopted in order to create a welcoming and inclusive community.

Jeff Read

I have a better idea. Let’s drive away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors. That way we might continue to actually, you know, get stuff done.

Eric Raymond

Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.

Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.

Straw man. The grandparent explicitly made the scorn conditional, not 'on everyone'.

Failure to steel man. Replacing "everyone" with "people" leaves the basic point unchanged. ETA: ... or, I should say, leaves a point that (1) deserves reply and (2) was probably what the original hyperbolic version was getting at anyway.
Abuse of the 'steel man' concept and attempt to introduce a toxic social norm. I am strongly opposed to this influence. MixedNuts attempts to refute a quote using a non-sequitur. Supporting a false refutation is not being generous, it is being biased. It is being unfair to the initial speaker. So much so that it leaves the basic point a straw man.
Steel-manning a refutation does not equal supporting that refutation. In fact, steel-manning entails criticizing the original refutation, at least implicitly. However, when a claim is plausibly intended to be a hyperbolic version of a reasonable claim, pointing out that the hyperbolic version is a straw man, without addressing the reasonable version, is mostly just poisoning the discourse. (This charge doesn't apply to you if you sincerely believed that MixedNuts was non-hyperbolically claiming that literally everyone has scorn heaped on them in the community under discussion, or that MixedNuts would be read that way by many readers.)
I oppose your influence in this context for the aforementioned reasons. The point that you think is reasonable is still a straw man.
It would help me to understand why my version is a straw man if you would steel-man it. Then I could compare your steel man to my straw man and better feel the force of your criticism. (I certainly wouldn't take you to be supporting my straw man, which seemed to be your earlier concern.) As it stands, I am puzzled by your accusation because Eric Raymond said, "Let’s drive away people unwilling to adopt that 'git’r'done' attitude with withering scorn ...". Why is it a straw man to characterize this as "heaping scorn on people and seeing who sticks around"? Is it because you read it as "heaping scorn on people randomly...", rather than as "heaping scorn on people who are unwilling to adopt that 'git’r'done' attitude ..."? Or is it something else?
There isn't a convenient steel man available. Not all wrong (or, to be agnostic with respect to the correctness of our positions, disagreed with) positions have another position nearby in concept space that is agreed with (or, sometimes, disagreed with only with significant respect and more complicated reasoning). Because that is a different described procedure. They are similar in as much as scorn is applied in both cases but the selection process for when scorn is applied is removed and the intended outcome is changed. To illustrate, consider taking the required equivocation back in the other direction. We end up with: This seems to be a different empirical claim. It is also a more controversial claim and one that is less obviously correct. I certainly wouldn't expect scorn to be the optimal response in such circumstances but the claim that it wastes more time than the described alternative is still an empirical claim that would actually require empiricism to be done and cited. It isn't something that I have seen anywhere.
This was a helpful comment. I agree that, in general, wrong positions may lack steel-man versions. However, I am not convinced that this is the case here. Indeed, it seems to me that you provide just such a steel man in your comment. You are reading "seeing who sticks around" as the reason why the scorn is being applied. This is a possible reading. It might be the intended meaning, but it might not. The intended meaning might just be that "seeing who sticks around" is an outcome, and not the intended outcome. If the meaning was what you said, the sentence could have been written as "heaping scorn on people to see who sticks around". That would have been equally concise and less ambiguous. Since that wasn't what was written, your reading is less certain. Refutations of straw men are usually obviously correct. That is why straw men are offered. The steel man version of the straw-man-based refutation will rarely be so obviously correct, but it will be obviously better. The steel man will be more relevant, raise more important issues, be more likely to move the conversation forward in a productive way, and so on. You seemed to me to be offering just such a steel man when you wrote, Yes, your version is a different empirical claim, but steel men are generally different claims from the original "unsteeled" version. Your version raises controversial issues, but that need not obviate productive discussion. Most importantly, and as you point out, your steel man version raises empirical issues, which would help keep the conversation connected to reality. Moreover, addressing those empirical questions would probably require getting into the specific dynamics of the community under discussion. (What have the documented conversations in this specific community actually been like? What are the actual social dynamics and the actual history of how they've changed over time? What has this community accomplished, and under just what conditions, as a function of how much scor
I don't believe that it does, and here's why. Heaping scorn on everyone and seeing who sticks around is a selection process; the condition for surviving is being able to accept scorn, whether or not such scorn is warranted by the value system of the society. This is somewhat similar to hazing. Heaping scorn on a specific group of people for their unwillingness to adopt the values of the society (or, rather, some powerful subset of the society which has enough clout to control how things are run) is a selection process based on something of value to the society, and is more like punishment or selective admissions: people with the valued trait are encouraged, those without are allowed to leave. It would appear that there are very different implications, as the former selects those who can take unjustified scorn (a quality of dubious value), and the latter selects for any demonstrable quality desired by the society (in this case, a specific attitude towards problem-solving).
This is a good argument for the claim that MixedNuts's hyperbolic version, read literally, misses something important. (Your argument convinces me, anyway.) It is not clear to me that your argument addresses the "steel man" version in which "everyone" is replaced by "people who are unwilling to adopt that 'git’r'done' attitude".

Empirically, heaping scorn on everyone and seeing who sticks around

Eric Raymond isn't suggesting that. Why are you?

A relevant example: Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn't waste lots of time on flame wars.

Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn't waste lots of time on flame wars.

I don't follow kernel development much. Recently, a colleague pointed me to the rdrand instruction. I was curious about Linux kernel support for it, and I found this thread:

Notice that Linus spends a bunch of time (a) flaming people and (b) being wrong about how crypto works (even though the issue was not relevant to the patch).

Is this typical of the linux-kernel mailing list? I decided to look at the latest hundred messages. I saw some minor rudeness, but nothing at that level. Of course, none of these messages were from Linus. But I didn't have to go back more than a few days to find Linus saying things like, "some ass-wipe inside the android team." Imagine you were that Android developer reading that email. Would that make you want to work on Linux? Or would that make you want to go find a project where the leader doesn't shit on people?

Here's a revealing quote from one recent message from Linus: "Otherwise I'll have to start shouting at people again." ...

Actually, that depends. Mostly that depends on what the intent (and context) of calling me an idiot in public is. If the intent is, basically, power play -- the goal is to belittle me and elevate himself, reassert his alpha-ness, shift blame, provide an outlet for his desire to inflict pain on somebody -- then no, I'm not going to put up with it. On the other hand, if this is all a part of a culturally normal back-and-forth, if all the boss wants is for me to sit up and take notice, if I can without repercussions reply to him in public pointing out that it's his fat head that gets into his way of understanding basic things like X, Y, and Z and that he's wrong -- I'm fine with that. The microcultures of joking-around-with-insults exist for good reasons. Nobody forces you to like them, but you want to shut them down and that seems rather excessive to me.
I think it's pretty clear that Linus is more on the power-play end of the spectrum. Notice his comment above about the Android developer; that's not someone who is part of his microculture (the person in question was a developer on the Android email client, not a kernel hacker). And again, the shouting-as-punishment thing shows that Linus understands the effect that he has, but doesn't care. Also, Linus, as the person in the position of power, isn't in a position to judge whether his culture is fun. Of course it's fun for him, because he's at the top. "I was just joking around" is always what bullies say when they get called out. The real question is whether it's fun for others. The recent discussion (that presumably sparked the quotes in this thread) was started by someone who didn't find it fun. So even if there are some "good reasons" (none of which you have named), they don't necessarily outweigh the reasons not to have such a culture.
That's not clear to me at all. Note that management of any kind involves creating incentives for your employees/subordinates/those-who-listen-to-you. The incentives include both carrots and sticks and sticks are punishments and are meant to be so. If you want to talk about carrots-only management styles, well, that's a different discussion. I disagree. You treat fun and enjoyment of working at some place as the ultimate, terminal value. It is not. The goal of working is to produce, to create, to make. Whether it's "fun" is subordinate to that. Sure, there are feedback loops, but organizations which exist for the benefit of their employees (to make their life comfortable and "fun") are not a good thing.
For what it's worth, I've never worked at a place that successfully used aversive stimulus. And, since the job market for programmers is so hot, I can't imagine that anyone would willingly do so (outside the games industry, which is a weird case). This is especially true of kernel hackers, who are all highly qualified developers who could find work easily. I would point out that Linus Torvalds's autobiography is called "Just for Fun". Also, Linus doesn't have employees. Yes, he does manage Linux, but he doesn't employ anyone. I also pointed out a number of ways in which Linus's style was harmful to productivity.
Ahem. I think you mean to say that you never touched the electric fence. Doesn't mean the fence is not there. Imagine that someone at your workplace decided not to come to work for a week or so, 'cause he didn't feel like it. What would be the consequences? Are there any, err... "aversive stimuli" in play here? No need for imagination. The empirical reality is that a lot of kernel hackers successfully work with Linus and have been doing this for years and years. Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.
As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.
Um, Linux kernel doesn't work like that. Linus doesn't "add" anyone to development or "remove" anyone. And I don't know if companies who pay the developers would be likely to fire them if the developers' patches start to get rejected on a regular basis. Oh, and you misquoted your source. It's not 75% of developers, it's 75% of the share of kernel development and, of course, some developers are much more prolific than others.
Certainly he and his team are less likely to accept patches from people who they've had trouble with in the past? And people who have trouble getting patches accepted (for whatever reason) are probably not going to be paid to continue doing kernel development? It would surprise me if he's never outright banned anyone. Thanks for the correction, edited my comment above.
You are describing a (dubious) difference in word use, not a difference in how the world works.
I don' t think so -- it is a difference in how the world works. Anyone in the world can submit kernel patches. The filtering does not occur at the people level, it occurs at the piece-of-code level. Linus does not say "I pronounce you a kernel developer" or "You're no longer a kernel developer" -- he says "I accept this patch" or "I do not accept this patch".
No, I mean that touching the electric fence did not make me a more productive worker. I'm not saying that Linus's style will inevitably lead to instant doom. That would be silly. I'm saying that it's not optimal. Linux hasn't exactly taken over the world yet, so there's definitely room for improvement.
It's important to distinguish between Linux the operating system kernel, and the complete system of GNU+Linux+various graphical interfaces sometimes called "Linux". The Linux kernel can also be used with other userspaces, e.g. Busybox or Android, and it's very popular in these combinations on embedded systems and phones/tablets respectively. GNU+Linux is popular on servers. The only area where Linux is unsuccessful is desktops, so it's unfortunate that desktop use is so salient when people talk about "Linux". Linus only works on the kernel itself, and that's making great progress towards taking over the world.
Yes, I used to work for RMS; I am well aware of the difference. I should also note that most of the systems you mention use proprietary kernel modules; it would be better if they didn't, and perhaps if Linus's attitude were different, there would be more interest in fixing the problem. Also, desktops are where I spend most of my time, so I think they still matter a lot.
I use GNU+Linux on the desktop myself, and I share RMS's goals, although I'm willing to make bigger compromises for the sake of practicality than him. Linus does not share RMS's goals, so my point is that from Linus's point of view his management techniques are highly effective.
Pure hypothesis: Linux being unsuccessful on desktops is not a coincidence, because Linux is written in a low-empathy environment, but writing UI for the general public means that you don't get to blame users when they don't like your software. Possible test: Firefox is fairly good open source software for the general public. What's the culture at Mozilla/Firefox like for the programmers?
Um. The claim by novalis is that the Linux kernel is written in a "low-empathy" environment. The kernel has nothing to do with UI which, along with most applications, is quite separate. Linus has no influence over UI design or user-friendliness in general. There are two main GUI environments on Linux -- Gnome and KDE. I don't know what the atmosphere is for developers inside these organizations. I think there is a fair amount of infighting and office politics, but I have no clue if they are polite and tactful about it.
You know what Ubuntu is named after, BTW?
Yes, I do, though I don't see the relevance.
(Evidence about whether the Ubuntu people are ‘friendly’.)
It's evidence in the same sense that the name of a product like Repairwear Laser Focus Wrinkle & UV Damage Corrector is evidence that this face cream laser focuses your wrinkles and corrects your UV damage 8-/ "Ubuntu", by the way, means a lot more than friendliness.
How do you know? How do you know? (other than in a trivial sense that anything in real life is not going to be optimal) You're making naked assertions without providing evidence.
Well, I can tell you that afterwards, I felt like shit and didn't get much done for a while. Or I started looking for a new job (whether or not I ended up taking one, this takes time and mental energy away from my current job). And getting yelled at has never seemed to me to correlate with me actually being wrong, so I'm not clear on how it would have changed my behavior. Upthread, you linked to an article which quotes someone saying, "Thanks for standing up for politeness/respect. If it works, I'll start doing Linux kernel dev. It's been too scary for years." I also pointed out, in my discussion of the rdrand thread, that Linus wastes a bunch of time by being cantankerous. And speaking of the rdrand thread (which I swear I didn't choose as my example for this reason; I really did just stumble across it a few weeks ago), your linked article also quoted Matt Mackall, whom Linus yelled at in that thread: he's no longer a kernel hacker. Is Linus's attitude why? Well, he's complained about Linus's attitude before, and shortly after that thread, he ceased posting on LKML. And he's probably pretty smart -- he wrote Mercurial -- so it's a shame for the kernel to lose him. I can tell you that I, personally, would be uninterested in working under Linus, although kernel development isn't really my area of expertise, so maybe I don't count.
I hope you didn't take my position to be that yelling at people is always the right thing to do. There certainly is lots of yelling which is stupid, unjustified, and not useful in any sense. The issue is whether yelling can ever be useful. You are saying that no, it can never be. I disagree. The secondary issue is whether Linus runs kernel development in a good/proper/desirable/productive way. The major question here is the metric -- how do we decide what is a "good/... way". From your point of view, if you define a good way as "fun" for developers, then sure, it probably is possible to run the kernel in a more fun way. From my point of view, the proof of the pudding is in the eating. Is the kernel a good piece of software? I would argue that it is, and that it is a remarkably successful piece of software. More, I would argue that Linus deserves a lot of credit for making it so. Given this, I'm suspicious of claims that Linus' way is "non-optimal", especially if there is the strong underlying current of "I, personally, don't like it".
No, the issue is whether Linus's yelling is useful, or, whether yelling is generally useful enough in free/open source projects that it outweighs the costs. Specifically, whether "Let’s drive away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors. That way we might continue to actually, you know, get stuff done." is good or bad advice. You should be even more suspicious, then, of Linus saying that it's necessary and proper, given that he's said that he, personally, does like it.
Do you think we have a basic difference in values or there's some evidence which might push one of us towards the other one's position? He has the huge advantage in that he actually delivered and continues to deliver. His method is known to work. Beware the nirvana fallacy.
That's a pretty good question. Hypothesis: I think some of it might be a case of the "Typical Mind Fallacy". Maybe if Linus yelled at you, you wouldn't be bothered at all. But I know that my day would be ruined, and I would be less productive all week. So I assume that many people are like me, and you assume that many people are like you. I would be curious about a controlled experiment, where free/open source project leaders were told to act more/less like Linus for a month to see what would happen. But I guess that's pretty unlikely to happen. And one confounder is that a lot of people might have already left (or never joined) the free/open source community because of attitudes like Linus's. We could measure project popularity (say, by number of stars on github) against some rating of a project's friendliness. We might also survey programmers in general about what forces do/don't encourage them to work on specific free/open source projects. I'm sure there are studies available of what sorts of management are effective generally. I'll ask my MBA friend. I did a two-minute Google search for studies about what causes people to leave their jobs generally, but found such a variety of conflicting data that I decided it would need more time than I have. These things could definitely influence me to change my mind. I also think there might be a value difference, in that I do value fun pretty highly. That's especially true in the free/open source world, where nobody's getting rich, and where a lot of people are volunteers (this last is less true on Linux than on some other projects, but perhaps part of that is that all of the volunteers have been driven away). But in general, I would like to enjoy the thing I spend eight (or twelve) hours a day on. And even if this did make me somewhat less productive than I would be if I was less happy, I don't really mind that much.
Yes, I think the Typical Mind Fallacy plays some role in this. But then let's explicitly go around it. Let's postulate that the population of, say, qualified programmers, is diverse. Some are shy wallflowers, wilting from any glance they perceive as disapproving; some thrive in rough-and-tumble environments where you prove your solution is better by smashing your opponent into bits. Most are somewhere in between. This diverse population would self-sort by preferences -- the wallflowers would gravitate towards polite, supportive, never-a-harsh-word environments (in our case, OSS projects), while the roar-and-smash types will gravitate towards the get-it-done-NOW-you-maggot environments. Since OSS projects are easy to create and it's easy for developers to move from project to project, the entire system should evolve towards an equilibrium where most people find the environment they're comfortable with and stick with it. Now, that seems to me a fine way for the world to work. But would you object to such a state of the world? After all, there are some projects there which are "mean" and where you (and likely some other people) would be uncomfortable and unproductive. Oh, there are piles and piles of those. The only problem is, they all come to different conclusions (with a strong dependency on the decade in which the study was done). Put yourself into a manager's shoes and consider the difference between instrumental and terminal values. You, an employee/contributor, value fun highly. That is a terminal value for you. Being productive is a secondary goal and may also be an instrumental value (some but not all people are not having fun if they see themselves as being unproductive). Now, for a manager, the fun of his employees/contributors/developers is NOT a terminal value. It's only an instrumental value; the true terminal value is to Get Shit Done. Do you see how that leads to different perspectives?
Creating projects is easy; forking is hard. And nobody wants to create a new kernel from scratch. Kernel hackers don't really have a lot of options. So I don't think your theoretical world has anything to do with the real world. Also, it seems to me that culture doesn't end up contained within a single project; Linux depends on GCC, for instance, so the Linux people have to interact with the GCC people. Which means that culture will bleed over. I was recently at a technical conference and a guy there said, "yeah, security is perhaps the only community that's less friendly than Linux kernel development." So now it's not just one project that's off-limits, but a whole field. I also don't think there are necessarily any actual roar-and-smash types. That is, I think a fair number of people think it's fun to lay a beatdown on some uppity schmuck. I've experienced that myself, certainly. Why else would anyone bother wasting time arguing with creationists? But I'm not sure there are a lot of people who find it fun to be on the losing end of this. This is an extension of Arguments as Soldiers. When you're having a knock-down, drag-out fight with someone, it's harder to back down. Notice that the original example of a person in that category was Mannie O'Kelly -- a fictional character. [Linus]: (later in that email, he does give a nod to effectiveness, but that doesn't seem to be his primary motivator). I think it remains an open question whether Linus's style is in fact better than the alternative from the "get shit done" perspective. And the original quote implied, without evidence, that in fact it is. Not really sure why this is a "rationality" quote.
Forking is pretty easy -- it's getting people to follow your fork that's hard. Well, there are certainly enough programmers who prefer to discuss code in terms of "only a brain-dead moron could write a library that does foo" or "why is this retarded object making three fucking calls to the database for each invocation", etc. And while people generally don't find it fun to be on the losing side, this does not stop them from seeking and entering competitions and competitive spheres. Consider sports, e.g. boxing or martial arts. Steelman this. I am pretty sure that in the North European culture being "subtle or nice" is dangerously close to being dishonest. You do not do anyone a favour by pretending he's doing OK while in reality he's clearly not doing OK. There is a difference between being direct and blunt - and being mean and nasty. As I said, Linus' style is proven to work. We know it works well. An alternative style might work better or it might not -- we don't know. I suspect you have a strong prior but no evidence.
I don't understand what you're saying here. Are you saying that anyone is proposing that Linus act in a way that he would see as dishonest? Because I don't think that's the proposal. Consider the difference between these three statements:

  • Only a fucking idiot would think it's OK to frobnicate a beezlebib in the kernel.
  • It is not OK to frobnicate a beezlebib in the kernel.
  • I would prefer that you not frobnicate a beezlebib in the kernel.

The first one is rude, the second one is blunt, the third one is subtle/tactful/whatever. Linus appears to think that people are asking for subtle, when instead they're merely asking for not-rude. Blunt could even be:

  • When you frobnicate a beezlebib, it fucks the primary hairball inverters, so never do that.

So he doesn't even have to stop cursing. There are many FOSS projects that don't use Linus's style and do work well. What's so special about Linux? I've run a free/open source project; I tried to run it in a friendly way, and it worked out well (and continues to do so even after all of the original developers have left). I can also point to Karl Fogel's book "Producing Open Source Software", where he says that rudeness shouldn't be tolerated. He's worked on a number of free/open source projects, so he's had the chance to experience a bunch of different styles.
We keep hitting the Typical Mind Fallacy over and over again :-) Let me offer you my interpretation: the first one is blunt and might or might not be rude, depending on what the social norms and context are (and on whether thinking about frobnicating the beezlebib does provide incontrovertible evidence of severe brain trauma). The second one is not blunt at all, it's entirely neutral. The third one is a slightly more polite version of neutral. Your fourth example is still neutral, by the way -- there's nothing particularly blunt about explaining why something should not be done (or about using four-letter words, for that matter). To contrast I'll offer my examples:

  • (rude) You are a moron and can't code your way out of a wet paper bag! Stuff your code where the sun don't shine and never show it to me again!
  • (blunt) This is not working and will never work. You need to scrap this entirely and start from scratch.
  • (subtle) While this is a valuable contribution, we would really appreciate it if you went and twiddled the bogon emitter for us while we try to deal with the beezlebib frobnication on our own.

It's only the most successful open software ever. Otherwise, not much :-P
I recently came across this, which seems to have some evidence in my favor (and some irrelevant stuff).
A more direct approach might be: "no patches which frobnicate a beezlebib will be accepted". I would say the size (in terms of SLOC count), scope (everything from TVs to supercomputers), lack of an equivalent substitute (MySQL or Postgres? Apache or Nginx? Linux or... BSD?), importance of correctness (it's the kernel, stupid), and commercial involvement (Google, Oracle, etc.) make it very different from most FOSS projects. Mostly I'd say the size, complexity and very low tolerance of bugs. I have no idea if Linus's attitude is helpful or not. I tend to think he could do better with more direct, polite approaches like the above, but I don't hold that belief very strongly.
Posts like this [] encourage me to remark that I want to have a website where I feel free to respond to others' actual words, not by how I'd rationalize those words if I were personally committed to them.
I agree. My further comments shouldn't detract from this fact. I don't agree. Every CS student and their mother wants to write their own OS. There are a [] lot [] of [] projects [] out there. As to the effectiveness of the community, there's an important datapoint: BSD came before Linux, but Linux took over the world. I think this is generally attributed to a more vibrant community of developers.
The right comparison is to compare that to how much you'd be bothered if you had to clean up the mess left by an incompetent coworker. Or having to deal with an incompetent bogon [] in middle management.
Unsurprisingly, I've had to deal with both of these things. It has never seemed to me that yelling at someone could make them more competent. Educating them, or firing them and replacing them seems like a better plan.
The issue is whether the person in question would have been a productive contributor.
Well Bill Gates and Steve Jobs have similar reputations.
Bill Gates failed to create an organization that would thrive in his absence. We'll see how Steve Jobs did in a few more years (it seems likely that he did better, but he also had the famous "reality distortion field", which Linus doesn't). Steve Jobs also got kicked out of his own company for a bunch of years.
During which time the company tanked. In any case, your argument was that Linus might have better succeeded in "taking over the world" if he had used a less confrontational style. My point is that the people who did "take over the world" used the same style.
Punishments seem to have rapidly decreasing returns, especially given the availability of alternatives that are less abusive. Otherwise we'd threaten people when we wanted to make them more productive, rather than rewarding them - which most of the time we don't, above a low level of performance.
This is a shift of topic-- heaping scorn is one particular sort of punishment. Firing someone who isn't working after having given them several warnings is a punishment, but it isn't the same as a high-flame environment.
I don't understand the point that you are arguing. Basically all human groups -- workplaces, societies, countries, knitting circles -- have punishments for members who do unacceptable things. The punishments range from a stern talking to, ostracism, or ejection from the group to imprisonment, torture, and killing. In which real-life work setting you will not be punished for arbitrarily not coming to work, for consistently turning in shoddy/unacceptable results, for maliciously disrupting the workplace?
Of course all societies have punishments, but that doesn't address the point you were responding to which was that Linus was more on the power-play end of the spectrum. The ratio of reward to punishment, your leverage as determined by the availability of viable alternatives, matters in determining which end of that spectrum you're on. And that has implications for the quality of work you can get from people - while you may be punished for blatantly shoddy work, you're not going to be punished for not doing your best if people don't know what that is. The threat of being fired can only make people work so hard.
Um. How do you determine the ratio of reward to punishment for Linux kernel developers? Also whether you engage in power play is determined by your intent, not by ratio or leverage. Those determine the consequences (accept/revolt/escape) but not whether the original critique was legitimate or purely status-gaining.
You bring up some good points. I would go so far as to say that given a) the amount of subjective interpretation from the observers, b) the limited number of first-hand witnesses, and c) the difficulty of comparing the small number of sample societies for which we have observers, that in the absence of evidence roughly the strength of a formal study, this thread may not be able to reach an agreeable conclusion for lack of data.
The claim, as I understand it, is that the culture trades off fun for productivity. A common example given is Apple, where Steve Jobs was a hawk that excoriated his underlings, and thus induced them to create beautiful, world-conquering products.
Also that the culture selects for the people who find being productive fun.
While the more socially enlightened attitudes lead to very effective and high signal-to-noise conflict handling, as can be observed on Tumblr and MetaFilter?
[-][anonymous]9y 13

Here's my thought process upon reading this. (Initially, I assumed “git 'er done” meant something like ‘women are unimportant except as sex objects’, and I misread “unwilling” as “willing”.)

  • ‘How come that guy, who when talking about sex on his blog gets mind-killed to the point of forgetting how to do high-school maths, makes so much sense everywhere else? Maybe he was saner when younger, then got worse with age, or something.’ I follow the link, expecting it to go to somewhere other than Armed and Dangerous, e.g. somewhere on
  • I notice the link does go to his blog, and to a recent post at that. ‘So he is still capable of talking sense about such topics after all?’ I notice I am confused.
  • I realize he said “unwilling” not “willing”. ‘Er... Nope. He's crazy as usual.’
  • Appalled at the idea that anyone, even ESR, would say anything like that in public with an almost straight face, I decide to look “git 'er done” up. ‘Oh, that makes perfect sense, and I agree with him. But that's not about sex (except insofar as the cut-through-the-bullshit communication style is less rare among men than among women), so that doesn't actually show he's not mind-killed beyond all repair.’

(A... (read more)

Not necessarily, it might just encourage further frivolous complaints.
As opposed to feeding trolls, which is widely known to be extremely effective in making them shut up?
In context, the group you position here as 'trolls' is described as frivolous complainers. You advocate [] apologising and complying. Eugine is correct in pointing out that this can represent a perverse incentive (both in theory and in often-observed practice).
I dunno... if someone's goal is to fuel a flamewar to discredit you, it would seem to me that ranting about that is more likely to make their day than just reacting as though they had pointed out you misspelled their name and then going back to your business.
The courtesy rules at LW are pretty strict. I don't know whether things are different at CFAR and MIRI, but does insufficient scorn interfere with things getting done?
We use the karma system for that.
LW uses a karma system. I assume that CFAR and MIRI include a lot of in person and private conversation which isn't subject to a karma system. How do you think the effectiveness of cultures which have karma + courtesy compares to cultures which permit flaming?
In the thread, there were at least a couple of examples of high-verbal-abuse programming cultures (Apple and Linux) which get significant amounts of useful work done, and I think there were more. I don't believe that scorn just gets dumped on people who don't have a git'r'done attitude-- there have certainly been flame wars about the best programming language and operating systems, and no doubt about other legitimate differences of opinion. Still, I'm wondering about successful programming environments which enforce courtesy rules. The only one I can think of is Dreamwidth, from its self-description. Running a LiveJournal clone isn't nothing, but it also isn't as much as inventing new products. Any others?
So I asked a friend about courteous programming environments, and he mentioned a couple that he's worked at: Webmethods, renamed as Novell Business Service Management Managed Objects at Software AG Anyone know where Google fits on the courtesy to flame spectrum? How about Steam?
There is a bit of a difference between commercial, for-profit companies (especially public ones) and FOSS projects.

Except when physically constrained, a person is least free or dignified when under the threat of punishment. We should expect that the literatures of freedom and dignity would oppose punitive techniques, but in fact they have acted to preserve them. A person who has been punished is not thereby simply less inclined to behave in a given way; at best, he learns how to avoid punishment. Some ways of doing so are maladaptive or neurotic, as in the so­ called 'Freudian dynamisms'. Other ways include avoid­ing situations in which punished behaviour is likely t

... (read more)
Very close. I'd perhaps suggest that a person is less dignified when desperately seeking a reward that certainly isn't going to come.

How do you know that it will bring out his genius, Graff? It's never given you what you needed before. You've only had near-misses and flameouts. Is this how Mazer Rackham was trained? Actually, why isn't Mazer Rackham in charge of this training? What qualifications do you have that make you so sure your technique is the perfect recipe to make the ultimate military genius?

-- Will Wildman, analysis of Ender's Game

[-][anonymous]9y 7

The world is a lot simpler than the human mind can comprehend. The mind endlessly manufactures meanings and reflects with other minds, ignoring reality. Or maybe it enhances it. Not very clear on that part, I'm human as well.

I found this to be slightly unsettling when I realized it, though we may be talking about different things.

What the Great Learning teaches is: to illustrate illustrious virtue; to renovate the people; and to rest in the highest excellence.
The point where to rest being known, the object of pursuit is then determined; and, that being determined, a calm unperturbedness may be attained to.
To that calmness there will succeed a tranquil repose. In that repose there may be careful deliberation, and that deliberation will be followed by the attainment of the desired end.
The ancients who wished to illustrate illustrious virtue throughout the world, first ordered well t

... (read more)

It is not July. It is August.

[This comment is no longer endorsed by its author]

Saw this under "latest rationality quotes" and was like "man, I'm really missing the context as to how this is a rationality quote."

"If it July, I desire to believe it is July. If it is August, I desire to believe it is August..."

If the Romans had been more willing to rename months they were unwilling to keep in their original places, we might have a much saner calendar.

If people in the 1500 years since the Romans had been more willing to rename months...

Now you've got me thinking about the minimum level of rationality/processing power necessary to determine the month accurately...
Fixed! The perils of copy/paste.

There are no happy endings. Endings are the saddest part, So just give me a happy middle And a very happy start.

-Shel Silverstein

But but peak/end rule!

X will never reach [arbitrary standard], so let's not try to improve X.
I think the point is not that endings are generally and extrinsically sad, but rather that by definition, an ending is a thing which is sad, if we take the existence of such a thing to be good. (The ending of a bad thing, for example, is an exception, though generally because it allows for the existence of good things). The response, then, would not to be to try to improve endings, but rather to try to do away with them (and, barring that, improve the extrinsic qualities of the non-ending parts).

When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.

-- John McCarthy

8Eliezer Yudkowsky9y
Thus, whenever you look in a computer science textbook for an algorithm which only gives approximate results, you will find that the algorithm itself is very vaguely specified, since the result is just an approximation anyway. (I would have said: "When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.")
Thus we merely require citizens to "be responsible adults" before they can vote rather than give a sharp boundary such as 18 years old, college applications tell you "don't write a long, rambling essay" rather than enforce a 500-word limit, and food packaging specifies "sometime in September" for the expiration date. Sharp membership boundaries are useful to make it easy to test for the concept. Even if the concept is fuzzy and the test is imperfect, this doesn't need to be a waste of time.

Sharp membership boundaries, however, often result in people forgetting the fuzziness of the concept - there are some people who vote without being responsible adults, because they can; an essay can be boring and rambling at 450 words or impressive and concise at 600; and food can be good a bit past its expiration date (it doesn't usually go in the other direction in my experience, presumably because the risk of eating spoiled food vastly outweighs the risk of mistakenly tossing out good food, so expiration dates are the very early estimates).

Though sometimes it's even more useful to acknowledge that the sharp-boundaried concept we're testing for is different from, though perhaps expected to be correlated with in some way, the fuzzy concept we were initially interested in. That helps us avoid the trap of believing that 17-year-olds aren't responsible adults but 18-year-olds are, or that 550-word essays are long and rambling but 450-word essays aren't, or that food is safe to eat on September 25 but not on September 29. None of that is true, but that's OK; we aren't actually testing for whether voters are responsible adults, essays are long and rambling, or food is expired.
Just because humans do it doesn't mean it's a good idea.
To clarify, I also think all of these are good ideas; not necessarily the best possible, but definitely useful.
It doesn't prove it's a good idea, but it's evidence in its favour.
Well, sure. But that doesn't mean it's very strong evidence: I'd expect to see an average human (or nation) do something stupid almost as often as they do something intelligent.
We are obviously starting from very different premises. To me, the fact that lots of people do something is very strong evidence that the behaviour is, at least, not maladaptive, and the burden of proof is very much on the person suggesting that it is. And the more widespread the behaviour, the stronger the burden. Alternatively, you could just look at the evidence. When legal systems have replaced bright-line rules with 15-factor balancing tests, has that led to better outcomes for society as a whole? Consider in particular the criteria for the Rule of Law []. In the mid-20th century, coincident with high modernism and utilitarianism, these multi-part, multi-factor balancing tests were all the rage. Why are they now held in such disdain?
Unfortunately, the fact that lots of people do something may merely be an indication of a very successful meme: consider major religions. I will certainly grant that having a sharp restriction is better than a 15-factor balancing test, but I'm not arguing for 15-factor balancing tests. I'd go further, but I've just noticed that I don't really have much evidence for this belief, and I should probably go see how accomplished Chinese universities (which judge purely off the gaokao) are versus American universities first.
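A tiny illustration of the point in Eliezer's reply above (my sketch, not anything from the thread): an algorithm whose result is only approximate can nonetheless be specified exactly, down to its stopping rule.

```python
# Newton's method for square roots: the *answer* is an approximation,
# but the procedure and its tolerance are completely precise.
def approx_sqrt(x, tolerance=1e-10):
    """Return a value g with |g*g - x| <= tolerance (for x >= 0)."""
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # exact update rule
    return guess
```

The fuzziness lives entirely in the answer's distance from the true value; nothing about the definition itself is vague.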

Also: "Fuck every cause supported by compulsory taxation, and compulsory use of fiat currency." says the same exact thing, more precisely.

Huh? No it doesn't. It says an entirely different thing.

All experience is an arch wherethrough gleams that untravelled world, whose margin fades for ever and for ever when I move...

To follow knowledge like a sinking star, beyond the utmost bound of human thought.

Alfred, Lord Tennyson, Ulysses

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.

Fred P. Brooks, No Silver Bullet

I've always had misgivings about this quote. In my experience about 90% of the code on a large project is an artifact of a poor requirement analysis/architecture/design/implementation. (Sendmail comes to mind.) I have seen 10,000-line packages melting away when a feature is redesigned with more functionality and improved reliability and maintainability.

This is true, but the connotations need to be applied cautiously. Complexity is necessary, but it is still something to be minimised wherever practical. Things should be as simple as possible but not simpler.
More concretely, sometimes software can be simplified and improved at the same time.
This isn't necessarily true if the complexity is very intuitive. If it takes ten thousand lines of code to accurately describe the action "jump three feet in the air", then those ten thousand lines of code are describing what a jump is, what to do while in mid-air, what it means to land, and other things that humans may grasp intuitively (assuming that the actor is constructed in a manner similar to a human). Additionally, there are some complex features which are not specific to the software. We don't need to describe how a particular program receives feedback from the motor and sensors, how it translates the input of its devices, if these features are common to most similar programs - the description of those processes is part of the default, part of the background that we assume along with everything else we don't need to derive from fundamental physics. In other words, the complexity of software may correspond to a feature which humans may be able to understand as simple - because we have the prior knowledge necessary, courtesy of common nature and nurture. A full description of complexity is necessary if and only if it is surprising to our intuition.
That is, in some sense, his point - a phrase like "jump three feet in the air" does abstract away most of the computational essence, making it seem like a trivial problem, which it really, really isn't.

An educated mind is, as it were, composed of all the minds of preceding ages.

Le Bovier de Fontenelle

An educated mind is, as it were, composed of all the minds of preceding ages.

This explains all those urges I get to burn witches, my talent at farming, all my knowledge at hunting and tracking and my outstanding knack for feudal political intrigue.

(Composition is not the relationship to previous minds that education entails. Can someone think of a better one?)


Much better.
We rest upon the frontal lobes of giants.
Is that a praise of educated minds, or a caution against too readily classifying a mind as educated? (Possibly related: [])
I read it as expressing the same view as The Neglected Virtue of Scholarship [].
From the description of him on Wikipedia, I am certain it is the former, although the bone wedrifid picks with "composed" is symptomatic of where he falls short of his contemporary, Voltaire. He was a most refined, civilised, intelligent, and educated writer, very popular among the intellectual class, and achieved memberships of distinguished academic societies, but his strength, a great one indeed, was in writing well on what was already known, and he created little that was new. Voltaire's name lives to this day, but Fontenelle's, while important in his time, does not. Scholarship is indeed a virtue [], but Fontenelle's was not in service of a higher goal [].

It is fashionable in the US to talk about people who are on welfare and don’t work. That is not precisely true. Yes, there are people on welfare who neither have a regular job nor look for one. But what might not be understood is that these people are working: they are navigating the labyrinthine bureaucracy and making sure they meet all the guidelines to keep the money flowing. That is work. It is just not productive work. It is a work that is the result of perverse incentives.

Sarah Hoyt

I see small examples everywhere I look; they're just too specific to point the way to a general solution.

James Portnow/Daniel Floyd

It ain’t ignorance [that] causes so much trouble; it’s folks knowing so much that ain’t so.

Josh Billings

(h/t Robin Hanson)

Famously subverted by Ronald Reagan as:
How is that a subversion? It is exactly in accord with the original.
The key phrase is "our liberal friends." Everyone suffers from illusion of transparency, Dunning-Kruger, and etc., but Reagan is applying the bias selectively.

More of an anti-death quote, but:

"“Must I accept the barren Gift?
-learn death, and lose my Mastery?
Then let them know whose blood and breath
will take the Gift and set them free:
whose is the voice and whose the mind
to set at naught the well-sung Game-
when finned Finality arrives
and calls me by my secret Name.

Not old enough to love as yet,
but old enough to die, indeed-
-the death-fear bites my throat and heart,
fanged cousin to the Pale One's breed.
But past the fear lies life for all-
perhaps for me: and, past my dread,
past loss of Mastery and life,
the S... (read more)

Faced with the task of extracting useful future out of our personal pasts, we organisms try to get something for free (or at least at bargain price): to find the laws of the world -- and if there aren't any, to find approximate laws of the world -- anything at all that will give us an edge. From some perspectives it appears utterly remarkable that we organisms get any purchase on nature at all. Is there any deep reason why nature should tip its hand, or reveal its regularities to casual inspection? Any useful future-producer is apt to be something of a

... (read more)

Sages and scientists heard those words, and fear seized them. However, they disbelieved the horrible prophecy, deeming the possibility of perdition too improbable. They lifted the starship from its bed, shattered it into pieces with platinum hammers, plunged the pieces into hard radiation, and thus the ship was turned into myriads of volatile atoms, which are always silent, for atoms have no history; they are identical, whatever origin they have, whether it be bright suns, dead planets or intelligent creatures, — virtuous or vile — for raw matter is same

... (read more)
Not quite seeing the applicability as a rationality quote; but in "it's bed" you should drop the apostrophe.
I'd say it's highlighting the human fallacy to try to ignore and escape from bad news. Instead of facing this prophecy, they just destroyed the ship that delivered it to them and told themselves they were safe.
Actually, the prophecy was about the ship; the spaceship crashed into Aragena, their planet, and then curious inhabitants looked inside (and found nothing dangerous). After that came the messenger of their King and told them that they were all doomed. And they indeed were.
I imagine there's an implied "and then the Reapers came" or something.
Probably I'm incredibly late with that, but: a) thank you, embarrassing mistake fixed b) I was fascinated with the "volatile atoms" bit. It feels like a line taken from a poem on reductionism. I'm not sure that I managed to convey it because I'm not so well versed in English fiction and poetry. Also, I liked their safety measures; it's a pity they didn't work in the end.

Everything can be reduced to an abstraction, a puzzle, and then solved

-Ledaal Kes (Exalted Aspect Book: Air)

Are they a villain who "solves" people by removing them from their way? (Alternative response: Does "everything" include the puzzle of identifying something that can't be reduced to a puzzle?)
... You can remove people as problems without doing so euphemistically, i.e. killing them. If you befriend them, for example. And, well, yes. That does count as a puzzle.
The statement just seems weird without any context, I guess. It certainly isn't narrow []. Would you trust an AI that was being friendly to you as an attempted "solution" to the "puzzle" you presented?
That depends, what sort of solution is it trying to find? If it's trying to maximize my happiness, that's all fine and dandy; if it's trying to minimize my capacity as an impediment to its acquisition of superior paperclip-maximizing hardware, I would object. Either way, I base my trust on the AI's goal, rather than its algorithms (assuming that the algorithms are effective at accomplishing that goal).
Well, no, but I would never trust an AI if I couldn't prove (or nobody I trusted could prove) it was Friendly with respect to me, period. ... not that it would much matter, but.. Also, relevance? I'm not really understanding your point in general. Certainly, problems need to be solved, but I would hope that your morality is included as a constraint...
But not necessarily if you're a fictional character, hence my initial question. I think my point is that I'm not convinced the quote actually means anything, either in its original context or in its use here; it's sounding like "everything" just means "things for which the statement is true".
Still don't understand. By definition, if something is hampering you, it presents a problem: sometimes the solution is "leave it alone, all possible 'solutions' are actually worse," but it's still something that bears thinking about. It is somewhat tautological, I'll grant, but us poor imperfect humans occasionally find tautologies helpful.
This is similar to how I've interpreted it. The character comes from a pre-enlightenment society, and is considered one of the greatest intelligence agents largely due to his ability to get results where nobody else can. He privately attributes this success to a rational mind and extensive [chess] skill that trains him to approach things as though they can be solved. While "stop and think about problems like they were games to be won instead of chores to be blamed on someone else" may seem obvious to people used to thinking like that, it's a major shift for most people.

This... theory of female promiscuity has been championed by the anthropologist Sarah Blaffer Hrdy. Hrdy has described herself as a feminist sociobiologist, and she may take a more than scientific interest in arguing that female primates tend to be "highly competitive... sexually assertive individuals." Then again, male Darwinians may get a certain thrill from saying males are built for lifelong sex-a-thons. Scientific theories spring from many sources. The only question in the end is whether they work.

Robert Wright, The Moral Animal