• Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

A good rule of thumb might be, “If I added a zero to this number, would the sentence containing it mean something different to me?” If the answer is “no,” maybe the number has no business being in the sentence in the first place.

Randall Munroe on communicating with humans

Related: When (Not) To Use Probabilities:

I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers. (...) you shouldn't go around thinking that, if you translate your gut feeling into "one in a thousand", then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times. Your brain is not so well-calibrated.

This specific topic came up recently in the context of the Large Hadron Collider (...) the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.

I object to the air of authority given these numbers pulled out of thin air. (...) No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities

For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:

Remember the Bayes mammogram problem? The correct answer is 7.8%; most doctors (and others) intuitively feel like the answer should be about 80%. So doctors – who are specifically trained in having good intuitive judgment about diseases – are wrong by an order of magnitude. And it “only” being one order of magnitude is not to the doctors’ credit: by changing the numbers in the problem we can make doctors’ answers as wrong as we want.

So the doctors probably would be better off explicitly doing the Bayesian calculation. But suppose some doctor’s internet is down (you have NO IDEA how much doctors secretly rely on the Internet) and she can’t remember the prevalence of breast cancer. If the doctor thinks her guess will be off by less than an order of magnitude, then making up a number and plugging it into Bayes will be more accurate than just using a gut feeling about how likely the test is to work. Even making up numbers based on basic knowledge like “Most women do not have breast cancer at any given time” might be enough to make Bayes Theorem outperform intuitive decision-making in many cases.
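The mammogram arithmetic can be checked directly. A minimal sketch, using the standard textbook numbers for this problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate), which are what yield the 7.8% answer quoted above:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive mammogram) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Standard numbers: 1% prevalence, 80% sensitivity, 9.6% false positives.
print(posterior(0.01, 0.80, 0.096))   # ~0.078, i.e. 7.8%

# Even a prior that is wrong by an order of magnitude in either direction
# stays far closer to the truth than the intuitive 80% answer:
print(posterior(0.001, 0.80, 0.096))  # ~0.008
print(posterior(0.10, 0.80, 0.096))   # ~0.48
```

The last two lines illustrate the point in the quote: a made-up prior that is off by 10x still beats the unaided gut feeling by a wide margin.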


A detailed reading provides room for these to coexist.
I'd agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.

A skilled professional I know had to turn down an important freelance assignment because of a recurring commitment to chauffeur her son to a resumé-building “social action” assignment required by his high school. This involved driving the boy for 45 minutes to a community center, cooling her heels while he sorted used clothing for charity, and driving him back—forgoing income which, judiciously donated, could have fed, clothed, and inoculated an African village. The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.

Steven Pinker

What about: "using the education system to collect forced labor as a 'lesson' in altruism teaches selfishness and fails at altruism"?
I have to ask, do people ever really believe that these sorts of things are actually about helping people? I seem to recall my own ragpicking was pitched mainly in terms of how it would help my CV to have done some volunteering. That said, I can't tell if I'm just falling prey to hindsight bias and reinterpreting past events in favour of my current understanding of altruism, which is why I'm asking. Makes me wonder how things would look if schools had a lesson on effective altruism a few times a year. Surely not everyone would agree, but the waterline might rise a little.

I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here)." It’s such an interesting statement, because it has three layers of meaning.

The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meaning is the opposite: I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to such obviously wrong conclusions. And yet, the third, true meaning is actually more like the first: I lack the empathy, moral imagination or analytical skills to attempt even a basic understanding of the people who disagree with me.

In short, “I’m stupid.” Something that few people would ever post so starkly on their Facebook feeds.

--Megan McArdle

While I agree with your actual point, I note with amusement that what's worse is the people who claim they do understand: "I understand that you want to own a gun because it's a penis-substitute", "I understand that you don't want me to own a gun because you live in a fantasy world where there's no crime", "I understand that you're talking about my beauty because you think you own me", "I understand that you complain about people talking about your beauty as a way of boasting about how beautiful you are."... None of these explanations are anywhere near true.

It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."

It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."

Ah, but would it be, though?

It would probably be some kind of weird signalling game. On the other hand, posting "I don't understand how etc. etc.; please, somebody explain to me the reasoning behind it" would be a good strategy to start a debate and open an avenue to "convert" others.

It probably would. Usually a person who writes something like this is looking for an explanation.

I like this and agree that usually or at least often the people making these "I don't understand how anyone could ..." statements aren't interested in actually understanding the people they disagree with. But I also liked Ozy's comment:

I dunno. I feel like "I don't understand how anyone could believe X" is a much, much better position to take on issues than "I know exactly why my opponents disagree with me! It is because they are stupid and evil!" The former at least opens the possibility that your opponents believe things for good reasons that you don't understand -- which is often true!

In general, I believe it is a good thing to admit ignorance when one is actually ignorant, and I am willing to put up with a certain number of dumbass signalling games if it furthers this goal.

Hacker School has a set of "social rules [...] designed to curtail specific behavior we've found to be destructive to a supportive, productive, and fun learning environment." One of them is "no feigning surprise":

The first rule means you shouldn't act surprised when people say they don't know something. This applies to both technical things ("What?! I can't believe you don't know what the stack is!") and non-technical things ("You don't know who RMS is?!"). Feigning surprise has absolutely no social or educational benefit: When people feign surprise, it's usually to make them feel better about themselves and others feel worse. And even when that's not the intention, it's almost always the effect. As you've probably already guessed, this rule is tightly coupled to our belief in the importance of people feeling comfortable saying "I don't know" and "I don't understand."

I think this is a good rule and when I find out someone doesn't know something that I think they "should" already know, I instead try to react as in xkcd 1053 (or by chalking it up to a momentary maladaptive brain activity change on their part, o...

I don't think that sort of surprise is necessarily feigned. However, I do think it's usually better if that surprise isn't mentioned.

I dunno. I feel like "I don't understand how anyone could believe X" is a much, much better position to take on issues than "I know exactly why my opponents disagree with me! It is because they are stupid and evil!" The former at least opens the possibility that your opponents believe things for good reasons that you don't understand -- which is often true!

I am imagining the following exchange:

"I don't understand how anyone could believe X!"

"Great, the first step to understanding is noticing that you don't understand. Now, let me show you why X is true..."

I suspect that most people saying the first line would not take well to hearing the second.

I suspect the same, but still think "I can't understand why anyone would believe X" is probably better than "people who believe X or say they believe X only do so because they hate [children / freedom / poor people / rich people / black people / white people / this great country of ours / etc.]"

We could charitably translate "I don't understand how anyone could X" as "I notice that my model of people who X is so bad, that if I tried to explain it, I would probably generate a strawman".

Or add a fourth layer: I think that I will rise in status by publicly signalling to my Facebook friends: "I lack the ability or willingness to attempt even a basic understanding of the people who disagree with me."
People do lots of silly things to signal commitment; the silliness is part of the point. This is a reason initiation rituals are often humiliating, and why members of minor religions often wear distinctive clothing or hairstyles. (I think I got this from this podcast interview [http://www.econtalk.org/archives/2006/10/the_economics_o_7.html] with Larry Iannaccone.)

I think posts like the ones to which McArdle is referring, and the beliefs underlying them, are further examples of signaling attire [http://wiki.lesswrong.com/wiki/Belief_as_attire]. "I'm so committed, I'm even blind to whatever could be motivating the other side."

A related podcast [http://www.econtalk.org/archives/2013/06/kling_on_the_th.html] is with Arnold Kling on his e-book (which I enjoyed) The Three Languages of Politics. It's about (duh) politics--specifically, American politics--but it also contains an interesting and helpful discussion on seeing things from others' point of view, and explicitly points to commitment-signaling (and its relation to beliefs) as a reason people fail to see eye to eye.
Or, (4), "I keep asking, but they won't say"....
Does that happen?
It does to me. Have you tried getting sense out of an NRx or HBD.er?

Haven't tried it myself, but it seems to work for Scott Alexander

NRx are so bad at communicating their position in language anyone can understand that they refer to Scott's Anti-Reactionary FAQ to explain it. This is the guy who steelmanned Gene "Timecube" Ray. He has superpowers.
“Reactionary Philosophy In An Enormous, Planet-Sized Nutshell [http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/]” is where he explains what the NRx position is, and “The Anti-Reactionary FAQ [http://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/]” is where he explains why he disagrees with it. The former is what neoreactionaries have linked to in order to explain it.
Yes. That's why I'm somewhat surprised he seems to interpret “reptilian aliens [http://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/] ” literally.

There's no reason to use those nonstandard abbreviations. Neither of them is in Urban Dictionary.

NRx is probably neoreactionism but doesn't make it into the first 10 Google results. HBD.er in that spelling seems to be wrong as HBD'er is found when you Google it.

Yes, what they say frequently makes a lot more sense than the mainstream position on the issue in question.
I completely disagree. Their grasp of politics is largely based on meta-contrarianism, and has failed to "snap back" into basing one's views on a positive program whose goodness and rationality can be argued for with evidence.
Huh? HBD'ers are making observations about the world; they do not have a "positive program". As for NRx, they do have a positive program and do use evidence to argue for it; see the NRx thread [http://lesswrong.com/r/discussion/lw/kxb/nrx_vs_prog_assumptions_locating_the_sources_of/] and the various blogs linked there for some examples.
Makes sense to whom? They are capable of making converts, so they are capable of making sense to some people...people who 90% agree with them already. It's called dog whistle [http://en.wikipedia.org/wiki/Dog_whistle]. Not being hearable by some people is built in.
Bracket neoreaction for the time being. I get that you disagree with HBD positions, but do you literally have trouble comprehending their meaning?
Yes. One time someone was moaning about immigrants from countries that don't have a long history of democracy, and I genuinely thought he meant eastern Europeans. He didn't, because they are white Christians and he doesn't object to white Christians. So to understand whom he was objecting to, I had to apply a mental filter he has and I don't.
Hmmm... let's try filling something else in there. "I don't understand how anyone could support ISIS/Bosnian genocide/North Darfur." While I think a person is indeed more effective at life for being able to perform the cognitive contortions necessary to bend their way into the mindset of a murderous totalitarian (without actually believing what they're understanding), I don't consider normal people lacking for their failure to understand refined murderous evil of the particularly uncommon kind -- any more than I expect them to understand the appeal of furry fandom (which I feel a bit guilty for picking out as the canonical Ridiculously Uncommon Weird Thing).
You don't have to share a taste for, or approval of "...refined murderous evil of the particularly uncommon kind..." It can be explained as a reaction to events or conditions, and history is full of examples. HOWEVER. We have this language that we share, and it signifies. I understand that a rapist has mental instability and other mental health issues that cause him to act not in accordance with common perceptions of minimum human decency. But I can't say out loud, "I understand why some men rape women." It's an example of a truth that is too dangerous to say because emotions prevent others from hearing it.
You can (and did) say that, you just can't say it on Twitter with no context without causing people to yell at you. ETA: you like language? Gricean maxims. [http://www.sas.upenn.edu/~haroldfs/dravling/grice.html]
Now repeat the same statement, only instead of abortions and carbon taxes, substitute the words "believe in homeopathy". (Creationism also works.) People do say that--yet it doesn't mean any of the things the quote claims it means (at least not in a nontrivial sense).
Then what does it mean in those cases? Because the only ones I can think of are the three Megan described. If you mean "I can't imagine how anyone could be so stupid as to believe in homeopathy/creationism", which is my best guess for what you mean, that's a special case of the second meaning.
"I don't understand how someone could believe X" typically means that the speaker doesn't understand how someone could believe in X based on good reasoning. Understanding how stupidity led someone to believe X doesn't count. Normal conversation cannot be parsed literally. It is literally true that understanding how someone incorrectly believes X is a subclass of understanding how someone believes in X; but it's not what those words typically connote.
Most people who say "I don't understand how someone could believe X" would fail a reverse Turing test for that position. They often literally don't understand how someone comes to believe X.
I don't think that applies here. Your addition "based on good reasoning" is not a non-literal meaning, but a filling in of omitted detail. Gricean implicature is not non-literality, and the addition does not take the example outside McArdle's analysis. As always, confusion is a property of the confused person, not of the thing outside themselves that they are confused about. If a person says they cannot understand how anyone could etc., that is, indeed, literally true. That person cannot understand the phenomenon; that is their problem. Yet their intended implication, which McArdle is pointing out does not follow, is that all of the problem is in the other person. Even if the other person is in error, how can one engage with them from the position of "I cannot understand how etc."? The words are an act of disengagement, behind a smokescreen that McArdle blows away.
Sure it is. The qualifier changes the meaning of the statement. By definition, if the sentence lacks the qualifier but is to be interpreted as if it has one, it is to be interpreted differently than its literal words. Having to be interpreted as containing detail that is not explicitly written is a type of non-literalness.

No, it's not. I understand how someone can believe in creationism: they either misunderstand science (probably due to religious bias) or don't actually believe science works at all when it conflicts with religion. Saying "I don't understand how someone can believe in creationism" is literally false--I do understand how. What it means is "I don't understand how someone can correctly believe in creationism." I understand how someone can believe in creationism, but my understanding involves the believer making mistakes. The statement communicates that I don't know of a reason other than making mistakes, not that I don't know any reason at all.

Because "I don't understand how" is synonymous, in ordinary conversation, with "the other person appears to be in error." It does not mean that I literally don't understand, but rather that I understand it as an error, so it is irrelevant that literally not understanding is an act of disengagement.
Now I just thought of this, so maybe I'm wrong, but I don't think "I don't understand how someone can think X" is really meant as any sort of piece of reasonable logic, or a substitution for one. I suspect this is merely the sort of stuff people come up with when made to think about it. Rather, "I don't understand how..." is an appeal to the built in expectation [http://lesswrong.com/lw/kg/expecting_short_inferential_distances/] that things make obvious sense. If I want to claim that "what you're saying is nontribal and I have nothing to do with it", stating that you're not making sense to me works whether or not I can actually follow your reasoning. Since if you really were not making sense to me with minimum effort on my part, this would imply bad things about you and what you're saying. It's just a rejection that makes no sense if you think about it, but it's not meant to be thought about - it's really closer to "la la la I am not listening to you". Am I making sense?
This is close, but I don't think it captures everything. I used the examples of creationism and homeopathy because they are unusual examples where there isn't room for reasonable disagreement. Every person who believes in one of those does so because of bias, ignorance, or error. This disentangles the question of "what is meant by the statement" and "why would anyone want to say what is meant by the statement". You have correctly identified why, for most topics, someone would want to say such a thing. Normally, "there's no room for reasonable disagreement; you're just wrong" is indeed used as a tribal membership indicator. But the statement doesn't mean "what you're saying is nontribal", it's just that legitimate, nontribal, reasons to say "you are just wrong" are rare.
Well that's true for every false belief anyone has. So what's so special about those examples? You say "there isn't room for reasonable disagreement", which taken literally is just another way of phrasing "I don't understand how anyone could believe X". In any case, could you expand on what you mean by "not room for reasonable disagreement" since in context it appears to mean "all the tribes present agree with it".
You're being literal again. Every person who believes in one of those primarily does so because of major bias, ignorance, or error. You can't just distrust a single source you should have trusted, or make a single bad calculation, and end up believing in creationism or homeopathy. Your belief-finding process has to contain fundamental flaws for that.

And "it has three sides" is just another way of phrasing "it is a triangle", but I can still explain what a triangle is by describing it as something with three sides. If it wasn't synonymous, it wouldn't be an explanation. (Actually, it's not quite synonymous, for the same reason that the original statement wasn't correct: if you're taking it literally, "I don't understand how anyone could believe X" excludes cases where you understand that someone makes a mistake, and "there isn't room for reasonable disagreement" includes such cases.)

You can describe anything which is believed by some people and not others in terms of tribes believing it. But not all such descriptions are equally useful; if the tribes fall into categories, it is better to specify the categories.
You don't even need to do a bad calculation to believe in homeopathy. You just need to be in a social environment where everyone believes in homeopathy and not care enough about the issue to invest more effort into it. If you simply follow the rule: If I live in a Western country it makes sense to trust the official government health ministry when it publishes information about health issues, you might come away with believing in homeopathy if you happen to live in Switzerland. There are a lot of decent heuristics that can leave someone with that belief even if the belief is wrong.
If you're in a social environment where everyone believes in it, then you have more than just a single source.
Non-literality isn't a get-out-of-your-words-free card. There is a clear difference between saying "you appear to be in error" and "I can't understand how anyone could think that", and the difference is clearly expressed by the literal meanings of those words. And to explicate "I don't understand etc." with "Of course I do understand how you could think that, it's because you're ignorant or stupid" is not an improvement [http://lesswrong.com/lw/kwd/rationality_quotes_september_2014/b9wf] .
Non-literalness is a get-out-of-your-words-free card when the words are normally used in conversation, by English speakers in general, to mean something non-literal. Yes, if you just invented the non-literal meaning yourself, there are limits to how far from the literal meaning you can be and still expect to be understood, but these limits do not apply when the non-literal meaning is already established usage. The original quote gives the intended meaning as "I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to..." In other words, the original rationality quote explicitly excludes the possibility of "I understand you believe it because you're ignorant or stupid". It misinterprets the statement as literally claiming that you don't understand in any way whatsoever. The point is that the quote is a bad rationality quote because it makes a misinterpretation. Whether the statement that it misinterprets is itself a good thing to say is irrelevant to the question of whether it is being misinterpreted.
Established by whom? You are the one claiming that. These two expressions mean very different things. Notice that I am claiming that you are in error, but not saying, figuratively or literally, that I cannot understand how you could possibly think that. That is not how figurative language works. I could expand on that at length, but I don't think it's worth it at this point.
"A is synonymous with B" doesn't mean "every time someone said B, they also said A". "You've made more mistakes than a zebra has stripes" is also synonymous with "you're in error" and you clearly didn't say that, either. (Of course, "is synonymous with" means "makes the same assertion about the main topic", not "is identical in all ways".)
Indeed. "You've made more mistakes than a zebra has stripes" is therefore not synonymous with "you're in error". The former implies the latter, but the latter does not imply even the figurative sense of the former. If what someone is actually thinking when they say "you've made more mistakes than a zebra has stripes" is no more than "you're in error", then they have used the wrong words to express their thought.
The art of condescension is subtle and nuanced. "I'm always fascinated by..." can be sincere or not--when it is not, it is a variation on "It never ceases to amaze me how..." If you were across the table from me, Alejandro, I could tell by your eyes. Most FB posts, tweets, blog posts and comments on magazine and newspaper articles are as bad as or worse than what is described here. Rants masquerading as comments. That's why I like this venue here at LessWrong. Commenters actually trying to get more clarity, trying to make sure they understand, trying to make it clear with sincerely constructive criticism that they believe a better argument could be stated. If only it could be spread around the web-o-sphere. Virally.

Always go to other people's funerals; otherwise they won't go to yours.

Yogi Berra, on Timeless Decision Theory.

If only I cared about who goes to my funeral.

Your younger nerd takes offense quickly when someone near him begins to utter declarative sentences, because he reads into it an assertion that he, the nerd, does not already know the information being imparted. But your older nerd has more self-confidence, and besides, understands that frequently people need to think out loud. And highly advanced nerds will furthermore understand that uttering declarative sentences whose contents are already known to all present is part of the social process of making conversation and therefore should not be construed as aggression under any circumstances.

-- Cryptonomicon by Neal Stephenson

Neal Stephenson is good as a sci-fi writer, but I think he's almost as good as an ethnographer of nerds. Pretty much everything he writes has something like this in it, and most of it is spot-on. On the other hand, he does occasionally succumb to a sort of mild geek-supremacist streak (best observed in Anathem, unless you're one of the six people besides me who were obsessed enough to read In The Beginning... Was The Command Line).

Of course I read In the Beginning was the Command Line. The supply of writing from witty bearded men talking to you about cool things isn't infinite, you know.

It's a well-known essay. It even has a Wikipedia article [http://en.wikipedia.org/wiki/In_the_Beginning..._Was_the_Command_Line]. I just re-read, well, re-skimmed it. Ah, the nostalgia. It's very dated now. 15 years on, its prediction that proprietary operating systems would lose out to free software has completely failed to come true. Linux still ticks over, great for running servers and signalling hacker cred, but if it's so great, why isn't everyone using it? At most it's one of three major platforms: Windows, OSX, and Linux. Or two out of five if you add iOS and Android (which is based on Linux). OS domination by Linux is no closer, and although there's a billion people [http://nypost.com/2014/06/26/google-shows-off-android-auto-smartwatches/] using Android devices, command lines are not part of their experience. Stephenson wrote his essay (and I read it) before Apple switched to Unix in the form of OSX, but you can't really say that OSX is Unix plus a GUI, rather OSX is an operating system that includes a Unix interface. In other words, exactly what Stephenson asked for: BeOS [http://en.wikipedia.org/wiki/BeOS] failed, and OSX appeared three years after Stephenson's essay. I wonder what he thinks of them now—both OSX and In the Beginning.
That is a debatable point :-) UNIX can be defined in many ways -- historically (what did the codebase evolve from), philosophically [http://en.wikipedia.org/wiki/Unix_philosophy], technically (monolithic kernel, etc.), practically (availability and free access to the usual toolchains), etc. I don't like OSX and Apple in general because I really don't like walled gardens and Apple operates on the "my way or the highway" principle. I generally run Windows for Office, Photoshop, games, etc. and Linux, nowadays usually Ubuntu, for heavy lifting. I am also a big fan of VMs, which make a lot of things very convenient and, in particular, free you from having to make the big choice of the OS.
FYI: The 'you can't run this untrusted code' dialog is easy to get around.
I suspect I would be able to bludgeon OSX into submission but I don't see any reasons why I should bother. I don't have to work with Macs and am content not to.
Can't speak for Lumifer, but I was more annoyed by the fact that (the version I got of) OSX doesn't ship with a working developer toolchain, and that getting one requires either jumping through Apple's hoops and signing up for a paid developer account, or doing a lot of sketchy stuff to the guts of the OS. This on a POSIX-compliant system! Cygwin is less of a pain, and it's purely a bolt-on framework. (ETA: This is probably an exaggeration or an unusual problem; see below.) It was particularly frustrating in my case because of versioning issues, but those wouldn't have applied to most people. Or to me if I'd been prompt, which I hadn't.
You do not need to pay to get the developer tools. I have never paid for a compiler*, and I develop frequently. *(other than LabView, which I didn't personally pay for but my labs did, and is definitely not part of XCode)
After some Googling, it seems that version problems may have been more central than I'd recalled. Xcode is free and includes command-line tools, but looking at it brings up vague memories of incompatibility with my OS at the time. The Apple developer website allows direct download of those tools but also requires a paid signup. And apparently trying to invoke gcc or the like from the command line should have brought up an update option, but that definitely didn't happen. Perhaps it wasn't an option in an OS build as old as mine, although it wouldn't have been older than 2009 or 2010. (I eventually just threw up my hands and installed an Ubuntu virt through Parallels.) So, probably less severe than I'd thought, but the basic problem remains: violating Apple's assumptions is a bit like being a gazelle wending your way back to a familiar watering hole only to get splattered by a Hummer howling down the six-lane highway that's since been built in front of it.
You can get it through the app store, which means you need an account with Apple, but you do not need to pay to get this account. It really is free. I would note that violating any operating system's assumptions makes bad things happen.
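For anyone hitting this today: on newer builds you don't even need the App Store route — a minimal sketch, assuming an OS X 10.9 (Mavericks) or later system where Apple ships a standalone command-line-tools installer; on older builds the tools only came bundled with Xcode itself:

```shell
# Trigger Apple's GUI installer for just the command-line developer
# tools (compiler, make, headers) -- no full Xcode, no paid account.
xcode-select --install

# Print the active developer directory to confirm the install took.
xcode-select -p

# Sanity-check that the toolchain actually responds.
cc --version
make --version
```

If `xcode-select --install` isn't available on your OS version, the fallback is still installing full Xcode or, as above, giving up and running a Linux VM.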
Yeah, I bought a hard copy in a non-technical bookstore. "Six people" was a joke based on its, er, specialized audience compared to the lines of Snow Crash; in terms of absolute numbers it's probably less obscure than, say, Zodiac. If memory serves, Stephenson came out in favor of OSX a couple years after its release, comparing it to BeOS in the context of his essay. I can't find the cite now, though. Speaking for myself, I find OSX's ability to transition more-or-less seamlessly between GUI and command-line modes appealing, but its walled developer garden unspeakably annoying.
With some googling, I found this [http://garote.bdmonkeys.net/commandline/index.html], a version of ITBWTCL annotated (by someone else) five years later, including a quote from Stephenson, saying that the essay "is now badly obsolete and probably needs a thorough revision". The quote is quoted in many places, but the only link I turned up for it on his own website was dead [http://www.nealstephenson.com/author_juvenilia.htm] (not on the Wayback Machine either).
I think everyone who belongs to a certain age group and runs Linux has read In the Beginning was the Command Line. And yes, that's me admitting to having read it, and kinda believed the arguments at one point.
You say that like it's a bad thing.

A raise is only a raise for thirty days; after that, it’s just your salary.

-- David Russo

I don't understand what he wanted to say by this. Could somebody explain?

Instead of giving your employees $100 raise, give them $1200 bonus once in a year. It's the same money, but it will make them more happy, because they will keep noticing it for years.
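The arithmetic, for concreteness (a sketch; it assumes the "$100 raise" in the comment above means $100 added to each monthly paycheck, which is what makes the two amounts equal):

```python
# Compare a $100-per-month raise with a single $1200 end-of-year bonus.
# Assumption: "raise" means $100 added to every monthly paycheck.
monthly_raise = 100
annual_bonus = 1200

total_from_raise = 12 * monthly_raise  # twelve small, soon-forgotten bumps
total_from_bonus = annual_bonus        # one salient, noticeable event

print(total_from_raise == total_from_bonus)  # → True: same money either way
```

Same total either way; the quote's point is that only the lump sum keeps registering as a gain instead of fading into the anchor.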

It'll also be easier to reduce a bonus (because of poor performance on the part of the employee or company) than it will be to reduce a salary.

I say give them smaller raises more frequently. After the first annual bonus, it becomes expected.
Intermittent reward for the win.

It speaks to anchoring: evaluating incentives relative to an expected level.

Basically, receiving a raise is seen as a good thing because you are getting more money than a month ago (anchor). But after a while you will be getting the same amount of money as a month ago (the anchor has moved) so there is no cause for joy.

While you are getting a raise you might be more motivated to work. However, after a while your new salary becomes just your salary, and you would need a new raise to get additional motivation.

How to compose a successful critical commentary:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

  3. You should mention anything you have learned from your target.

  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

D.C. Dennett, Intuition Pumps and Other Tools for Thinking. Dennett himself is summarising Anatol Rapoport.

I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to use their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tiptoeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.

The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that your opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed it might be the difficulty of filling the gap that made you join the other side of the de...

With regards to your example, you try to fix the gap between "consumption will increase" and "that will be a bad thing as a whole" by claiming little good use and much bad use. But I don't think that's the strongest way to bridge that gap. Rather, I'd suggest that the good use has negligible positive utility - just another way to relax on a Friday night, when there are already plenty of ways to relax on a Friday night, so how much utility does adding another one really give you? - while bad use has significant negative utility (here I may take the chance to sketch the verbal image of a bright young doctor dropping out of university due to bad use). Then I can claim that even if good-use increases by a few orders of magnitude more than bad-use, the net result is nonetheless negative, because bad use is just that terrible; that the negative effects of a single bad-user outweigh the positive effects of a thousand good-users.

As to your main point - what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting about (or someone else who shares his views on the subject), and simply ask. Or to go and look through his writings, and see whether or not he addresses precisely that point. Or to go to a friend (preferably also an intelligent debater) and ask for his best effort to fill the gap, in the hope that it will be a better effort.
Entirely within the example, not pertaining to rationality per se, and I'm not sure you even hold the position you were arguing about: 1) good use is not restricted to relaxing on a Friday. It also includes effective pain relief with minimal and sometimes helpful side-effects. Medical marijuana use may be used as a cover for recreational use but it is also very real in itself. 2) a young doctor dropping out of university is comparable and perhaps lesser disutility to getting sent to prison. You'd have to get a lot of doctors dropping out to make legalization worse than the way things stand now.
My actual position on the medical marijuana issue is best summarised as "I don't know enough to have developed a firm opinion either way". This also means that I don't really know enough to properly debate on the issue, unfortunately. Though, looking it up, I see there's a bill currently going through parliament in my part of the world that - if it passes - would legalise it for medicinal use.
Have you read “Marijuana: Much More Than You Wanted To Know [http://slatestarcodex.com/2014/01/05/marijuana-much-more-than-you-wanted-to-know/] ” on Slate Star Codex?
No, I have not.
So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.
"Pardon me, sir, but I don't quite understand how you went from Step A to Step C. Do you think you could possibly explain it in a little more detail?" Accompanied, of course, by a very polite "Thank you" if they make the attempt to do so. Unless someone is going to vehemently lash out at any attempt to politely discuss his position, he's likely to either at least make an attempt (whether by providing a new explanation or directing you to the location of a pre-written one), or to plead lack of time (in which case you're no worse off than before). Most of the time, he'll have some sort of explanation, that he considered inappropriate to include in the original statement (either because it is "obvious", or because the explanation is rather long and distracting and is beyond the scope of the original essay). Mind you, his explanation might be even more thin and unconvincing than the best you could come up with...
I think the idea was, 'when you've gotten to this point, that's when your pre-discussion period is over, and it is time to begin asking questions'. And yes, it is often a good idea to ask questions before taking a position!
Quote: "The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that you opponents don't make." Just no. An argument is an argument. It is complete or not. If there is a gap in the argument, in most cases there are two eventualities: (a) the leap is a true one assuming what others would find obvious, or (b) either an honest error in the argument or an attempt to cover up a flaw in the argument. If there is a way to "fill in" the argument that is the only way it could be filled in, you are justified in doing so, while pointing out that you are doing so. If either of the (b) cases hold, however, you must still point them out, in order to maintain your own credibility. Especially if you are refuting an argument, the gap should be addressed and not glossed over. You might treat the (b) situations differently, perhaps politely pointing out that the original author made an error there, or perhaps not-so-politely pointing out that something is amiss. But you still address the issue. If you do not, the onus is now on you, because you have then "adopted" that incomplete or erroneous argument. For example: your own example argument has a rather huge and glaring hole in it: "bad-use will increase a lot and good-use will increase a little". However, history and modern examples both show this to be false: in the real world, decriminalization has increased bad-use only slightly if at all, and good-use more. (See the paper "The Portugal Experiment" for one good example.) Was there any problem there with my treatment of this rather gaping "gap" in your argument?

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in...

An interesting quote. It essentially puts forward the "reasonable person" legal theory [http://en.wikipedia.org/wiki/Reasonable_person]. But that's not what's interesting about it. The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the wreck of the ship did arrive unscathed to its destination, would the shipowner still be guilty?
I realize your questions may be rhetorical, but I'm going to attempt an answer anyway, because it illustrates a point: The morality of the shipowner's actions does not depend on the realized outcomes: it can only depend on his prior beliefs about the probability of the outcomes, and on the utility function that he uses to evaluate them. If we insisted on making morality conditional on the future, causality would be broken: it would be impossible for any ethical agent to make use of such ethics as a decision theory. The problem here is that the shipowner's "sincerely held beliefs" are not identical to his genuine extrapolated prior. It is not stated in the text, but I think he is able to convince himself of "the soundness of the ship" only by ignoring degrees of belief: if he were a proper Bayesian, he would have realized that having "doubts" and not updating your beliefs is not logically consistent. In any decision theory that is usable by agents making decisions in real time, the morality of his action is determined either at the time he allowed the ship to sail, or at the time he allowed his prior to get corrupted. I personally believe the latter. This quotation illustrates why I see rationality as a moral obligation, even when it feels like a memetic plague.
I am not sure -- I see your point, but completely ignoring the actual outcome seems iffy to me. There are, of course, many different ways of judging morality and, empirically, a lot of them do care about realized outcomes. I don't know what a "genuine extrapolated prior" is. Well, behaving according to the "reasonable person" standard is a legal obligation :-)
That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we revert to (a) what would a reasonable person believe, (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states. In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is I think it correct but not entirely actionable. That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence. A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch. I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here [http://educ.jmu.edu/~omearawm/ph101willtobelieve.html]; I encourage you to do so as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.
That's not self-evident to me. First, in this particular case as you yourself note, "Clifford says the shipowner is sincere in his belief". Second, in general, what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions? That would be a posterior, not a prior.
I think Clifford was wrong to say the shipowner was sincere in his belief. In the situation he describes, the belief is insincere - indeed such situations define what I think "insincere belief" ought to mean. Good question. Ought implies can, so in extreme cases I'd consider that to diminish their culpability. For less extreme cases - heh, I had never thought about it before, but I think the "reasonable man" standard is implicitly IQ-normalized. :) Sure.
This is called fighting the hypothetical [http://lesswrong.com/lw/bwp/please_dont_fight_the_hypothetical/]. While that may be so, the Clifford approach relying on the subpoenaed mental states relies on mental states and not on any external standard (including the one called "resonable person").
I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it. Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime. These are legal concepts, but I think they can reasonably be applied to ethics as well. Intentions and consequences both matter. If the emigrants do not die, he is not guilty of their deaths. He is still morally at fault for sending to sea a ship he knew was unseaworthy. His inaction in reckless disregard for their lives can quite reasonably be judged a crime.
That is just not true. The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things. You are mistaken. Suppressing your own doubts is not actus reus -- you need an action in physical reality. And, legally, there is a LOT of difference between an act and an omission, failing to act.
The author said: and more, which you have already read. This is clear enough to me. In this case, an inaction. In general there is, but not when the person has a duty to perform an action, knows it is required, knows the consequences of not doing it, and does not. That is the situation presented.
This is not the whole story. In the quote you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world. In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him. And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty-decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
The next passage confirms that this is the author's interpretation as well: And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents. The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.
Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of that actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection. That's a pretty good example of the Fallacy of Gray [http://lesswrong.com/lw/mm/the_fallacy_of_gray/] right there.
How do you know? Especially since falsely holding that belief would be an example.
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
It's not quite clear to me that the judgments being made here are solely about the owner's thought processes, though I agree that facts about behavior and thought processes are intermingled in this narrative in such a way as to make it unclear what conclusions are based on which facts. Still... the owner had doubts suggested about the ship's seaworthiness, we're told, and this presumably is a fact about events in the world. The generally agreed-upon credibility of the sources of those suggestions is presumably also something that could be investigated without access to the owner's thoughts. Further, we can confirm that the owner didn't overhaul the ship, for example, nor retain the services of trained inspectors to determine the ship's seaworthiness (or, at least, we have no evidence that he did so, in situations where evidence would be expected if he had). All of those are facts about behavior. Are those behaviors sufficient to hold the owner liable for the death of the sailors? Perhaps not; perhaps without the benefit of narrative omniscience we'd give the owner the benefit of the doubt. But... so what? In this case, we are being given additional data. In this case we know the owner's thought process, through the miracle of narrative. You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous... perhaps because we might then be tempted to make judgments in real-world cases as if we knew the owner's thoughts, which we don't. And, well, I agree that to make judgments in real-world cases as if we knew someone's thoughts is problematic... though sometimes not doing so is also problematic. Anyway, to answer your question: given the data provided above I consider the shipowner negligent, regardless of whether the ship arrived safely at its destination, or whether it was destroyed by some force no ship could survive. Do you disagree?
In the absence of applicable regulations I think a veil of ignorance of sorts can help here. Would the shipowner make the same decision were he or his family one of the emigrants? What if it was some precious irreplaceable cargo on it? What if it was regular cargo but not fully insured? If the decision without the veil is significantly different from the one with, then one can consider him "verily guilty", without worrying about his thoughts overmuch.
Well, yes, I agree, but I'm not sure how that helps. We're now replacing facts about his thoughts (which the story provides us) with speculations about what he might have done in various possible worlds (which seem reasonably easy to infer, either from what we're told about his thoughts, or from our experience with human nature, but are hardly directly observable). How does this improve matters?
I don't think they are pure speculations. This is not the shipowner's first launch, so the speculations over possible worlds can be approximated by observations over past decisions.
(nods) As I say, reasonably easy to infer. But I guess I'm still in the same place: this narrative is telling us the shipowner's thoughts. I'm judging the shipowner accordingly. That being said, if we insist on instead judging a similar case where we lack that knowledge... yeah, I dunno. What conclusion would you arrive at from a Rawlsian analysis and does it differ from a common-sense imputation of motive? I mean, in general, "someone credibly suggested the ship might be unseaworthy and Sam took no steps to investigate that possibility" sounds like negligence to me even in the absence of Rawlsian analysis.
No, I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality. Keep in mind that this parable was written specifically to make you come to this conclusion :-) But yes, I disagree. I consider the data above to be insufficient to come to any conclusions about negligence.
Mental processes inside someone's mind actually happen in physical reality. Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that that careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
So, I disagreed with this claim the first time you made it, since the grounds cited combine both facts about the shipowner's thoughts and facts about physical reality (which I listed). You evidently find that objection so uncompelling as to not even be worth addressing, but I don't understand why. If you choose to unpack your reasons, I'd be interested. But, again: even if it's true, so what? If we have access to the mental processes inside someone's mind, as we do in this example, why shouldn't we use that data in determining guilt?
I read the story as asserting three facts about the physical reality: the ship was old, the ship was not overhauled, the ship sank in the middle of the ocean. I don't think these facts lead to the conclusion of negligence. But we don't. We're talking about the world in which we live. I would presume that the morality in the world of telepaths would be quite different. Don't do this [http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/].
When judging this story, we do. We know what was going on in this shipowner's mind, because the story tells us. I'm not generalizing. I'm making a claim about my judgment of this specific case, based on the facts we're given about it, which include facts about the shipowner's thoughts. What's wrong with that? As I said initially... I can see arguing that if we allow ourselves to judge this (fictional) situation based on the facts presented, we might then be tempted to judge other (importantly different) situations as if we knew analogous facts, when we don't. And I agree that doing so would be silly. But to ignore the data we're given in this case because in a similar real-world situation we wouldn't have that data seems equally silly.

Alex Jordan, a grad student at Stanford, came up with the idea of asking people to make moral judgments while he secretly tripped their disgust alarms. He stood at a pedestrian intersection on the Stanford campus and asked passersby to fill out a short survey. It asked people to make judgments about four controversial issues, such as marriage between first cousins, or a film studio’s decision to release a documentary with a director who had tricked some people into being interviewed. Alex stood right next to a trash can he had emptied. Before he recruited each subject, he put a new plastic liner into the metal can. Before half of the people walked up (and before they could see him), he sprayed the fart spray twice into the bag, which “perfumed” the whole intersection for a few minutes. Before other recruitments, he left the empty bag unsprayed. Sure enough, people made harsher judgments when they were breathing in foul air

-- The Righteous Mind Ch 3, Jonathan Haidt

I wonder if anyone who needs to make important judgments a lot makes an actual effort to maintain affective hygiene. It seems like a really good idea, but poor signalling.

What goes unsaid eventually goes unthought.

Steve Sailer

Alternatively: Paul Graham [http://www.paulgraham.com/say.html]
Paul Graham's quote is about a way to fight the trend Sailer describes, unfortunately that trend frequently ends up winning.

Often, one of these CEOs will operate in a way inconsistent with Thorndike's major thesis and yet he'll end up praising the CEO anyway. In poker, we'd call this the "won, didn't it?" fallacy-- judging a process by the specific, short-term result accomplished rather than examining the long-term result of multiple iterations of the process over time.

This Amazon.com review.

A Verb Called Self
I am the playing, but not the pause.
I am the effect, but not the cause.
I am the living, but not the cells.
I am the ringing, but not the bells.
I am the animal, but not the meat.
I am the walking, but not the feet.
I am the pattern, but not the clothes.
I am the smelling, but not the rose.
I am the waves, but not the sea.
Whatever my substrate, my me is still me.
I am the sparks in the dark that exist as a dream -
I am the process, but not the machine.

~Jennifer Diane "Chatoyance" Reitz, Friendship Is Optimal: Caelum Est Conterrens

A couple of those (specifically lines 2, 5, and 11) should probably be "I'm" rather than "I am" to preserve the rhythm.
I disagree with you on 5; it works better as I am than I'm. EDIT: Also, 9 works better as "I'm"
Really? Huh. I'm counting from "I am the playing..." = 1, and I really can't read line 5 with "I am" so it scans - I keep stumbling over "animal".
I'm counting the same way. With stress in italics, sounds much better to me than I should probably note that I read most of the lines with an approximately syllable-sized pause before 'but', and the animal line without that pause. The poem feels to me like it's written mainly in dactylls with some trochees and a final stressed syllable on each line. Compare with While I'm at this, how I read lines 9-11 as written Which definitely break up the rhythm of the first half entirely, which is probably intentional, but particularly line 9 is awkward, which I didn't catch the first pass. If I was trying to keep that rhythm, I'd read it this way: And be unhappy that "What'ver" is no longer reasonable English, even for poetry.
Perhaps you want whate'er? It sounds a bit archaic, but not wrong.
I don't know much about historical stress patterns, but when I pronounce "whate'er", the stress moves to the second syllable (wut-air), which doesn't improve things.

He who knows only his own side of the case, knows little of that.

J.S. Mill

Right, her side of the story is pretty illuminating.

A conversation between me and my 7-year-old cousin:

Her: "do you believe in God?"

Me: "I don't, do you?"

Her: "I used to, but then I never really saw any proof, like miracles or good people getting saved from mean people and stuff. But I do believe in the Tooth Fairy, because every time I put a tooth under my pillow, I get money out in the morning."

Definitely getting her HPMOR for her 10th birthday :)
Interesting that she seems to mentally classify God and the tooth fairy in the same category.
Well, she's only 7.
I'm not sure what you mean. I personally have a mental category of "mythical beings that don't exist but some people believe exist", which includes God, the tooth fairy, Santa, unicorns, etc. This girl appears to have the same mental category, even though she believes in God but doesn't believe in the tooth fairy.

"I mean, my lord Salvara, that your own expectations have been used against you. You have a keen sense for men of business, surely. You've grown your family fortune several times over in your brief time handling it. Therefore, a man who wished to snare you in some scheme could do nothing wiser than to act the consummate man of business. To deliberately manifest all your expectations. To show you exactly what you expected and desired to see."

"It seems to me that if I accept your argument," the don said slowly, "then the self-evident truth of any legitimate thing could be taken as grounds for its falseness. I say Lukas Fehrwight is a merchant of Emberlain because he shows the signs of being so; you say those same signs are what prove him counterfeit. I need more sensible evidence than this."

-- Scott Lynch, "The Lies of Locke Lamora", page 150.

If I remember the book correctly, this part comes from a scene where Locke Lamora is attempting to pull a double con on the speaker, Don Salvara, by impersonating both the merchant and a spy/internal security agent investigating the merchant. So while the don's character acts "rationally" here, he is doing so while being deceived because of his assumptions, exhibiting the very same error again.

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.

On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and s

... (read more)
The rest of the article is also well worth the read.

It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet.

Andrew Gelman

I would like this quote more if instead of “has a positive utility for getting” it said “wants to get”.
The context is specifically a description of the theory of utility and how it is inconsistent with the preferences people actually exhibit.

When I visited Dieter Zeh and his group in Heidelberg in 1996, I was struck by how few accolades he’d gotten for his hugely important discovery of decoherence. Indeed, his curmudgeonly colleagues in the Heidelberg Physics Department had largely dismissed his work as too philosophical, even though their department was located on “Philosopher Street.” His group meetings had been moved to a church building, and I was astonished to learn that the only funding that he’d been able to get to write the first-ever book on decoherence came from the German Lutheran Church.

This really drove home to me that Hugh Everett was no exception: studying the foundations of physics isn’t a recipe for glamour and fame. It’s more like art: the best reason to do it is because you love it. Only a small minority of my physics colleagues choose to work on the really big questions, and when I meet them, I feel a real kinship. I imagine that a group of friends who’ve passed up on lucrative career options to become poets might feel a similar bond, knowing that they’re all in it not for the money but for the intellectual adventure.

-- Max Tegmark, Our Mathematical Universe, Chapter 8. The Level III Multiverse, "The Joys of Getting Scooped"

My transformation begins with me getting tired of my own bullshit.

Skeletor is Love

People who are often misunderstood: 6% geniuses; 94% garden-variety nonsense-spouters

-- David Malki !

I know that. People are so lame. Not me though. I am one of the genius ones.

People who often misunderstand others: 6% of geniuses, 94% of garden-variety nonsense-spouters.

A heuristic shouldn't be the "least wrong" among all possible rules; it should be the least harmful if wrong.

Nassim N. Taleb

Opportunity costs? I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else - or at least that's what I gathered from his recent crusade against GMOs.

I would say it should be the one with best expected returns.

True, but not as easy to follow as Taleb's advice. In the extreme we could replace every piece of advice with "maximize your utility".

Not quite, as most people are risk-averse and care about the width of the distribution of returns, not only about its mean.
If you measure "returns" in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
I am not sure about that. If you're risk-neutral in utility, you should be indifferent between two fair-coin bets: (1) heads 9 utils, tails 11 utils; (2) heads -90 utils, tails 110 utils. Are you?
Yes, I am, by definition, because the util rewards, being in utilons, must factor in everything I care about, including the potential regret. Unless your bets don't cash out as stated; if it means something else, then the precise wording could make the decision different.
It's not quite the potential regret that is the issue, it is the degree of uncertainty, aka risk. Do you happen to have any links to a coherent theory of utilons?
I'm pretty strongly cribbing off the end of So8res's MMEU rejection [http://lesswrong.com/lw/kuj/knightian_uncertainty_a_rejection_of_the_mmeu_rule/]. Part of what I got from that chunk is that precisely quantifying utilons may be noncomputable, and even if not is currently intractable, but that doesn't matter. We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons, but in principle that doesn't change the appropriate response if we were to be offered one. There is definitely higher potential for regret with the second bet, since losing a bunch when I could otherwise have gained a bunch would reduce my utility in that case; but for the statement 'you will receive -90 utilons' to be true, it would have to include the consideration of my regret. So I should not add additional compensation for the regret; it's factored into the problem statement. Which boils down to me being unintuitively indifferent, with even the slight uncomfortable feeling of being indifferent when intuition says I shouldn't be factored into the calculations.
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn't it? I am not convinced that utilons automagically include everything -- it seems to me they wouldn't be consistent between different bets in that case (and, of course, each person has his own personal utilons which are not directly comparable to anyone else's).
If utilons don't automagically include everything, I don't think they're a useful concept. The concept of a quantified reward which includes everything is useful because it removes room for debate; a quantified reward that included mostly everything doesn't have that property, and doesn't seem any more useful than denominating things in $. Maybe, but the point is to remove object-level concerns about the precise degree of merits of the rewards and put it in a situation where you are arguing purely about the abstract issue. It is a convenient way to say 'All things being equal, and ignoring all outside factors', encapsulated as a fictional substance.
Utilons are the output of the utility function. Will you, then, say that a utility function which doesn't include everything is not a useful concept? And I'm still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant? It all feels very hand-wavy. Which, of course, often has the advantage of clarity and the disadvantage of irrelevance...
The same properties as of utility functions, I would assume. Which is to say, you can compare them, and take a weighted average over any probability measure, and also take a positive global affine transformation (ax+b where a>0). Generally speaking, any operation that's covariant under a positive affine transformation should be permitted.
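That invariance claim can be sanity-checked numerically. A minimal sketch (the helper names `affine` and `prefers` are mine, not from the comment): rank order and probability-weighted mixtures of utilities are preserved under any transformation u → a·u + b with a > 0.

```python
import random

def affine(u, a, b):
    """The positive affine transformation u -> a*u + b (requires a > 0)."""
    return a * u + b

def prefers(u_x, u_y):
    """True iff the option with utility u_x is ranked above the one with u_y."""
    return u_x > u_y

random.seed(0)
for _ in range(1000):
    u_x, u_y = random.uniform(-100, 100), random.uniform(-100, 100)
    a, b = random.uniform(0.1, 10.0), random.uniform(-100, 100)
    # Comparison is covariant: the ranking never flips under the transformation...
    assert prefers(u_x, u_y) == prefers(affine(u_x, a, b), affine(u_y, a, b))
    # ...and taking a weighted average (a lottery) commutes with the
    # transformation, up to floating-point error.
    p = random.random()
    mixed = p * u_x + (1 - p) * u_y
    assert abs(affine(mixed, a, b)
               - (p * affine(u_x, a, b) + (1 - p) * affine(u_y, a, b))) < 1e-9
```

By contrast, a nonlinear monotone transformation (say, cubing) preserves the rankings but not the lottery comparisons, which is why "covariant under a positive affine transformation" is the operative restriction.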
Yes, I think I agree. However, this is another implausible counterfactual, because the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value each world. And yes, it's very hand-wavy, because understanding what any individual human values is not meaningfully simpler than understanding human values overall, which is one of the Big Hard Problems. When we understand the latter, the former can become less hand-wavy. It's no more abstract than Bayes' Theorem; both are in principle easy to use and incredibly useful, and in practice require implausibly thorough information about the world, or else heavy approximation. The utility function is generally considered to map to the real numbers, so utilons are real-valued and all appropriate transformations and operations are defined on them.
Some utility functions value world-states. But it's also quite common to call a "utility function" something that shows/tells/calculates how much you value something specific. I am not sure of that. Utility functions often map to ranks, for example.
I'm not familiar with that usage. Could you point me to a case in which the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical "abuse of notation", where it actually referred to "the utility of the world in which exists over the otherwise-identical world in which did not exist", but where the subtleties are not relevant to the example at hand and are taken as understood. Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I've always seen it be real or rational numbers. (Intuitively world-states should be finite, like the universe, and therefore map to the rationals rather than reals, but this isn't important.)
Um, Wikipedia [http://en.wikipedia.org/wiki/Utility_function]?
That's an example of the rank ordering, but not of the first thing I asked for.
The entire concept of utility in Wikipedia is the utility of specific goods, not of world-states.
Hmmm... bet 1, expected utils gained = 10. Bet 2, expected utils gained = 10. I am not risk-neutral, and so I prefer bet 1; I don't like the high odds of losing utils in bet 2.
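The arithmetic behind that preference can be checked directly. A minimal sketch (the payoffs come from the fair-coin bets upthread; the log utility function and the starting wealth of 100 are my illustrative assumptions, not anything stated in the thread): both bets have the same mean but very different spreads, and a concave utility function, i.e. decreasing marginal utility, prefers the narrow one.

```python
import math

# Fair-coin bets from the discussion above: (heads, tails) payoffs.
bet_1 = (9, 11)
bet_2 = (-90, 110)

def expected_value(payoffs, p_heads=0.5):
    """Probability-weighted mean of the payoffs."""
    heads, tails = payoffs
    return p_heads * heads + (1 - p_heads) * tails

def spread(payoffs, p_heads=0.5):
    """Standard deviation of the payoff distribution."""
    mean = expected_value(payoffs, p_heads)
    heads, tails = payoffs
    return (p_heads * (heads - mean) ** 2
            + (1 - p_heads) * (tails - mean) ** 2) ** 0.5

print(expected_value(bet_1), expected_value(bet_2))  # 10.0 10.0 -- identical means
print(spread(bet_1), spread(bet_2))                  # 1.0 100.0 -- very different risk

# If the payoffs are in dollars rather than utilons, decreasing marginal
# utility (here: log of wealth, from an assumed starting wealth of 100)
# makes the narrow bet strictly preferred despite the equal means.
def expected_log_utility(payoffs, wealth=100, p_heads=0.5):
    heads, tails = payoffs
    return (p_heads * math.log(wealth + heads)
            + (1 - p_heads) * math.log(wealth + tails))

assert expected_log_utility(bet_1) > expected_log_utility(bet_2)
```

If the payoffs really are in utilons, the equal means end the matter by definition; the spread only matters when the numbers denominate something (dollars, lives) over which utility is concave.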
His point is that the upside is bounded much more than the downside.

Yes, but my point is that this is also true for, say, leaving the house to have fun.

This is not always true (as Taleb himself points out in The Black Swan): in investing, the worst that can happen is you lose all of your principal; the best that can happen is unbounded.
What? He's crusading against GMOs? Can you give me some references? I like his writing a lot, but I remember noting the snide way he dismissed doctors who "couldn't imagine" that there could be medicinal benefit to mother's milk, as if they were arrogant fools.
My sources were his tweets. Sorry I can't give anything concrete right now, but "Taleb GMO" apparently gets a lot of hits on Google. I didn't really dive into it, but as I understood it he takes the precautionary principle (the burden of proof of safety is on GMOs, not of danger on opponents) and adds that nobody can ever really know the risks, so the burden of proof hasn't been and can't be met. "They're arrogant fools" seems to be Taleb's charming way of saying "they don't agree with me". I like him too. I loved The Black Swan and Fooled by Randomness back when I read them. But I realized I didn't quite grok his epistemology a while back, when I found him debating religion with Dennett, Harris and Hitchens. Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of "science can't know everything". (www.youtube.com/watch?v=-hnqo4_X7PE) I've been meaning to ask Less Wrong about Taleb for a while, because this just seems kookish to me, but it's entirely possible that I just don't get something.
I feel like it should be pointed out that being kookish and being a source of valuable insight are not incompatible.
"Can't know" misses the point; "doesn't know" is much closer to what Taleb speaks about. Robin Hanson lately wrote a post against being a rationalist [http://www.overcomingbias.com/2014/06/you-cant-handle-the-truth.html]. The core of Nassim's argument is to focus your skepticism where it matters. The cost of mistakenly being a Christian is low. The cost of mistakenly believing that your retirement portfolio is secure is high. According to Taleb, people like the New Atheists should spend more of their time on those beliefs that actually matter. It's also worth noting that the new atheists aren't skeptics in the sense that they believe it's hard to know things. Their books are full of statements of certainty [http://www.tricycle.com/blog/language-certainty-new-atheism]. Taleb on the other hand is a skeptic in that sense. For him religion also isn't primarily about believing in God but about following certain rituals. He doesn't believe in cutting Chelstrons fence with Ockham's razor.

The cost of mistakenly being a Christian is low

That's not self-evident to me at all.

It's not self-evident, but the new atheists don't make a good argument that it has a high cost. Atheist scientists in good standing like Roy Baumeister say that being religious helps with willpower. Being a Mormon correlates with certain characteristics, and therefore Mormons sometimes recognize other Mormons; scientific investigation found that they use markers of being healthy for doing so, and those markers can't be used for identifying Mormons. There's some data that being religious correlates with longevity. Of course those things aren't strong evidence that being religious is beneficial, but that's where Chesterton's fence comes into play for Taleb. He was born Christian so he stays Christian. While my given name is Christian, I wasn't raised a Christian or believed in God at any point in my life, and the evidence doesn't get me to start being a Christian, but I do understand Taleb's position. Taleb doesn't argue that atheists should become Christians either.
(If there is something called "Chelston's Fence" (which my searches did not turn up), apologies.) Chesterton's Fence isn't about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can't see any, and finding out those reasons before countering their actions. In Christianity's case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity's incompetence at understanding the universe) that Chesterton's Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
If Christianity lowered the willpower of its members, it would be at a disadvantage in memetic competition against other worldviews that increase willpower. Predicting complex systems like memetic competition over the span of centuries between different memes is very hard. In cognitive psychology, experiments frequently invalidate basic intuitions about the human mind. Trust bootstrapping is certainly one of the functions of religion, but it's not clear that's bad. Bootstrapping trust is generally a hard problem. Trust makes people cooperate. If I remember right, Taleb makes the point somewhere that the word "believe" derives from a word that means trust. As far as "antiquity's incompetence at understanding the universe" goes, understanding the universe is very important to people like the New Atheists, but for Taleb it's not the main thing religion is about. For him it's about practically following a bunch of rituals such as being at church every Sunday.
I often see this argument from religions themselves or similar sources, not from those opposed to religion. Not this specific argument, but this type of argument--the idea of using the etymology of a word to prove something about the concept represented by the word. As we know or should know, a word's etymology may not necessarily have much of a connection to what it means or how it is used today. ("malaria" means "bad air" because of the belief that it was caused by that. "terrific" means something that terrifies.) Also consider that by conservation of expected evidence [http://lesswrong.com/lw/ii/conservation_of_expected_evidence/] if the etymology of the word is evidence for your point, if that etymology were to turn out to be false, that would be evidence against your point. Would you consider it to be evidence against your point if somehow that etymology were to be shown false?
In this case the debate is about how people in the past thought about religion. Looking at etymology helps for that purpose. But that's not the most important part of my argument. It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That's pointless if they want to convince someone who believes in religion1. If they don't want to argue against a strawman they actually have to switch to talking about religion1. In general when someone says "We should do A", that person has freedom to define what he means by A. It's not a matter of searching for Bayesian evidence. It's a matter of defining a concept. If you want to define A, saying "A is a bit like B in regard X and like C in regard Y" is quite useful. Looking at etymology can help with that quest. Overestimating the ability to understand what the other person means is a common failure mode. If you aren't clear about concepts, then looking at evidence to validate concepts isn't productive.
But you could say that the new atheists do want to argue against what Taleb might call a strawman, because what they're trying to do really is to argue against religion2. They're speaking to the public at large, to the audience. Does the audience also not care about the factual claims of religion? If that distinction about the word "religion" is being made, I don't see why Taleb isn't the one being accused of trying to redefine it mid-discussion.
If you look at the priorities most people show through their actions, truth isn't at the top of the list. Most people lie quite frequently and optimize for other ends. Just take any political discussion and see how many people are happy to be correctly informed that their tribal beliefs are wrong. That probably even goes for this discussion, and you have a lot of motivated cognition going on that makes you want to believe that people really care about truth. When speaking on the subject of religion, Taleb generally simply speaks about his own motivation for believing what he believes. He doesn't argue that other people should start believing in religion. Taleb might chide people for not being skeptical where it matters, but generally not for being atheists. Nearly any religious person will grant you that some religions are bad. As long as the new atheists argue against a religion that isn't really his religion, he has no reason to change. I would also add that it's quite okay when different people hold different beliefs.
I agree with the apparent LW consensus that much of religion is attire, habit, community/socializing, or "belief in belief", if that's what you mean. But then again, people actually do care about the big things, like whether God exists, and also about what is or isn't morally required of them. I bet they will also take Taleb's defense as an endorsement of God's existence and the other factual claims of Christianity. I don't recall him saying that he's only a cultural Christian and doesn't care whether any of it is actually true. Well, I won't force anyone to change, but there's good and bad epistemology. Also, the kind of Chesterton's fences that the new atheists are most interested in bringing down aren't just sitting there, but are actively harmful (and they may be there as a result of people practicing what you called religion1, but their removal is opposed with appeals to religion2).
You take a certain epistemology for granted that Taleb doesn't share. Taleb follows heuristics of not wanting to be wrong on issues where being wrong is costly, and putting less energy into updating beliefs on issues where being wrong is not costly. He doesn't care about whether Christianity is true in the sense of caring about analysing evidence about whether Christianity is true. He might care in the sense that he has an emotional attachment to it being true. If I lend you a book, I care about whether you give it back to me because I trust you to give it back. That's a different kind of caring than I have about pure matters of fact. One of Taleb's examples is how in the 19th century someone who went to a doctor who would treat him based on intellectual reasoning would probably have done worse than someone who went to a priest. Taleb is skeptical that you get very far with intellectual reasoning and thinks that only empiricism has made medicine better than doing nothing. We might have made some progress, but Taleb still thinks that there are choices where the Christian ritual will be useful even if the Christian ritual is built on bad assumptions, because following the ritual keeps people from acting based on hubris. It keeps people from thinking they understand enough to act based on understanding. That's also the issue with the new atheists. They are too confident in their own knowledge and not skeptical enough. That lack of skepticism is in turn dangerous because they believe that just because no study showed gene-manipulated plants to be harmful, they are safe.
(thank you for helping me try to understand him on this point, by the way) This seems coherent. But, to be honest, weak (which could mean I still don't get it). We also seem to have gotten back to the beginning, and the quote. Leaving aside for now the motivated stopping [http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/] regarding religion, we have a combination of the Precautionary Principle, the logic of Chesterton's Fence, and the difficulty of assessing risks on account of Black Swans. ... which would prescribe inaction in any question I can think of. It looks as if we're not even allowed to calculate the probability of outcomes, because no matter how much information we think we have, there can always be black swans just outside our models. Should we have ever started mass vaccination campaigns? Smallpox was costly, but it was a known, bounded cost that we had been living with for thousands of years, and, although for all we knew the risks looked obviously worth it, relying on all we know to make decisions is a manifestation of hubris. I have no reason to expect being violently assaulted when I go out tonight, but of course I can't possibly have taken all factors into consideration, so I should stay home, as it will be safer if I'm wrong. There's no reason to think pursuing GMOs will be dangerous, but that's only considering all we know, which can't be enough to meet the burden of proof under the strong precautionary principle. There's not close to enough evidence to even locate Christianity in hypothesis space, but that's just intellectual reasoning... We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn't a catastrophic black swan hiding behind that Fence?
The phrase "no reason to think" should raise alarm bells. It can mean we've looked and haven't found any, or that we haven't looked.
There's no reason to think that there's a teapot-shaped asteroid resembling Russell's teapot either. And I'm pretty sure we haven't looked for one, either. Yet it would be ludicrous to treat it as if it had a substantial probability of existing.
A priori, eating most things is a bad idea. Thus the burden is on the GMO advocates to show their products are safe.
Note that probably all crops are "genetically modified" by less technologically advanced methods. I'm not sure if that disproves the criticism or shows that we should be cautious about eating anything.
We should be cautious about eating anything that doesn't have a track record of being safe.
You changed your demand. If GM crops have fewer mutations than conventional crops, which are genetically modified by irradiation + selection (and have a track record of being safe), this establishes that GM crops are safe, if you accept the claim that, say, the antifreeze we already eat in fish is safe. Requiring GM crops themselves to have a track record is a bigger requirement.
No, I'm saying we need some track record for each new crop including the GMO ones, roughly proportionate to how different they are from existing crops.
Yes, this is different from merely "showing that GMO products are safe". Because we also have the inside view.
I agree with this. But then we look, and this turns into "we haven't looked enough". Which can be true, so maybe we go "can anyone think of something concrete that can go wrong with this?", and ideally we will look into that, and try to calculate the expected utility. But then it becomes "we can't look enough - no matter how hard we try, it will always be possible that there's something we missed". Which is also true. But if, just in case, we decide to act as if unknown unknowns are both certain and significant enough to override the known variables, then we start vetoing the development of things like antibiotics or the internet, and we stay Christians because "it can't be proven wrong".
Its worst impact was and is in Sub-Saharan Africa where the "laws and customs against homosexuality" are fully in place.
The history here [http://www.avert.org/history-hiv-aids-africa.htm] says the African epidemic was spread primarily heterosexually. There is also the confounder of differing levels of medical facilities in different countries. That aside, which is not to say that Africa does not matter, in the US and Europe the impact was primarily in the gay community. I recognise that this is a contentious area though, and would rather avoid a lengthy thread.
The point was just that we should be allowed to weight expected positives against expected negatives. Yes, there can be invisible items in the "cons" column (also on the "pros"), and it may make sense to require extra weight on the "pros" column to account for this, but we shouldn't be required to act as if the invisible "cons" definitely outweigh all "pros".
This suggests we actually need laws and customs against promiscuity. Or just better public education re STIs.
Sorry for the typo.
I think that Taleb has one really good insight -- the Black Swan book -- and then he decided to become a fashionable French philosopher...

Yet none of these sights [of the Scottish Highlands] had power, till a recent period, to attract a single poet or painter from more opulent and more tranquil regions. Indeed, law and police, trade and industry, have done far more than people of romantic dispositions will readily admit, to develope in our minds a sense of the wilder beauties of nature. A traveller must be freed from all apprehension of being murdered or starved before he can be charmed by the bold outlines and rich tints of the hills. He is not likely to be thrown into ecstasies by the abruptness of a precipice from which he is in imminent danger of falling two thousand feet perpendicular; by the boiling waves of a torrent which suddenly whirls away his baggage and forces him to run for his life; by the gloomy grandeur of a pass where he finds a corpse which marauders have just stripped and mangled; or by the screams of those eagles whose next meal may probably be on his own eyes.

Thomas Babington Macaulay, History of England

Frankly, the whole passage Steve Sailer quotes at the link is worth reading.

For those (I have some reason to think there are some) who would rather avoid giving Steve Sailer attention or clicks, or who would like more context than he provides, you can find the relevant chapter at Project Gutenberg [http://www.gutenberg.org/files/2612/2612-h/2612-h.htm#link2HCH0003] along with the rest of volume 3 of Macaulay's History. (The other volumes are Gutenbergificated too, of course.) Macaulay's chapters are of substantial length; if you want just that section, search for "none of these sights" after following the link.

Katara: Do you think we'll really find airbenders?

Sokka: You want me to be like you, or totally honest?

Katara: Are you saying I'm a liar?

Sokka: I'm saying you're an optimist. Same thing, basically.

-Avatar: The Last Airbender

"You sound awfully sure of yourself, Waterhouse! I wonder if you can get me to feel that same level of confidence."

Waterhouse frowns at the coffee mug. "Well, it's all math," he says. "If the math works, why then you should be sure of yourself. That's the whole point of math."

-- Cryptonomicon by Neal Stephenson

This quote seems to me like it touches a common fallacy: that making "confident" probability estimates (close to 0 or 1) is the same as being a "confident" person. In fact, they're ontologically distinct.
Was the context one where Waterhouse was proving a conditional, "if axioms A, B, C, then theorem Z", or one where where he was trying to establish Z as a truth about the world, and therefore also had the burden of showing that axioms A, B, C were supported by experimental evidence?
Neither! The statement he is 'awfully sure of' is a probabilistic conclusion he has derived from experimental evidence via Bayesian reasoning on the world's first programmable computer. Specifically, that statement is this: Part of the argument used to convince Comstock:

Elinor agreed to it all, for she did not think he deserved the compliment of rational opposition.

Jane Austen, Sense and Sensibility.

Ambivalent about this one. I like the idea of rational argument as a sign of intellectual respect, but I don't like things that are so easy to use as fully general debate stoppers, especially when they have a built-in status element.
But note that Elinor doesn't use it as a debate stopper, or to put down or belittle Ferrers. She simply chooses not to engage with his arguments, and agrees with him.
(I haven't read the book) The way I usually come in contact with something like this is afterwards, when Elinor and her tribe are talking about those irrational greens, and how it's better to not even engage with them. They're just dumb/evil, you know, not like us. Even without that part, this avoids opportunities for clearing up misunderstandings. (anecdotally: some time ago a friend was telling me about discussions that are "just not worth having", and gave as an example "that time when we were talking about abortion and you said that X, I knew there was just no point in going any further". Turns out she had misunderstood me completely, and I actually had meant Y, with which she agrees. Glad we could clear that up - more than a year later, completely by accident. Which makes me wonder how many more of those misunderstandings are out there)

I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles... this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.

-- Freeman Dyson

Airplanes may not work on fusion or weigh millions of tons, but still, substituting a few words in I could say similar things about airplanes. Or electrical grids. Or smallpox vaccination. But nobody does.

Hypothesis: he has an emotional reaction to the way nuclear weapons are used--he thinks that is arrogant--and he's letting those emotions bleed into his reaction to nuclear weapons themselves.

Are you sure? I looked for just a bit and found http://inventors.about.com/od/wstartinventors/a/Quotes-Wright-Brothers.htm . I imagine that if inventors have bombastic things to say about the things they invent, they frequently keep those thoughts to themselves to avoid sounding arrogant (e.g. I don't think it would have gone over well if Edison had started referring to himself as "Edison, the man who lit the world of the night").
I meant that nobody accuses people awed by airplanes of being arrogant; I didn't mean that nobody is awed by airplanes. (BTW, I wouldn't be surprised if Edison did say something similar; he was notorious for self-promotion.)

It’s tempting to think of technical audiences and general audiences as completely different, but I think that no matter who you’re talking to, the principles of explaining things clearly are the same. The only real difference is which things you can assume they already know, and in that sense, the difference between physicists and the general public isn’t necessarily more significant than the difference between physicists and biologists, or biologists and geologists.

Reminds me of Expecting Short Inferential Distances.

Penny Arcade takes on the question of the economic value of a sacred thing. Script:

Gabe: Can you believe Notch is gonna sell Minecraft to MS?

Tycho: Yes! I can!

Gabe: Minecraft is, like, his baby though!

Tycho: I would sell an actual baby for two billion dollars.

Tycho: I would sell my baby to the Devil. Then, I would enter my Golden Sarcophagus and begin the ritual.

In a study recently published in the journal PloS One, our two research teams, working independently, discovered that when people are presented with the trolley problem in a foreign language, they are more willing to sacrifice one person to save five than when they are presented with the dilemma in their native tongue.

One research team, working in Barcelona, recruited native Spanish speakers studying English (and vice versa) and randomly assigned them to read this dilemma in either English or Spanish. In their native tongue, only 18 percent said they would…
This quote implies a connection from "people react less strongly to emotional expressions in a foreign language" to "dilemmas in a foreign language don't touch the very core of our moral being". Furthermore, it connects or equates being more willing to sacrifice one person for five and "touch[ing] the core of our moral being" less. All rational people should object to the first implication, and most should object to the second one. This is a profoundly anti-rational quote, not a rationality quote.
I think you're reading a lot into that one sentence. I assumed that just to mean "there should not be inconsistencies due to irrelevant aspects like the language of delivery". Followed by a sound explanation for the unexpected inconsistency in terms of system 1 / system 2 thinking. (The final paragraph of the article begins with "Our research does not show which choice is the right one.")
What do we suppose is meant by 'the very core of our moral being'? If people react differently depending on language, isn't that evidence that there is a connection? Or at least that the moral core is doing something different?
I disagree with Jiro and Salemicus. Learning about how human brains work is entirely relevant to rationality.
Someone who characterized the results the way they characterize them in this quote has learned some facts, but failed on the analysis. It's like a quote which says "(correct mathematical result) proves that God has a direct hand in the creation of the world". That wouldn't be a rationality quote just because they really did learn a correct mathematical result.
There are a lot of senses of 'quote' which I agree this does not fit well, but in the 'excerpt from an interesting article' sense I think it is, well, interesting.
I agree with Jiro, this appears to be an anti-rationality quote. The most straightforward interpretation of the data is that people didn't understand the question as well when posed in a foreign language. Chalk this one up not to emotion, but to deontology.
Possible that they understood the question, but hearing it in a foreign language meant cognitive strain, which meant they were already working in System 2. That's my read anyway. Given to totally fluent second-language speakers, I bet the effect vanishes.
It's also possible that asking in a different language causes subjects to think of the people in the dilemma as "not members of their tribe".

Dreams demonstrate that our brains (and even rat brains) are capable of creating complex, immersive, fully convincing simulations. Waking life is also a kind of dream. Our consciousness exists, and is shown particular aspects of reality. We see what we see for adaptive reasons, not because it is the truth. Nerds are the ones who notice that something is off - and want to see what's really going on.

The View from Hell from an article recommended by asd.

The easy way to make a convincing simulation is to disable the inner critic.

The inner critic that is disabled during regular dreaming turns back on during lucid dreaming. People who have them seem to be quite impressed by lucid dreams.
You still can't focus on stable details.
You can with training. It is a lot like training visualization: In the beginning, the easiest things to visualize are complex moving shapes (say a tree with wind going through it), but if you try for a couple of hours, you can get all the way down to simple geometric shapes.

We see what we see for adaptive reasons, not because it is the truth.


Nature cannot be fooled.

-- Feynman

One might even FTFY the first quote as:

"We see what we see for adaptive reasons, because it is the truth."

This part:

Nerds are the ones who notice that something is off - and want to see what's really going on.

is contradicted by the context of the whole article. The article is in praise of insight porn (the writer's own words for it) as the cognitive experience of choice for nerds (the writer's word for them, in whom he includes himself and for whom he is writing) while explicitly considering its actual truth to be of little importance. He praises the experience of reading Julian Jaynes and in the same breath dismisses Jaynes' actual claims as "batshit insane and obviously wrong".

In other words, "Nerds ... want to see what's really going on" is, like the whole article, a statement of insight porn, uttered for the feeling of truthy insight it gives, "not because it is the truth".

How useful is this to someone who actually wants "to see what's really going on"?

Insight porn, in other words?
I downvoted this and another comment further up for not being about anything but nerd pandering, which I feel is just ego-boosting noise. Not the type of content I want to see on here.
I think the comment in this thread would have been equally relevant and possibly better without the last sentence, but I don't see the Cryptonomicon quote (which I assume to be the one you meant?) as nerd-pandering, since it doesn't make value judgments about being or identifying as a nerd.
The Cryptonomicon quote was great, I was talking about its child comment [http://lesswrong.com/lw/kwd/rationality_quotes_september_2014/ba3s].
Well, if you think the quote doesn't say significantly more than "nerds are great" you are right to downvote it.
That or the extent of the human capacity for pareidolia on waking.

Yeah I have a lot of questions. Like, is this Star Trek style where it's transmitting my matter as energy and reconstructing it on the other end, or is it just creating an exact duplicate of me and I'm really just committing suicide over and over? Hmm, no, I don't feel dead, but am I me, or am I Gordon #6? I might not know the difference. Well, I should continue either way. Even if that means making sacrifices for the Greater Gordon. I mean I can't think of a cause I believe in more than that!

Gordon Freeman, Freeman's Mind

It's really weird how [Stop, Drop, and Roll] is taught pretty much yearly but personal finance or ethics usually just have one class at the end of high school.

-- CornChowdah, on reddit

Yay for personal finance, boo for ethics, which is liable to become a mere bully pulpit for teachers' own views.
It might be possible (and useful) to design an ethics curriculum that helps students to think more clearly about their own views, though, without giving their teachers much of an excuse to preach.
Thinking back to my own religious high school education, I realize that the ethics component (though never called out as such, it was woven into the curriculum at every level) was indeed important; not so much because of the specific rules they taught and didn't teach as simply in teaching me that ethics and morals were something to think about and discuss. Then again, this was a Jesuit school, and Jesuit education has a reputation for being somewhat more Socratic and questioning than the typical deontological viewpoint of many schools. But in any case, yay for personal finance.

If the people be led by laws, and uniformity sought to be given them by punishments, they will try to avoid the punishment, but have no sense of shame. If they be led by virtue, and uniformity sought to be given them by the rules of propriety (could be translated as 'rites'), they will have the sense of shame, and moreover will become good.

In the Analects (論語) of Confucius, translated by James Legge

Interestingly, I found this in a piece about cancer treatment. A possibly underused but apt application of Fluid Analogies.

A lot of people believe fruit juices to be healthy. They must be… because they come from fruit, right? But a lot of the fruit juice you find in the supermarket isn’t really fruit juice. Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit. What you’re drinking is basically just fruit-flavored sugar water. That being said, even if you’re drinking 100% quality fruit juice, it is still a bad idea. Fruit juice is like fruit, except with all the good stuff (like the fiber) taken out… the main thing left of the actual…
Mostly correct, but only very loosely related to rationality. Vitamins also are good stuff but they aren't taken out (or when they are they usually are put back in, AFAIK).
Rationality involves having accurate beliefs. If lots of people share a mistaken belief that causes them to take harmful actions then pointing out this mistake is rationality-enhancing.

pointing out this mistake is rationality-enhancing

The way giving someone a fish is fishing skill-enhancing, I'd guess...

Well, not quite. This particular mistake has a general lesson of ‘what you know about what foods are healthy may be wrong’ and an even more general one ‘beware the affect heuristic’, but there probably are more effective ways to teach the latter.

But the quote isn't attempting to teach a general lesson, it's attempting to improve one particular part of peoples' mental maps. If lots of people have an error in their map, and this error causes many of them to make a bad decision, then pointing out this error is rationality-enhancing.
No, that makes it a useful factoid. I don't consider my personal rationality enhanced whenever I learn a new fact, even if it is useful, unless it will reliably improve my ability to distinguish true beliefs from false ones in the future.
A search brings up http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=101.30 . This seems to contradict the claim that "Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit," since it would have to say "contains less than 1% juice" or not be described as juice at all.

One of the key concepts in Common Law is that of the reasonable man. Re-reading A.P. Herbert, it struck me how his famously insulting description of the reasonable man bears a deep resemblance to that of the ideal rationalist:

It is impossible to travel anywhere or to travel for long in that confusing forest of learned judgments which constitutes the Common Law of England without encountering the Reasonable Man. He is at every turn, an ever-present help in time of trouble, and his apparitions mark the road to equity and right. There has never been a problem…

I imagine that something of a similar sentiment animates much of popular hostility to LessWrong-style rationalism.

I'm not convinced. I know a few folks who know about LW and actively dislike it; when I try to find out what it is they dislike about it, I've heard things like —

  • LW people are personally cold, or idealize being unemotional and criticize others for having emotional or aesthetic responses;
  • LW teaches people to rationalize their existing prejudices more effectively — similar to Eliezer's remarks in Knowing About Biases Can Hurt People;
  • LW-folk are overly defensive of LW-ideas, hold unreasonably high standards of evidence for disagreement with them, and dismiss any disagreement that can't meet those standards as a sign of irrationality;
  • LW has an undercurrent of manipulation, or seems to be trying to trick people into supporting something sinister (although this person could not say what that hidden goal was, which implies that it's something less overt than "build Friendly AI and take over — er, optimize — the world");
  • LW is a support network for Eliezer's approaches to superhuman AI / the Singularity, and Eliezer is personally not trustworthy as a leader of…
I wonder how these people who dislike LW feel about geeks/nerds in general.
Most of them are geeks/nerds in general, or at least have seen themselves as such at some point in their lives.
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide [http://www.vox.com/2014/8/1/5959635/heres-the-full-text-of-the-deleted-time-of-israel-post-backing] . That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here [http://lesswrong.com/lw/l1r/the_puzzle_of_faith_and_belief/be41], Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides. There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.) Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.
That was pretty subtle, actually. You had my blood boiling at the end of the first paragraph and I was about to downvote. Luckily I decided to read the rest.
That makes me more curious; I have the feeling there's quite a bit of anti-geek/nerd sentiment among geeks/nerds, not just non-nerds. (Not sure how to write the above sentence in a way that doesn't sound like an implicit demand for more information! I recognize you might be unable or unwilling to elaborate on this.)
Your theory may have some value. But let's note that I don't know what it means to cross an instrument 'a/c Payee only', and I'll wager most other people don't know. Do you think most UK citizens did in 1935?
The use of the word "instrument" makes the phrase more obscure than it needs to be, but it refers to the word "cheque" earlier in the sentence. I suspect most modern British people probably don't know what it means, but most will have noticed that all the cheques in a chequebook have "A/C Payee only" written vertically across the middle - or at least those old enough to have used cheques will! But people in 1935 would have most likely known what it meant, because 1) in those days cheques were extremely widespread (no credit or debit cards) and 2) unlike today, cheques were frequently written by hand on a standard piece of paper (although chequebooks did exist). The very fact that the phrase was used by a popular author writing for a mass audience (the cases were originally published in Punch and The Evening Standard) should incline you in that direction anyway. Note incidentally that Herbert's most famous case is most likely The Negotiable Cow [http://en.m.wikipedia.org/wiki/Board_of_Inland_Revenue_v_Haddock].
Just fyi, my checks don't say anything like that, and the closest I can find on Google Images just says, "Account Payee."
I don't know for sure, but judging from context I'd say it's probably instructions as to the disposition of a check -- like endorsing one and writing "For deposit only" on the back before depositing it into the bank, as a guarantee against fraud. Granted, in these days of automatic scanning and electronic funds transfer that's starting to look a little cobwebby itself.

You know how people are always telling you that history is actually really interesting if you don’t worry about trivia like dates? Well, that’s not history, that’s just propaganda. History is dates. If you don’t know the date when something happened, you can’t provide the single most obvious reality check on your theory of causation: if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards.

Steve Sailer

Agree with the general point, though I think people complaining about dates in history are referring to the kind of history that is "taught" in schools, in which you have to e.g. memorize that the Boston Massacre happened on March 5, 1770 to get the right answer on the test. You don't need that level of precision to form a working mental model of history.
You do need to know dates at close to that granularity if you're trying to build a detailed model of an event like a war or revolution. Knowing that the attack on Pearl Harbor and the Battle of Hong Kong both happened in 1941 tells you something; knowing that the former happened on 7 December 1941 and the latter started on 8 December tells you quite a bit more. On the other hand, the details of wars and revolutions are probably the least useful part of history as a discipline. Motivations, schools of thought, technology, and the details of everyday life in a period will all get you further, unless you're specifically studying military strategy, and relatively few of us are.
A particularly stark example may be the exact dates of bombing of Hiroshima, Nagasaki, and official surrender. Helps deal with theories such as "they had to drop a bomb on Nagasaki because Japan didn't surrender".
Be careful. That sounds reasonable until you also learn that the Japanese war leadership didn't even debate Hiroshima or Nagasaki for more than a brief status update after they happened, yet talk of surrender and the actual declaration immediately followed the declaration of war by the Soviets and the landing of troops in Manchuria and the Sakhalin islands. Japan, it seems, wanted to avoid the German post-war fate of a divided people. The general problem with causation in history is that you often don't know what you don't know. (It's a tangential point, I know.)
I'm not necessarily saying this is wrong, but I don't think it can be shown to be significantly more accurate than the "bomb ended the war" theory by looking at dates alone. The Soviet declaration of war happened on 8 August, two days after Hiroshima. Their invasion of Manchuria started on 9 August, hours before the Nagasaki bomb was dropped, and most sources say that the upper echelons of the Japanese government decided to surrender within a day of those events. However, their surrender wasn't broadcast until 15 August, and by then the Soviets had opened several more fronts. (That is, that's when Emperor Hirohito publicized his acceptance of the Allies' surrender terms. It wasn't formalized until 2 September, after Allied occupation had begun.) Dates aside, though, it's fascinating to read about the exact role the Soviets played in the end of the Pacific War. Stalin seems to have gotten away with some spectacularly Machiavellian moves.
That was my point. It can be shown to be significantly more accurate, but not by looking at the dates alone.
This tells me that the order of events is important, and not the actual dates themselves. It is true that, if I want to claim that X caused Y, I need to know that X happened before Y; but it does not make any difference whether they both happened in 1752 or 1923.

Dates are a very convenient way of specifying the temporal order of many different events.

The time between them also matters. If X happened a year before Y it is more plausible that X caused Y then if X happened a century before Y.
Great. I have approximately 6000 years worth of events here, happening across multiple continents, with overlapping events on every scale imaginable from "in this one village" to "world war." If you can keep the relationships between all those things in your memory consistently using no index value, go for it. If not, I might recommend something like a numerical system that puts those 6000 years in order. I would not recommend putting "0" at a relatively arbitrary point several thousand years after the events in question have started.
I do agree that an index value is a very useful and intuitive-to-humans way to represent the order of events, especially given the sheer number of events that have taken place through history. However, I do think it's important to note that the index value is only present as a representation of the order of events (and of the distance between them, which, as other commentators have indicated, is also important) and has no intrinsic value in and of itself beyond that.
It's not just the order but the distance that matters. If you want to say that X caused Y, but X happened a thousand years before Y, chances are that you're at the very least ignoring a lot of additional causes. In the end, I think, dates are important. It's only the arbitrary positioning of a starting date (e.g. Christian vs. Jewish vs. Chinese calendar) that genuinely doesn't matter; but even that much is useful for us to talk about historical events. I.e. it doesn't really matter where we put year 0, but it matters that we agree to put it somewhere. (Ideally we would have put it somewhat further back in time, maybe nearer the beginning of recorded history, so we didn't have to routinely do BCE/CE conversions in our heads, but that ship has sailed.)
Or that the interval between X and Y is spacelike, and neither is in the other's forward light cone... :)
Some day the light speed delay might become an issue in historical investigations, but not quite yet :) Even then in the statement "if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards" the term "before" implies that one event is in the causal future of the other.
"Dateless history" can be interesting without being accurate or informative. As long as I don't use it to inform my opinions on the modern world either way, it can be just as amusing and useful as a piece of fiction.

Perceiving magic is precisely the same thing as perceiving the limits of your own understanding.

-Jaron Lanier, Who Owns the Future?, (e-reader does not provide page number)

That doesn't seem quite true... if I'm confused while reading a textbook, I may be perceiving the limits of my understanding but not perceiving magic.
Agreed. I think what Lanier should have said is that a perception of magic is a subset of things one doesn't understand, rather than claiming that they are equal. Bugs that I am currently hunting but haven't nailed down are things I don't understand, but they certainly don't seem magical.
At least you hope not.
The percept of magic, given its possible hallucination or implantation, is not necessarily an instance of limited understanding; certainly not in the relevant sense here, at least.
You could also be perceiving something way way past the limits of your own understanding, or alternately perceiving something which would be well within the limits of your understanding if you were looking at it from a different angle

-- Mother Gaia, I come on behalf of all humans to apologize for destroying nature (...). We never meant to kill nature.

-- You're not killing nature, you're killing yourself. That's what I mean by self-centered. You think that just because you can't live, then nothing can. You're fucking yourself over big time, and won't be missed.

From a surprisingly insightful comic commenting on the whole notion of "saving the planet".

This framing is marginally saner, but the weird panicky eschatology of pop-environmentalism is still present. Apparently the author thinks that using up too many resources, or perhaps global warming, currently represent human extinction level threats?

You can’t see anything properly while your eyes are blurred with tears.

-- C. S. Lewis, A Grief Observed

"... Is it wrong to hold on to that kind of hope?"

[having poisoned her] "I have not come for what you hoped to do. I've come for what you did."

  • V for Vendetta (movie).
Given that you've said in another thread that you consider "blame" an incoherent concept, I don't understand what you think this quote means.
That people will judge your morality by your actions without regard to your intentions. I don't claim that V is particularly rational, but he embodies (exaggerated versions of) traits that real people have. Our moral decisions have consequences in how we are treated.
This is what most people mean by "blame".
Possibly in the eyes of the future, if there is one, we'll all look like brain-damaged children who aren't morally to blame for much of anything. Our actions still have consequences (for example, they might determine whether humanity has a future).
Blame is not the action of treating someone differently because of their moral choices, it's the rationale for doing so. I think the rationale is incoherent, but the actions still exist.

Oromis asked, “Can you tell me, what is the most important mental tool a person can possess?”

[Eragon makes a few wrong guesses, like determination and wisdom.]

“A fair guess, but, again, no. The answer is logic. Or, to put it another way, the ability to reason analytically. Applied properly, it can overcome any lack of wisdom, which one only gains through age and experience.”

Eragon frowned. “Yes, but isn’t having a good heart more important than logic? Pure logic can lead you to conclusions that are ethically wrong, whereas if you are moral and righteous, …
-- Eragon and Angela, Brisingr, by the same author
Someone who says something like the first sentence generally means something like "questions that are significant and in an area I am concerned with". They don't mean "I don't know exactly how many atoms are in the moon, and I find that painful" (unless they have severe OCD based around the moon), and to interpret it that way is to deliberately misinterpret what the speaker is saying so that you can sound profound. But then, I've been on the Internet. This sort of thing is an endemic problem on the Internet, except that it's not always clear how much is deliberate misinterpretation and how much is people who just don't comprehend context and implication. (Notice how I've had to add qualifiers like 'generally' and "except for (unlikely case)" just for preemptive defense against that sort of thing.)
If you don't have any open questions in that category, then you aren't really living as an intellectual. In science, questions are like a hydra: after solving a scientific problem you often have more questions than you had when you started. Schwartz's article [http://jcs.biologists.org/content/121/11/1771.full] on the issue is quite illustrative. If you can't deal with the emotional effects that come with looking at an open question and having it stay open for months and years, you can't do science. You won't contribute anything to the scientific world of ideas if you can only manage to be concerned with an open question for an hour and not for months and years. Of course there are plenty of people in the real world who don't face questions with curiosity but who are in pain when dealing with them. To me that seems like a dull life to live, because they don't concern themselves with living an intellectual life.
I'm not sure that's a critical part of any definition of the word "intellectual".
It's not sufficient to be an intellectual, but if you don't care about questions that aren't solved in short amounts of time, because that's very uncomfortable for you, you won't have a deep understanding of anything. You might memorise the teacher's password in many domains, but that's not what being an intellectual is about.
"You must spend every waking hour in mortal agony, for life is full of unanswerable questions." carries the connotation that someone cannot answer large numbers of every day questions, not that they can't answer a few questions in specialized areas. But the original statement about unanswered questions being painful, in context, does connote that they are referring to a few questions in specialized areas.
In this case it illustrates how the character in question couldn't really imagine living a life without unanswered questions. Given that it's a Science Elf, that fits. For him daily life is about deep questions.
"Unanswered questions" connotes different things in the two different places, though. In one place it connotes "all unanswered questions of whatever kind" and in another it connotes "important unanswered questions". The "cleverness" of the quote relies on confusing the two.
Important depends on whether you care about something. If you have a scientific mindset, then you care about a lot of questions and want answers for them.
But you don't care about the huge number of questions needed to make the response on target.
I just liked seeing the usually-untouchable hero called out on his completely empty boast of how tirelessly curious and inquiring he was.
That's not true. Logic doesn't protect you from GIGO (garbage-in-garbage-out). Actually knowing something about the subject one is interacting with is very important.

Most try to take a fixed time window (say one day, one week, etc.) and try to predict events.

To predict, find events that have certain occurrence but uncertain timing (say, the fragile will break) rather than certain timing but uncertain occurence.

Nassim Taleb

I don't really get this. It seems like both types of prediction matter quite a bit. The only way I can interpret it that makes sense to me is something like: Is he giving advice about making correct predictions given that you just randomly feel like predicting stuff? Or is he giving advice about how to predict things you actually care about?
The latter. Specifically predicting high impact events.

In 2014, marriage is still the best economic arrangement for raising a family, but in most other senses it is like adding shit mustard to a shit sandwich. If an alien came to earth and wanted to find a way to make two people that love each other change their minds, I think he would make them live in the same house and have to coordinate every minute of their lives.

Scott Adams

True or false, I'm trying but I really can't see how this is a rationality quote. It is simply a pithy and marginally funny statement about one topic. I think it's time to add one new rule to the list, right at the top:

* All quotes should be on the subject of rationality, that is, how we develop correct models of the world. Quotes should not be mere statements of fact or opinion, no matter how true, interesting, funny, or topical they may be. Quotes should teach people how to think, not what to believe.

Can anyone say that in fewer words?
This is how:

* it exposes the common fallacy that people who love each other should get married to make their relationship last
* it uses the standard sunk-cost trap avoidance technique to make this fallacy evident

The rest of the logic in the link I gave is even more interesting (and "rational"). Making one's point in a memorable way is a rationality technique. As for your rule, it appears to me so subjective as to be completely useless. Where one sees "what to believe", another sees "how to think".
Assume, for the sake of argument, that the statement is correct. This quote does not expose a fallacy, that is, an error in reasoning. There is nothing in this quote to indicate the rationality shortcoming that causes people to believe the incorrect statement. Rather, it exposes an error of fact. The rationality question is why people come to believe errors of fact and how we can avoid that.

You may be reading the sunk cost fallacy into this quote, or it may be in an unquoted part of the original article, but I don't see it here. If the rest of the article better elucidates the rationality techniques that led Adams to this conclusion, then likely the wrong extract from the article was selected to quote.

Making one's point in a memorable (including humorous) way may be an instrumental rationality technique. That is, it helps to convince other people of your beliefs. In my experience, however, it is a very bad epistemic rationality technique. In particular, it tends to overweight the opinions of people like Adams who are very talented at being funny, while underweighting the opinions of genuine experts in a field, who are somewhat dry and not nearly as amusing.
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
The point of the quote is that it tends to make it harder to stay in love. Which is the opposite of what people want when they get married.
What if he wanted to make them stay in love?
Then he would let them work out a custom solution free of societal expectations, I suspect. Besides, an average romantic relationship rarely survives more than a few years, unless both parties put a lot of effort into "making it work", and there is no reason beyond prevailing social mores (and economic benefits, of course) to make it last longer than it otherwise would.
Just to clarify, you figure the optimal relationship pattern (in the absence of societal expectations, economic benefits, and I guess childrearing) is serial monogamy? (Maybe the monogamy is assuming too much as well?)
Certainly serial monogamy works for many people, since this is the current default outside marriage. I would not call it "optimal", it seems more like a decent compromise, and it certainly does not work for everyone. My suspicion is that those happy in a life-long exclusive relationship are a minority, as are polyamorists and such. I expect domestic partnerships to slowly diverge from the legal and traditional definition of marriage. It does not have to be about just two people, about sex, or about child raising. If 3 single moms decide to live together until their kids grow up, or 5 college students share a house for the duration of their studies, they should be able to draw up a domestic partnership contract which qualifies them for the same assistance, tax breaks and next-of-kin rights married couples get. Of course, this is a long way away still.
To my mind, the giving of tax breaks etc. to married folks occurs because (rightly or wrongly) politicians have wanted to encourage marriage. I agree that in principle there is nothing wrong with 3 single moms or 5 college students forming some sort of domestic partnership contract, but why give them the tax breaks? Do college kids living with each other instead of separately create some sort of social benefit that "we" the people might want to encourage? Why not just treat this like any other contract?

Apart from this, I think the social aspect of marriage is being neglected. Marriage for most people is not primarily about joint tax filing, but rather about publicly making a commitment to each other, and to their community, to follow certain norms in their relationship (e.g., monogamy; the specific norms vary by community). This is necessary because the community "thinks" pair bonding and childrearing are important/sacred/weighty things. In other words, "married" is a sort of honorific. Needless to say, society does not think 5 college students sharing a house is an important/sacred/weighty thing that needs to be honoured. This thick layer of social expectations is totally absent for the kind of arm's-length domestic partnership contract you propose, which makes me wonder why anybody would either want to call it marriage or frame it as being an alternative to marriage.
I don't think anyone suggested that? Some marriages are of convenience, and the honorific sense doesn't apply as well to people who don't fit the romantic ideal of marriage.
I could make exactly the same argument about divorce-able marriage and wonder why would anyone call this get-out-whenever-you-want-to arrangement "marriage" :-D The point is, the "thick layer of social expectations" is not immutable.
If traditional marriage is a sparrow, then marriage with no-fault divorce is a penguin, and 5 college kids sharing a house is a centipede. Type specimen, non-type specimen, wrong category. Social expectations are mutable, yes - what of it? Do you think it's desirable or inevitable that marriage just become a fancy historical legal term for income splitting on one's tax return? Do you think sharing a house in college is going to be, or ought to be, hallowed and encouraged?
Agreed, no fault divorce laws were a huge mistake.
From which point of view?
It reduces the demand for real estate, which lowers its price. Of course this is a pecuniary externality so the benefit to tenants is exactly counterbalanced by the harm to landlords, but given that landlords are usually much wealthier than tenants...
Yes and the social benefit is already captured by the roommates in the form of paying less rent.
I recommend reading the whole Scott Adams post from which the quote came. The quote makes little sense standing by itself, it makes more sense within its context.
The idea that marriage is purely about love is a recent one. Adams' lifestyle might work for a certain kind of wealthy high IQ rootless cosmopolitan but not for the other 95% of the world.
If this is a criticism, it's wide of the mark. Note his disclaimer about "the best economic arrangement". And he certainly speaks about the US only.
And it speaks volumes that he views it as an "economic arrangement", like he's channeling Bryan Caplan.
I don't understand. It looks to me as if Adams's whole point is that marriage isn't supposed to be primarily an economic arrangement, it's supposed to be an institution that provides couples with a stable context for loving one another, raising children, etc., but in fact (so he says) the only way in which it works well is economically, and in any other respect it's a failure. It's as if I wrote "Smith's new book makes a very good doorstop, but in all other respects I have to say it seems to me an abject failure". Would you say it speaks volumes that I view Smith's book as a doorstop? Surely my criticism only makes sense because I think a book is meant to be other things besides a doorstop.

"The spatial anomaly has interacted with the tachyonic radiation in the nebula, it's interfering with our sensors. It's impossible to get a reading."

"There's no time - we'll have to take the ship straight through it!"

"Captain, I advise against this course of action. I have calculated the odds against our surviving such an action at three thousand, seven hundred and forty-five to one."

"Damn the odds, we've got to try... wait a second. Where, exactly, did you get that number from?"

"I hardly think this is the time

...
Duplicate (May 2013). [http://lesswrong.com/lw/hbu/rationality_quotes_may_2013/8w8n]

Most people would die before they think. Most do.

AC Grayling

[This comment is no longer endorsed by its author]