Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
447 comments

If the real radical finds that having long hair sets up psychological barriers to communication and organization, he cuts his hair.

Saul Alinsky, in his Rules for Radicals.

This one hit home for me. Got a haircut yesterday. :P

If I could convince Aubrey de Grey to cut off his beard it would increase everyone's expected longevity more than any other accomplishment I'm capable of.

If I could convince Aubrey de Grey to cut off his beard it would increase everyone's expected longevity more than any other accomplishment I'm capable of.

This I'm not actually sure about. I think the guru look might be a net positive in his particular situation.

Agreed. His fundraising might be benefiting from a strategy that increases the variance of people's opinions of him even if it also lowers their mean.

His girlfriend, or one of his girlfriends (I'm not sure how many he had at the time) told me she thinks the beard is really hot.

There might be a bit of selection bias there.

I wasn't familiar with the name, so I looked it up. There are some pretty strong criticisms of him here: Looks like pseudoscience.

There's no such thing as an evidence-based way to decide on strategies for research funding. Nobody really knows good criteria for deciding which research should get grants.

Aubrey de Grey, among other things, makes the argument that it's good to put out prizes for research groups that get mice to a certain increased lifespan. That's the Methuselah Foundation's Mprize.

Now the Methuselah Foundation has worked to set up the new organ liver prize, which awards $1 million to the first team that creates a regenerative or bioengineered solution that keeps a large animal alive for 90 days without native liver function.

Funding that kind of research is useful whether or not certain arguments Aubrey de Grey made about “Whole Body Interdiction of Lengthening of Telomeres” are correct. In science there's room for people proposing ideas that turn out to be wrong.

The authors provide more arguments than ones about telomeres. Further, they charge that he's misrepresenting evidence systematically, not just making specific proposals that turn out to be wrong. I agree giving prizes for increasing the lifespan of mice is a good idea, but that's not a very strong reason to support him. Do you have examples of novel scientific ideas he's had that have turned out to be useful?

I agree giving prizes for increasing the lifespan of mice is a good idea, but that's not a very strong reason to support him.

Why exactly?

Do you have examples of novel scientific ideas he's had that have turned out to be useful?

The SENS website lists 42 published papers that were funded with SENS grant money. The foundation has a yearly budget of $4 million that it uses to award grants for science that's publishable. A lot of that money comes out of de Grey's own pocket and Peter Thiel's pocket. Other money comes from private donations. It's mainly additional money for the subject that wouldn't be there without Aubrey de Grey's activism.

Aubrey de Grey may very well present a picture of aging that underestimates the difficulties. However, the resulting effect is that a company like Google has now started a project, Calico, that's specifically targeted at curing aging.

If you want to convince Silicon Valley's billionaires to pay for more anti-aging research, Aubrey de Grey might simply be making the right moves, when scientists who are more conservative about the chances of success can't convince donors to put up money.

Because most advances in mouse models don't carry over into humans.
While mouse models aren't perfect, they do produce new knowledge, and you simply can't do some exploratory research in humans.
I should distinguish between "supporting him as an activist" and "supporting him as a legitimate scientific researcher". I think that the fact he provides prizes to others is a decent reason to support him in the first category but not a reason to support him in the second. Even if we collapse the two categories, the mice thing doesn't seem like enough to outweigh misrepresenting research to the public. Mostly, I was wondering whether you knew of any innovations or discoveries he found as a scientist. Because as the above link describes it, even if he has been a good activist he has been a poor scientist, not finding anything new and misleading people about the old. This sounds like Dark Arts, which would make it deserve the label pseudoscience. If your argument is that there's a legitimate place for "marketing" like that, I see your point but I'm reluctant to agree.
If his core impact came from standing in the lab, then his beard wouldn't matter. He did publish a paper with 36 citations in the last century, but that's not where his main impact is. Dark arts would be if he didn't believe in his own ideas and just pretended to. I don't think that's true. If you would label as pseudoscience every grant proposal that's misleading about the likely applicability of the research results to real-world issues, I doubt that much science would be left at the end. In a perfect world, grant committees might hand out money based on evidence-based methods for handing out grant money. We don't live in that world. In our world, grant committees might not be better than monkeys picking randomly. But as long as the funded research at least produces publishable papers that replicate, that's fine. In the current state of academic biology, replicability itself is a pretty high standard.
I have seen him speak a couple of times, and he addressed many of these criticisms in the talks. You might want to read his response to these criticisms before assuming they are valid. A lot of this comes from a lack of appreciation of the difference between science and engineering. In engineering you just have to find something that works; you don't need to understand everything. Some debate here, and you can easily find his talks online: In his talks I did not get the sense that he is positioning himself as a great misunderstood maverick. He does say that in his opinion much ageing research is unproductive because it is aimed at understanding the problem rather than fixing it. For example, rather than tweak metabolic processes to produce slightly smaller amounts of toxic substances, remove those substances by various means, or replace the cells grown old from said toxic substances. His solution to cancer is to remove the telomerase genes. This way cancer cells will die after X divisions. Of course this creates the problem that stem cells will not work. So we will need to replenish germ lines in the immune system, stomach walls, skin, etc. These are "dumb" strategies, and rarely of interest to scientists, perhaps for that reason. There is a similar issue in nanotechnology, discussed in Drexler's book "Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization". For example, you do not need to solve the protein folding problem in generality in order to design proteins that have specific shapes. You just need to find a set of patterns that allow you to build proteins of specific shapes. (edited for typos)
I will definitely be going over this, it looks very helpful. Thank you for making this.
I haven't finished the document yet, but I noticed it keeps on using the word "unscientific", which sounds problematic as one of its aims is to define pseudoscience.
? They explicitly say that there is no rigid definition distinguishing pseudoscience from legitimate science. They claim that in order to distinguish between them it's necessary to point at specific instances of misleading behaviors, and they enumerate these behaviors at the very beginning of the paper.
But in that list of problems, they keep saying "unscientifically simplified", "unscientifically claimed", etc., which is a problem unless they define science. They clearly haven't learned how to taboo words like "science", and it shows here.
Tabooing words is a tool, not a mandatory exercise. They weren't relying on the word "unscientifically" to do the work for them. For example, here is the first instance of the word I spotted upon looking at the article again: It seems clear that they're not relying on the word in an inappropriate way. Tabooing is useful sometimes, but requiring others to taboo any subject of conversation is not productive and adds an unnecessary mechanism for biases to influence us.
The particular use you quote looks justified. I was referring to this, from earlier: where it looked like anything they didn't like could be included under the unscientific category.
This seems bad to me, and "unscientific" sounds like a fair label for such practices; I don't know why you disagree. Admittedly this usage is confusing. But judging from the arguments made elsewhere in the paper, they seem to be saying there's no good evidence suggesting these specific therapies will work. A lot of what he does seems to be highly speculative. Calling speculation unscientific seems fair to me; science is about going out and looking at the world, then creating ideas in response to what you observe.
I think his look (judging from his Wikipedia picture and the tiny images from a Google search) is probably not particularly harmful. It's well-positioned to signal Dignified Hippy to a group that tends to be skeptical of the general anti-deathist position, or a general Respectable Elder, which is not wonderful but pretty decent for appealing to institutional-investor-type groups. I'm not familiar enough with his particular relevance to know whether that balance could be improved for what he actually does.

And you end up like this.

Seems to have worked for them.

For whom? For the Mormon Church or for the specific individuals? :-/
As far as achieving "communication and organization", probably both?
I̶t̶ ̶a̶p̶p̶e̶a̶r̶s̶ ̶m̶o̶s̶t̶l̶y̶ ̶t̶h̶e̶ ̶c̶h̶u̶r̶c̶h̶,̶ ̶a̶p̶p̶a̶r̶e̶n̶t̶l̶y̶.̶ ̶W̶o̶w̶.̶ Edit -- is latter day saints. One of their quotes was by a Jehovah's Witness, so I thought this was a guide for Jehovah's Witnesses. If the question is "Does it work for the specific individuals in the Mormon Church?" the answer is yes.
Jehovah's Witnesses != Mormons, even though both are known for door-to-door solicitation. Reliable statistics are thin on the ground, but the Mormons seem to be doing a little better than average in terms of personal socioeconomic status. (BYU is not, however, an unbiased source.)
You are correct. I'm not sure where I got the idea that LDS was Jehovah's Witnesses.
I don't think it's money we're talking about here.
Money is what the link I was replying to was talking about.
How'd you manage to strikethrough part of your post? I thought the markup for that had been disabled.
"hello, world".replace(/(.)/g, '\u0336$1') == "h̶e̶l̶l̶o̶,̶ ̶w̶o̶r̶l̶d̶"
And a thousand female metalheads shall weep.
It's fun to contemplate alternative methods for avoiding/removing these barriers.

The race is not always to the swift, nor the battle to the strong, but that's the way to bet.

-Damon Runyon

Damon Runyon clearly has not considered point spreads.

If it's stupid and it works, it's not stupid.

"Murphy's Laws of Combat"

If it's stupid and it works, it's not stupid.

This is what survivorship bias looks like from the inside.

the map is not the territory. if it's stupid and it works, update your map.
One of my former fencing instructors had this as a sort of catchphrase. Needless to say, he was a pretty cool guy.
I can divorce my wife by beating her to death. Things can work, but that doesn't stop them from being stupid.

That hypothetical action doesn't "work" in the sense of helping you accomplish all relevant goals, among which, I assume, is the desire to not be incarcerated. (It is also obviously highly immoral.) Put another way, if you define "work" to include something very bad happening to you, that's just "stupid."

I would wager most people who say the above quote in defense of their actions are doing something that only "works" in the sense of accomplishing one specific goal at the expense of others.

When you hear an economist on TV "explain" the decline in stock prices by citing a slump in the market (and I have heard this pseudo-explanation more than once) it is time to turn off the television.

Thomas J. McKay, Reasons, Explanations and Decisions

I guess technically if a lot of stocks paid their dividend on the same day (went ex-divvie) you could get a 0.5-1% fall in stock prices (depending on the dividend yield at the time) without there being a slump - the value of those dividends, which have now been paid out, is simply no longer part of the market. But I agree wholeheartedly with the sentiment.
I don't know if I agree with this. Suppose the stock market is driven by runaway herd behavior. If that's the case, then an inexplicably bad random perturbation might have cascading effects. Saying that the initial slump in the market is driving further decline seems accurate to me.
That would be a slump in the market caused by a decline in stock prices :)
I don't understand what you're trying to say. As used in the original quote they are interchangeable synonyms.
I was poking fun at that.

All the logical work (if not all the rhetorical work) in “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” is being done by the decision about what aspects of liberty are essential, and how much safety is at stake. The slogan might work as a reminder not to make foolish tradeoffs, but the real difficulty is in deciding which tradeoffs are wise and which are foolish. Once we figure that out, we don’t need the slogan to remind us; before we figure it out, the slogan doesn’t really help us.

--Eugene Volokh, "Liberty, safety, and Benjamin Franklin"

A good example of the risk of reading too much into slogans that are basically just applause lights. Also reminds me of "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which."

I mostly agree, but I think the slogan (like, I think, many others about which similar things could be said) has some value nonetheless.

A logically correct but uninspiring version would go like this:

It is a common human failing to pay too much attention to safety and not enough to liberty. As a result, we (individually and corporately) will often be tempted to give up liberty in the name of safety, and in many such cases this will be a really bad tradeoff. So don't do that.

-- Not Benjamin Franklin

Franklin's slogan serves as a sort of reminder that (1) there is a frequent temptation to "give up essential Liberty, to purchase a little temporary Safety" and (2) this is likely a bad idea. Indeed, the actual work of figuring out when the slogan is appropriate still needs to be done, but the reminder can still be useful. And (3) because it's a Famous Saying of a Famous Historical Figure, one can fairly safely draw attention to it and maybe even be taken seriously, even in times when the powers that be are trying to portray any refusal to be terrorized as unpatriotic.

Of course Volokh is aware of the "reminder" function (as he says: "The slogan might work as a reminder") but I think he undervalues it. (He says the "real difficulty" is deciding which tradeoffs to make, but actually just noticing that there's an important tradeoff being proposed is often a real difficulty.) And, alas, its Famous Saying nature is pretty important too.

It strikes me that the original Franklin quote really identifies a specific case of the availability heuristic. That is, when you're focused on safety, you tend to adopt policies that increase safety, without even considering other values such as liberty. There may also be an issue of externalities here. This is really, really common in law enforcement. For example, consider civil asset forfeiture. It is an additional legal tool that enables police to catch and punish more criminals, more easily. That it also harms a lot of innocent people is simply not considered because there is no penalty to the police for doing so. All the cost is borne by people who are irrelevant to them.
The quote always annoyed me too. People bring it up for ANY infringement on liberty, often leaving off the words "essential" and "temporary", making a much stronger version of the quote (and, of course, an obviously wrong one). Tangentially, Sword of Good was my introduction to Yudkowsky, and by extension, LW.

Problem is, "Fucking up when presented with surprising new situations" is actually a chronic human behavior. It's why purse snatchers are so effective -- by the time someone registers Wait, did somebody just yank my purse off my shoulder?, the snatcher is long gone.

-- Ferrett Steinmetz

But is it only a human behavior? I'd think anything with cached thoughts/results/computations would be similarly vulnerable.
mako yass
That's true of most frequently referenced elements of human nature, if not all of them. Even Love. ~The Homo Sapiens Class has a trusted computing override that enables it to lock itself into a state of heightened agreeability towards a particular target unit. More to the point: it can signal this shift in modes in a way that is both recognizable to other units, and which the implementation makes very difficult for it to forge. The Love feature then provides HS units on either side of a reciprocated Love signalling a means of safely cooperating in extremely high-stakes PD scenarios without violating their superrationality circumvention architecture. Hmm... On reflection, one would hope that most effective designs for time-constrained intelligent (decentralized, replication-obsessed) agents would not override superrationality ("override": is it reasonable to talk about it like a natural consequence of intelligence?), and that, then, the love override may not occur. Hard to say.

Adulthood isn't an award they'll give you for being a good child. You can waste... years, trying to get someone to give that respect to you, as though it were a sort of promotion or raise in pay. If only you do enough, if only you are good enough. No. You have to just... take it. Give it to yourself, I suppose. Say, I'm sorry you feel like that and walk away. But that's hard.

Lois McMaster Bujold

The less you care about "the respect" others show towards you, the less power idiots can exert over you. The trick is differentiating whose opinion actually matters (say, in a professional context) and whose does not (say, your neighbors'). Due to being social animals, we're prone to rationalize caring about what anyone thinks of us (say, strangers in a supermarket when your kid is having a tantrum -- "they must think I'm a terrible mom!" -- or in the neighbors' case "who knows, I might one day need to rely on them, better put some effort into fitting in"). Only very few people's opinions actually impact you in a tangible / not-just-social-posturing way. (The standard answer on /r/relationships should be "why do you care about what those idiots think, even in the unlikely case they actually want to help your situation, as opposed to reinforcing their make-believe fool's paradise travesty of a world view".) Interestingly, internalizing such an IDGAF attitude usually does a good job at signalling high status, in most settings. Sigh, damned if you do and damned if you don't.
I don't think this is generally true. Do you mean: "The less you care about "the respect" idiots show towards you, the less power idiots can exert over you."??
Calling my statement A, and yours B, both are true. A is probabilistically true (i.e., in most cases) iff the majority of people are idiots (and assuming a normal distribution of "impact someone can have on you"); B is 'strictly' true, well, as far as strictly holds in social dynamics. If you are a really good idiot oracle, i.e. if you're adept at quickly discerning someone's idiot attribute (or the lack thereof), you should follow B (which is a subset of A: "forall X ..." versus "forall X where P(X)"). If you're not, you should follow A, excepting special cases and, as mentioned, actually undesirable consequences (e.g. professional). For example, there are select people on LW whose approval I covet. So I'm not stringently following A (it's hard to follow one's own advice anyways), but I suppose I'm closer to A than to B, which gives me a better worst-case scenario in terms of "power idiots exert over you".
Given that people aren't really good idiot oracles, and in particular that if you care about the respect other show you in general then on some level you will also often be bothered by disrespect from idiots, I think A can very well be true even if most people aren't idiots.
Yes, I often feel that proposed optimal solutions disregard the feeble nature of the human mind. Solving obesity is a trivial problem: just control your food intake. A one-step algorithm. Trivial, that is, unless you're a human, in which case it's practically infeasible for most. Ignoring our human, ahem, let's call them "quirks", when devising solutions is a classic failure mode which transforms supposedly "optimal" solutions into suboptimal or even actively harmful ones. I'd cite socialism as an example, but I just got out of that rabbit hole like 5 comments ago and have no desire to leave Kansas for now (metaphorically speaking).
I am having trouble understanding the message here... and consequently how this is a good rationality quote. Is this trying to say "don't bother trying to please people in childhood"? Is it "don't bother trying to earn respect as an adult"? Both are poor advice, in general, IMO.

I think it means something more like, "don't expect the behaviors that pleased adults when you were a child, to get you anywhere as an adult. Children are considered pleasing when they're submissive and dependent, but adults are respected for pleasing themselves first."

The rationality connection is, well, winning.

Your mind is like a compiled program you've lost the source of. It works, but you don't know why.

Paul Graham

The situation is far worse than that. At least with a compiled program you can add more memory or run it on a faster computer, disassemble the code and see at which step things go wrong, rewind if there's a problem, interface it with programs you've written, etc. If compiled programs really were that bad, hackers would have already won (since security researchers wouldn't be able to take apart malware), DRM would work, and no emulators for undocumented devices would exist. The state of the mind is many orders of magnitude worse. Also, I'd quibble with "you don't know why". The word I'd use is "how". We know why, perhaps not in detail (and we sort of know how, in even less detail).
i largely agree in context, but i think it's not an entirely accurate picture of reality. there are definite, well known, documented methods for increasing available resources for the brain, as well as doing the equivalent of decompilation, debugging, etc... sure, the methods are a lot less reliable than what we have available for most simple computer programs. also, once you get to debugging/adding resources to programming systems which even remotely approximate the complexity of the brain, though, that difference becomes much smaller than you'd expect. in theory you should be able to debug large, complex, computing systems - and figure out where to add which resource, or which portion to rewrite/replace; for most systems, though, i suspect the success rate is much lower than what we get for the brain. try, for example, comparing success rates/timelines/etc... for psychotherapists helping broken brains rewrite themselves, vs. success rates for startups trying to correctly scale their computer systems without going bankrupt. and these rates are in the context of computer systems which are a lot less complex, in both implementation and function, than most brains. sure, the psychotherapy methods seem much more crude, and the rates are much lower than we'd like to admit them to be - but i wouldn't be surprised if they easily compete with success rates for fixing broken computer systems, if not outperform.
But startups seem to do that pretty routinely. One does not hear about the 'Dodo bird verdict' for startups trying to scale. Startups fail for many reasons, but I'm having a hard time thinking of any, ever, for which the explanation was insurmountable performance problems caused by scaling. (Wait, I can think of one: Friendster's demise is usually blamed on the social network being so slow due to perpetual performance problems. On the other hand, I can probably go through the last few months of Hacker News and find a number of post-mortems blaming business factors, a platform screwing them over, bad leadership, lack of investment at key points, people just plain not liking their product...)
in retrospect, that's a highly in-field specific bit of information and difficult to obtain without significant exposure - it's probably a bad example. for context: friendster failed at 100m+ users - that's several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point to fail due to scalability problems (with that much attention, and experience scaling, scaling should really be a function of adequate funding more than anything else). there's a selection effect for startups, at least the ones i've seen so far: ones that fail to adequately scale, almost never make it into the public eye. since failing to scale is a very embarrassing bit of information to admit publicly after the fact - the info is unlikely to be publicly known unless the problem gets independently, externally, publicized, for any startup. i'd expect any startup that makes it past the O(1m active users) point and then proceeds to noticeably be impeded by performance problems to be unusual - maybe they make it there by cleverly pivoting around their scalability problems (or otherwise dancing around them/putting them off), with the hope of buying (or getting bought) out of the problems later on.
Ah... a compiled program running on limited computing resources (memory, CPU, etc.). I kind of think the metaphor assumes that implicitly. Perhaps it results in a leaky abstraction for most others (i.e., those not working with computers), but I don't really see it as a problem. Agreed that "how" is more accurate than "why".
"Why" usually resolves to "how" (if not always (in the physical world), with one notable exception).
Eventually the truth/reality/answer is indifferent to the phrasing of the question (as why/how). I do think phrasing it as "how" makes it easier to answer (in the instrumental sense) than "why". Also, what is the exception? I'm not aware of it; please point me to it.
"Why are you in the hospital?" - "Because I was injured when a car hit me." "Why did the car hit you?" - "Because the driver was drunk and I was standing at the intersection." "Why was the driver drunk?" and "Why were you standing at the intersection?" and so on and so forth. Every "why" question about something occurring in the natural world is answered by going one (or more) levels down in the granularity, describing one high-level phenomenon via its components, typically lower-level phenomena. This isn't unlike deriving one corollary from another. You're climbing back* the derivation tree towards the axioms, so to speak. It's the same in any system, the math analogy would be if someone asked you "why does this corollary hold", which you'd answer by tracing it back to the nearest theorem. Then "why does this theorem hold" would be answered by describing its lower-level* lemmata. Back we go, ever towards the axioms. All these are more aptly described as "how"-questions, "how" is the scientific question, since what we're doing is finding descriptions, not reasons, in some sense. Of course you could just solve such distinctions via dictionary and then in daily usage use "why" and "how" interchangeably, which is fine. But it's illuminating to notice the underlying logic. Which leaves as the only truly distinct "why"-question the "why those axioms?", which in the real world is typically phrased as "why anything at all?". Krauss tries to reduce that to a "how" question in A Universe From Nothing, as does the Tegmark multiverse, which doesn't work except snuggling in one more descriptive layer in front of the axioms. There is a good case to be made that this one remaining true "why"-question, which does not reduce to merely some one-level-lower description, is actually ill-formed and doesn't make sense. The territory just provides us with evidence, the model we build to compress that evidence implicitly surmises the existence of underlying axioms in the territory
I'm with Douglas Adams on this one: 42 is the answer, we don't know the question. Seriously, though, I've gotten to a stage where I don't wonder much about the one "why" axiom anymore.* Thanks for the clarification though. * -- Used to wonder some 10 years ago, though.

"It’s much better to live in a place like Switzerland where the problems are complex and the solutions are unclear, rather than North Korea where the problems are simple and the solutions are straightforward."

Scott Sumner, A time for nuance

The problems in North Korea are not so simple with straightforward solutions, when we look at them from the perspective of the actors involved.

For the average citizen in North Korea, there are no clear avenues to political influence that don't increase rather than decrease personal risk. For the people in North Korea who do have significant political influence, from a self-serving perspective, there are no "problems" with how North Korea is run.

North Korea's problems might be simple to solve from the perspective of an altruistic Supreme Leader, but they're hard as coordination problems. Some of our societal problems in the developed world are also simple from the perspective of an altruistic Supreme Leader, but hard as coordination problems. Some of the more salient differences are that those problems didn't occur due to the actions of non-altruistic or incompetent Supreme Leaders in the first place, and aren't causing mass subsistence-level poverty.

I do think North Korea's leaders would prefer a state of affairs where they could educate their own elite instead of sending the kids to Switzerland to get a real education. North Korea's military would like to have capable engineers who can produce working technology. On the other hand, a simple act like giving the population access to the internet might produce a chain reaction that blows up the whole state. Jang Sung-taek was someone in North Korea with a lot of political power. According to Wikipedia, South Korea believed that Jang Sung-taek was the de facto leader of North Korea in 2008. Last year North Korean state television announced his execution. His extended family might also have been executed. One of the charges was that he "made no scruple of committing such act of treachery in May last as selling off the land of the Rason economic and trade zone to a foreign country..." It's worth noting that Western countries did engage in policies to block Jang Sung-taek's efforts to create economic change in North Korea.
That simply means that Switzerland has already solved the easier problems North Korea struggles with. To paraphrase, an absence of low-hanging fruit on a well-tended tree means you're probably in a garden.

Isn't that the point of the quote?

Maybe, but if so the quote is ineffective at conveying it.

there is a familiar phenomenon here, in which a certain kind of would-be economic expert loves to cite the supposed lessons of economic experiences that are in the distant past, and where we actually have only a faint grasp of what really happened. Harding 1921 “works” only because people don’t know much about it; you have to navigate through some fairly obscure sources to figure out [what actually happened]. And the same goes even more strongly — let’s say, XII times as strongly — when, say, [Name] starts telling us about the Emperor Diocletian. The point is that the vagueness of the information, and even more so what most people [think they] know about it, lets such people project their prejudices onto the past and then claim that they’re discussing the lessons of experience.

Paul Krugman on the use of examples to obscure rather than clarify

What's the alternative? Cite what's currently going on in other countries (people generally aren't too familiar with that either)? Generalize from one example (where people don't necessarily know all the details either)?

Yes. Because both of those have actual data, and are thus useful - your reasoning can be tested against reality.

We just really don't know very much about the Roman economy, and are unlikely to find out much more than we currently do. Generalizing from one example isn't good science, logic, or argument. But it's better than generalizing from the fog of history. Not a lot better; economics only very barely qualifies as a science on a good day. But Krugman is completely correct to call people out for going in this direction, because doing so reduces it to outright storytelling.

We just really don't know very much about the Roman economy, and are unlikely to find out much more than we currently do.

On the other hand we do know a lot about what happened in 1921, Krugman just wishes we didn't because it appears to contradict his theories.

Generalizing from one example isn't good science, logic, or argument. But it's better than generalizing from the fog of history.

Um, no. History contains evidence, it's not particularly clean evidence, but evidence nonetheless and we shouldn't be throwing it away.

Geographers crowd into the edges of their maps parts of the world which they do not know about, adding notes in the margin to the effect that beyond this lies nothing but sandy deserts full of wild beasts, and unapproachable bogs.

Plutarch, from Life of Theseus.

This makes me think that some of this practice might have been motivated by professional pride on the part of the mapmakers. Such as, "oh, the only reason I didn't go farther was because of the ravenous beasts, and my rival would never be able to push the boundaries farther either so you might as well buy/trust in my mapmaking"

You may be right, but I'm also inclined to think that it's fun to draw monsters.

I know that all revolutions must have ideologies to spur them on. That in the heat of conflict these ideologies tend to be smelted into rigid dogmas claiming exclusive possession of the truth, and the keys to paradise, is tragic. Dogma is the enemy of human freedom. Dogma must be watched for and apprehended at every turn and twist of the revolutionary movement. The human spirit glows from that small inner light of doubt whether we are right, while those who believe with complete certainty that they possess the right are dark inside and darken the world outside with cruelty, pain, and injustice.

Saul Alinsky, in his Rules for Radicals.

It is wrong to put temptation in the path of any nation, For fear they should succumb and go astray; So when you are requested to pay up or be molested, You will find it better policy to say: --

"We never pay any-one Dane-geld, No matter how trifling the cost; For the end of that game is oppression and shame, And the nation that pays it is lost!"

--Rudyard Kipling, "Dane-Geld"

A nice reminder about the value of one-boxing, especially in light of current events.

You don't see that last link as a publicity stunt? I tentatively suspect that it is - though maybe I should put that under 50% - with a lot of the remaining probability going to blackmail of some individual(s).

Rationalizations are more important than sex... Have you ever gone a week without a rationalization?

  • Jeff Goldblum's character in The Big Chill
Frequency is not importance. I think this quote has more humorous than practical merit.

But frequency can be strong evidence of importance.

I suspect many people would experience significant psychological trauma if they were unable to rationalize for a week.

Yes. But probably not above the importance of sex... Interesting. This suggests a method or measure of the importance of compartmentalization. Maybe rationalization is even necessary for dealing rationally with real life (the word kind of gives it away). Could it be that it is needed (in one way or another) for AI to work in the face of uncertainty?
Only in the sense that lying can be called "truthization".
I read that. I agree with the argument. But it doesn't really address the intuition behind my argument. The idea is that you have concurrent processes creating partial models of partial but overlapping aspects of reality. These models a) help make predictions for each aspect (descriptively), b) may help guide action in the context of the aspect (operationally/prescriptively), and c) may be inconsistent with one another at the symbolic level. Do you want to kick out all the benefits to gain consistency? It could be that you can't achieve consistency of overlapping models at all without some super all-encompassing model. Or it could be that such a super-model is horribly big and slow.
If we're going to be building a Seed AI, I really don't think a good design would involve the AI reasoning using multiple, partially overlapping, possibly inconsistent models, especially since I'm not sure how the AI would go about updating those models if it made contradictory observations. For example, upon receiving contradictory evidence, which of its models would it update? One? Two? All of them? If you decide to work with ad hoc hypotheses that contradict not only reality, but each other, just because it's useful to do so, the price you pay is throwing the entire idea of updating out the window. If it's uncertainty you're concerned about, you don't need to go to the trouble of having multiple models; good old Bayesian reasoning is designed to deal with uncertainties in reasoning--no overlapping models required. Moreover, I have a difficult time believing that a sufficiently intelligent AI would face much of an issue with regard to processing speed or memory capacity; if anything, working with multiple models might actually take longer in some situations, e.g. when dealing with a scenario in which several different models could apply. In short, the "super all encompassing model" would seem to work just fine.
Bayesianism works well with known unknowns. But it doesn't work any better than any other system with unknown unknowns. I would say that while Bayesian reasoning can deal well with risk, it's not great with uncertainty. That's not to say uncertainty invalidates Bayesianism, only that Bayesianism is not so spectacularly strong that it is able to overwhelm such fundamental difficulties of epistemology.

To my mind, using multiple models of reality is more or less essential. My reasons for thinking this are difficult to articulate because they're mired in deep intuitions of mine I don't understand very well, but an analogy might help somewhat. Think of the universe's workings as a large and enormously complicated jigsaw puzzle. At least for human beings, when trying to solve a jigsaw puzzle, focusing exclusively on the overall picture and how each individual puzzle piece integrates into it is an inefficient process. You're better off thinking of the puzzle as several separate puzzles instead, and working with clusters of pieces. By doing this, you'll make mistakes: one of your clusters might actually be upside down or sideways, in a way that won't be consistent with the overall picture's orientation. However, this drawback can be countered as long as you don't look at the puzzle exclusively in terms of the individual clustered pieces. A mixed view is best.

Maybe a sufficiently advanced AI would be able to most efficiently sort through the puzzle of the universe in a more rigid manner. But IMO, what evidence we currently have about intelligence suggests the opposite. AI that's worthy of the name will probably heuristically optimize on multiple levels at once, as that capability is one of the greatest strengths machine learning has so far offered us.
Your points are valid. But the question remains whether a pure approach is efficient enough to work at all. Once it does, it could scale as it sees fit.

Where you are going to spend your time and your energy is one of the most important decisions you get to make in life.

Jeff Bezos

“They, instead, commit the fundamental attribution error, which is if something good happens, it’s because I’m a genius. If something bad happens, it’s because someone’s an idiot or I didn’t get the resources or the market moved. … What we’ve seen is that the people who are the most successful here, who we want to hire, will have a fierce position. They’ll argue like hell. They’ll be zealots about their point of view. But then you say, ‘here’s a new fact,’ and they’ll go, ‘Oh, well, that changes things; you’re right.’”

Wouldn't something good happening correctly result in a Bayesian update on the probability that you are a genius, and something bad in a Bayesian update on the probability that someone is an idiot? (Perhaps even you.)
Yes, but if something good happens you have to update on the probability that someone besides you is a genius, and if something bad happens you have to update on the probability that you're the idiot. The problem is people only update the parts that make them look better.
Yes, but the issue is whether or not those are the dominant hypotheses that come to mind. It's better to see success and failure as results of plans and facts than innate ability or disability.
Not without a causal link, the absence of which is conspicuous.
Not necessarily. Causation might not be present, true, but causation is not necessary for correlation, and statistical correlation is what Bayes is all about. Correlation often implies causation, and even when it doesn't, it should still be respected as a real statistical phenomenon. All Jiro's update would require is that P(success|genius) > P(success|~genius), which I don't think is too hard to grant. It might not update enough to make the hypothesis the dominant hypothesis, true, but the update definitely occurs.
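The size of this update is easy to make concrete with a toy Bayes calculation. A minimal sketch, with all probabilities invented purely for illustration:

```python
# Toy Bayesian update: P(genius | success).
# All numbers are hypothetical, chosen only to illustrate the direction
# and (small) size of the update.
p_genius = 0.01        # prior: P(genius)
p_s_given_g = 0.8      # likelihood: P(success | genius)
p_s_given_ng = 0.3     # likelihood: P(success | ~genius)

# Law of total probability, then Bayes' theorem.
p_success = p_s_given_g * p_genius + p_s_given_ng * (1 - p_genius)
posterior = p_s_given_g * p_genius / p_success

print(round(posterior, 4))  # → 0.0262
```

The posterior (about 2.6%) is higher than the 1% prior, so the update definitely occurs, but the "genius" hypothesis is nowhere near dominant, which is exactly the situation described above.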
"Because" (in the original quote) is about causality. Your inequality implies nothing causal without a lot of assumptions. I don't understand what your setup is for increasing belief about a causal link based on an observed correlation (not saying it is impossible, but I think it would be helpful to be precise here). Jiro's comment is correct but a non-sequitur because he was (correctly) pointing out there is a dependence between success and genius that you can exploit to update. But that is not what the original quote was talking about at all, it was talking about an incorrect, self-serving assignment of a causal link in a complicated situation.
Yes, naturally. I suppose I should have made myself a little clearer there; I was not making any reference to the original quote, but rather to Jiro's comment, which makes no mention of causation, only Bayesian updates. Because P(causation|correlation) > P(causation|~correlation). That is, it's more likely that a causal link exists if you see a correlation than if you don't see a correlation. As for your second paragraph, Jiro himself/herself has come to clarify, so I don't think it's necessary (for me) to continue that particular discussion.
Where are you getting this? What are the numerical values of those probabilities? You can have presence or absence of a correlation between A and B, coexisting with presence or absence of a causal arrow between A and B. All four combinations occur in ordinary, everyday phenomena. I cannot see how to define, let alone measure, probabilities P(causation|correlation) and P(causation|~correlation) over all possible phenomena. I also don't know what distinction you intend in other comments in this thread between "correlation" and "real correlation". This is what I understand by "correlation", and there is nothing I would contrast with this and call "real correlation".
Do you think it is literally equally likely that causation exists if you observe a correlation, and if you don't? That observing the presence or absence of a correlation should not change your probability estimate of a causal link at all? If not, then you acknowledge that P(causation|correlation) != P(causation|~correlation). Then it's just a question of which probability is greater. I assert that, intuitively, the former seems likely to be greater. By "real correlation" I mean a correlation that is not simply an artifact of your statistical analysis, but is actually "present in the data", so to speak. Let me know if you still find this unclear. (For some examples of "unreal" correlations, take a look here.)
I think I have no way of assigning numbers to the quantities P(causation|correlation) and P(causation|~correlation) assessed over all examples of pairs of variables. If you do, tell me what numbers you get. I asked why and you have said "intuition", which means that you don't know why. My belief is different, but I also know why I hold it. Leaping from correlation to causation is never justified without reasons other than the correlation itself, reasons specific to the particular quantities being studied. Examples such as the one you just linked to illustrate why. There is no end of correlations that exist without a causal arrow between the two quantities. Merely observing a correlation tells you nothing about whether such an arrow exists. For what it's worth, I believe that is in accordance with the views of statisticians generally. If you want to overturn basic knowledge in statistics, you will need a lot more than a pronouncement of your intuition. A correlation (or any other measure of statistical dependence) is something computed from the data. There is no such thing as a correlation not "present in the data". What I think you mean by a "real correlation" seems to be an actual causal link, but that reduces your claim that "real correlation" implies causation to a tautology. What observations would you undertake to determine whether a correlation is, in your terms, a "real" correlation?
My original question was whether you think the probabilities are equal. This reply does not appear to address that question. Even if you have no way of assigning numbers, that does not imply that the three possibilities (>, =, <) are equally likely. Let's say we somehow did find those probabilities. Would you be willing to say, right now, that they would turn out to be equal (with high probability)?

Okay, here's my reasoning (which I thought was intuitively obvious, hence the talk of "intuition", but illusion of transparency, I guess): The presence of a correlation between two variables means (among other things) that those two variables are statistically dependent. There are many ways for variables to be dependent, one of which is causation. When you observe that a correlation is present, you are effectively eliminating the possibility that the variables are independent. With this possibility gone, the remaining possibilities must increase in probability mass, i.e. become more likely, if we still want the total to sum to 1. This includes the possibility of causation. Thus, the probability of some causal link existing is higher after we observe a correlation than before: P(causation|correlation) > P(causation|~correlation).

If you are using a flawed or unsuitable analysis method, it is very possible for you to (seemingly) get a correlation when in fact no such correlation exists. An example of such a flawed method may be found here, where a correlation is found between ratios of quantities despite those quantities being statistically independent, thus giving the false impression that a correlation is present when it is actually not. As I suggested in my reply to Lumifer, redundancy helps.
Sorry it's taken me so long to get back to this. The illusion of transparency applies not only to explaining things to other people, but to explaining things to oneself. The argument still does not work.

Statistical independence does not imply causal independence. In causal reasoning the idea that it does is called the assumption or axiom of faithfulness, and there are at least two reasons why it may fail. Firstly, the finiteness of sample sizes means that observations can never prove statistical independence, only put likely upper bounds on its magnitude. As Andrew Gelman has put it, with enough data, nothing is independent. Secondly, dynamical systems and systems of cyclic causation are capable of producing robust statistical independence of variables that are directly causally related. There may be reasons for expecting faithfulness to hold in a specific situation, but it cannot be regarded as a physical law true always and everywhere.

Even when faithfulness does hold, statistical dependence tells you only that either causation or selection is happening somewhere. If your observations are selected on a common effect of the two variables, you may observe correlation when the variables are causally independent. If you have reason to think that selection is absent, you still have to decide whether you are looking at one variable causing the other, both being effects of common causes, or a combination.

Given all of these complications, which in a real application of statistics you would have to have thought about before even collecting any data, the argument that correlation is evidence for causation, in the absence of any other information about the variables, has no role to play. The supposed conclusion that P(causation|correlation) > P(causation|~correlation) is useless unless there is reason to think that the difference in probabilities is substantial, which is something you have not addressed, and which would require coming up with something like actual value
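The selection effect mentioned here (observing a correlation between causally independent variables when the data is selected on a common effect of both) can be simulated directly. A minimal sketch with arbitrary, made-up numbers: two independent variables become correlated once we keep only the samples where their sum exceeds a threshold.

```python
import math
import random

def pearson(u, v):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

random.seed(1)
n = 100_000
a = [random.gauss(0, 1) for _ in range(n)]  # two causally and
b = [random.gauss(0, 1) for _ in range(n)]  # statistically independent variables

# Condition on a common effect: keep only samples where a + b is large.
kept = [(u, v) for u, v in zip(a, b) if u + v > 1]
ka = [u for u, _ in kept]
kb = [v for _, v in kept]

r_full = pearson(a, b)        # ~0: a and b really are independent
r_selected = pearson(ka, kb)  # clearly negative under selection

print(round(r_full, 2), round(r_selected, 2))
```

Within the selected subsample, a high value of one variable makes a high value of the other less necessary to pass the threshold, producing a marked negative correlation (around -0.6 in this setup) despite there being no causal link between a and b.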
How will you be able to distinguish between the two? You also seem to be using the word "correlation" to mean "any kind of relationship or dependency" which is not what it normally means.
Redundancy helps. Use multiple analysis methods, show someone else your results, etc. If everything turns out the way it's supposed to, then that's strong evidence that the correlation is "real". EDIT: It appears I've been ninja'd. Yes, I am not using the term "correlation" in the technical sense, but in the colloquial sense of "any dependency". Sorry if that's been making things unclear.
I still don't understand in which sense you use the word "real" in 'correlation is "real"'. Let's say you have two time series, 100 data points in length each. You calculate their correlation, say, Pearson's correlation. It's a number. In which sense can that number be "real" or "not real"? Do you implicitly have in mind the sampling theory where what you observe is a sample estimate and what's "real" is the true parameter of the unobserved underlying process? In this case there is a very large body of research, mostly going by the name of "frequentist statistics", about figuring out what your sample estimate tells you about the unobserved true value (to call which "real" is a bit of a stretch, since normally it is not real).
It seems as though my attempts to define my term intensionally aren't working, so I'll try and give an extensional definition instead: An example would be that site you linked earlier. Those quantities appear to be correlated, but the correlations are not "real".
So you are using "real" in the sense of "matching my current ideas of what's likely". I think this approach is likely to... lead to problems.
Er... no. Okay, look, here's the definition I provided from an earlier comment: You seemed to understand this well enough to engage with it, even going so far as to ask me how I would distinguish between the two (answer: redundancy), but now you're saying that I'm using "real" to mean "matching my current ideas of what's likely"? If there's something in the quote that you don't understand, please feel free to ask, but right now I'm feeling a bit bewildered by the fact that you seem to have entirely forgotten that definition. See also: spurious correlation.
Sigh. All measured correlations are "actually present in the data". If you take two data series and calculate their correlation it would be a number. This measured (or sample) correlation is certainly real and not fake. The question is what does it represent. You claim the ability to decide -- on a completely unclear to me basis -- that sometimes this measured correlation represents something (and then you call it "real") and sometimes it represents nothing (and then you call it "not real"). "Redundancy" is not an adequate answer because all it means is that you will re-measure your sample again and, not surprisingly, will get similar results because it's still the same data. As an example of "not real" correlation you offered the graphs from the linked page, but I see no reason for you to declare them "not real" other than because it does not look likely to you.
Depending on which statistical method you use, the number you calculate may not be the number you're looking for, or the number you'd have gotten had you used some other method. If you don't like my use of the word "real" to denote this, feel free to substitute some other word--"representative", maybe. By "redundancy" I'm not referring to the act of analyzing the data multiple times; I'm referring to using multiple methods to do so and seeing if you get the same result each time (possibly checking with a friend or two in the process). No, I am declaring them "not real" because they were calculated using a statistical method widely regarded as suspect. This suspect method is known to produce correlations that are called "spurious", and my link in the grandparent comment was to this method's Wikipedia page. I'm not sure if you thought the link I provided led to the original page you linked, but as you made no mention of "spurious correlations" (the method, not the page), I thought I'd mention it again.
The quote about causality is a characterization of an opponent's view. I was suggesting that the quote's author may have mischaracterized his opponent's view by interpreting a Bayesian update as an assertion of causality.
No, I don't think so at all. Bayes is about updating your estimates on the basis of new data points. You are not required to be stupid about it.
At a cursory glance, that site you linked does not appear to give any information on how it's generating those correlations, but the term "spurious correlation" actually has a specific meaning. Essentially, one can make even statistically uncorrelated variables appear to be correlated by introducing a third variable, taking the respective ratios, and finding those to be correlated instead. It should go without saying that you should make sure your correlations are actual correlations rather than mere artifacts of your analysis method. As it is, the first thing I'd do is question the validity of those correlations.

However, if the correlations actually are real, then I'd argue that they actually do constitute Bayesian evidence. The problem is that said evidence will likely be "drowned out" in a sea of much more convincing evidence. That being said, the evidence still exists; you just happen to also be updating on other pieces of evidence, potentially much more convincing evidence. So "You are not required to be stupid about it" is just the observation that you should take into account other forms of evidence when performing a Bayesian update, specifically (in this case) the plausibility of the claim (because plausibility correlates semi-strongly with truth). And to that I have but one thing to say: duh!

Bayes is definitely about statistical correlation. You can call it "updating your estimates on the basis of new data points" if you want, but it's still all probabilities--and you need correlations for those. For example: if you don't know how much phenomenon A correlates with phenomenon B, how are you supposed to calculate the conditional probabilities P(A|B) and P(B|A)?
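The ratio construction behind "spurious correlation" is easy to simulate. A minimal sketch, with distributions and sample size chosen arbitrarily for illustration: three mutually independent variables, where dividing two of them by the third manufactures a correlation out of nothing.

```python
import math
import random

def pearson(u, v):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

random.seed(0)
n = 10_000
x = [random.gauss(10, 2) for _ in range(n)]  # three mutually
y = [random.gauss(10, 2) for _ in range(n)]  # independent
z = [random.gauss(10, 2) for _ in range(n)]  # variables

r_raw = pearson(x, y)                             # ~0: no dependence
r_ratio = pearson([a / c for a, c in zip(x, z)],  # substantial "spurious"
                  [b / c for b, c in zip(y, z)])  # correlation via shared z

print(round(r_raw, 2), round(r_ratio, 2))
```

Because x/z and y/z share the denominator z, they come out strongly correlated (roughly 0.5 with these parameters) even though x, y, and z are independent; this artifact is exactly what the term "spurious correlation" originally referred to.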
No, I strongly disagree. I do not need correlations for probabilities -- where did you get that strange idea? To make a simple observation, "correlation" is a linear relationship and there are many things in this world that are dependent in more complex ways. Are you familiar with the Anscombe's quartet, by the way?
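The point that (Pearson) correlation captures only linear relationships can be shown with a toy example of my own (not Anscombe's data): one variable is completely determined by the other, yet the correlation is exactly zero.

```python
import math

def pearson(u, v):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

x = list(range(-50, 51))  # symmetric around zero
y = [a * a for a in x]    # y is a deterministic function of x

r = pearson(x, y)
print(r)  # → 0.0: zero linear correlation despite total dependence
```

Because x is symmetric about its mean, every positive term in the covariance sum is cancelled by a matching negative one, so the linear correlation vanishes even though knowing x tells you y exactly.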
In that case, I'll repeat my earlier question:
There is no general answer -- this question goes to why do you consider a particular data point to be evidence suitable for updating your prior. Ideally you have causal (structural) knowledge about the relationship between A & B, but lacking that you probably should have some model (implicit or explicit) about that relationship. The relationship does not have to be linear and does not have to show up as correlation (though it, of course, might).

The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.

--Marcel Proust

If everyone is thinking alike, someone isn't thinking.

George S. Patton

Ideally, everyone should be thinking alike. How about

I think the intended meaning (phrased in LessWrong terminology) is something more along the lines of the following:

Humans are not perfect Bayesians, and even if they were, they don't start from the same priors and encounter the same evidence. Therefore, Aumann's Agreement Theorem does not hold for human beings; thus, if a large number of human beings is observed to agree on the truth of a proposition, you should be suspicious. It's far more likely that they are signalling tribal agreement or, worse yet, accepting the proposition without thinking it through for themselves, than that they have each individually thought it through and independently reached identical conclusions. In general, then, civilized disagreement is a strong indicator of a healthy rationalist community; look at how often people disagree with each other on LW, for example. If everyone on LW was chanting, "Yes, Many Worlds is true, you should prefer torture to dust specks, mainstream philosophy is worthless," then that would be worrying, even if it is true. (I am not claiming that it is, nor am I claiming that it is not; such topics are, I feel, beyond the scope of this discussion and were brought up purely as examples.)

Ideally, everyone should be thinking alike.

Why? Thinking is not limited to answering well-defined questions about empirical reality.

As a practical matter, I think lack of diversity in thinking is a bigger problem than too much diversity.

Lack of diversity may be a problem because then you've got a lower chance of getting the right answer somewhere in there. It doesn't mean that everyone is thinking correctly. Do you subscribe to truth relativism? Otherwise, what could be thought about that doesn't have a correct answer?
If everyone is thinking in the same way, you have a good result only if that one way is the correct way. If there are a variety of different ways, all of which appear good, they will produce varying proposals which can be considered for their direct practical consequences, and when the different methods come into conflict, all can be tested and potentially improved. You might object that you have deduced the correct way of thinking, and therefore you do not need to be concerned with this. Two counter-arguments: 1) You are most likely overconfident, and the consequences of overconfidently removing all other methods of thinking are likely to be a catastrophic Black Swan when an unseen flaw hits you. 2) To the best of our knowledge, the objectively correct way of thinking is AIXI, which is literally unimplementable.
That, too, but there are other issues as well -- e.g. risk management through diversification. Is she pretty? Should I be a vegetarian? What's the best way of tackling that problem?
A disagreement on any of those questions reduces to either incorrect reasoning or differing preferences. People having identical preferences may be uncommon, but I don't think you can say it means someone isn't thinking.
The issue discussed isn't whether it is a problem that some people might think (or prefer) alike. The issue is (emphasis mine):
Risk management through diversification is a totally different use of the word diversification, and that can be followed by a single person also; I don't have to have two contradictory opinions to not invest all my money/resources/time in one basket. Of the 3 examples you mentioned: 1 is not something people actively "think" about, but is in a sense "automatic", although there is disagreement. If you feel 2 doesn't have a correct answer, then it seems you're endorsing some form of moral nihilism, in which case the question is meaningless. (Note: this is the position which I myself hold.) For 3, people are not actually looking for the "best" answer; they want a satisfactory answer. There is a best answer, but it's usually not worth the effort to find. (For any sufficiently complicated problem, of course.) There may be multiple satisfactory answers, but it's not a sign that someone isn't thinking if everyone comes up with the same satisfactory answer.
Totally different than what? Sure they do, but to make it more stark let me change it a bit: "Am I pretty?" More generally, this example represents the whole class of subjective opinions. Not quite, just rejecting moral realism is quite sufficient here. But in any case, people do think about it, in different ways, and I don't know how would one determine what is a "correct" answer. This example represents the distinction between descriptive and normative. Also, not quite. People do want the best answer, it's just that they are often satisfied with a good-enough answer. However the question of what is "best" is a hard one and in many cases there is no single correct answer -- the optimality is conditional on certain parameters. This example represents the individuality of many "correct" answers -- what is correct depends on the person.
We were talking about diversity of opinions, and you switched to talking about diversity for risk management. Also, if you don't know how to determine a correct answer, there's not much to think about until you do.
Diversity of opinions is helpful for risk management, specifically the risk that you commit all your resources to the single idea that turns out to be wrong. This is commonly known as "don't put all your eggs into one basket". Risk management is not only about money. I strongly disagree. In fact, figuring out how would you recognize a correct answer if it happens to bite you on the ass is the major thing to think about for many problems.
The benefit of diversity I mentioned above (a higher chance of getting the right answer) is then the same thing as what you're talking about, not a separate issue as you suggested ("That, too, but there are other issues as well"). If you can recognize the correct answer when you see it, then the use of diversity is to increase your chances of getting the right answer. So are we down to this: the only correct use of the original quote is when people aren't sure how to recognize a correct answer?
Nope. The thing is, it's not a binary "correct" / "not correct" dilemma. For any reasonably complex problem there might be a single "correct" answer (provided what you consider optimal is fully specified) and a lot of different "technically not correct" answers. Those "technically not correct" answers are all different and will give rise to different consequences. They are not the same -- and if getting the "technically correct" answer is unlikely, you do care about which "incorrect" answers you'll end up using. Basically, diversification helps with dealing with the risks of having to accept "technically not correct" answers because the technically correct one is out of reach.
Twenty art students are drawing the same life model. They are all thinking about the task; they will produce twenty different drawings. In what world would it be ideal for them to produce identical drawings? Twenty animators apply for the same job at Pixar. They put a great deal of thought into their applications, and submit twenty different demo reels. In what world would it be ideal for them to produce identical demo reels? Twenty designers compete to design the new logo for a company. In what world would it be ideal for them to come up with identical logos? Twenty would-be startup founders come up with ideas for new products. In what world would it be ideal for them to come up with the same idea? Twenty students take the same exam. In what world would it be ideal for them to give the same answers? Twenty people thinking alike lynch an innocent man. Does this happen in an ideal world?
In 1 and 2, the thinking is not the type being referred to in the quote. In 3, assuming only one of theirs gets chosen, there are 19 failures, hence 19 non-thinkers or instances of insufficient thinking. In 4, they're not all trying to answer the same question, "what's the best way to make money", but the question "what's a good way to make money". (That may also apply to 3.) I touched on the difference in another thread. In 5, yes, every test-taker should give the correct answer to every question. That's obvious for multiple-choice tests, and even other tests usually have only one really correct answer, even if there may be more than one way to phrase it. In 6, first of all, your example is isomorphic to its complement, where 20 people decide not to lynch an innocent man; if you defend the original quote, then some of them must not be thinking. And the actual answer is that my quoted version is one-sided: agreement doesn't imply idealism, idealism implies agreement. I could add a disclaimer: everyone should be thinking alike in the cases referred to by the first quote. I don't have a good way to narrow down exactly what those are off-hand right now; it's kind of intuitive. Do you have an example where my claim conflicts directly with what the first quote would say, and you think it's obvious in that scenario that they are right and not me?
You are invited by a friend to what he calls a "cool organization". You walk into the building, and are promptly greeted by around twenty different people, all using variations on the same welcome phrase. You ask what the main point of the organization is, and several different people chime in at the same time, all answering, "Politics." You ask what kind of politics. Every single one of them proceeds to endorse the idea that abortion is unconditionally bad. Now feeling rather creeped out, you ask them for their reasoning. Several of them give answers, but all of those answers are variations of the same argument, and the way in which they say it gives you the feeling as though they are reciting this argument from memory. Would you be inclined to stay at this "cool organization" a moment longer than you have to?
Now substitute "abortion is unconditionally bad" with "creationism should not be taught as science in public schools". If you would still be creeped out by that, then your creep detector is miscalibrated; that would mean nobody can have an organization dedicated to a cause without creeping you out. If you would not be creeped out by that, then your initial reaction to the abortion example was probably being mindkilled by abortion, not being creeped out by the fact that a lot of people agreed on something.
Just because I agree with their ideas doesn't mean I won't find it creepy. A cult is a cult, regardless of what it promotes. If I wanted to join an anti-creationist community, I certainly wouldn't join that one, and there are plenty such communities that manage to get their message across without coming off as cultish.
The example is supposed to sound cultish because the people think alike. But I have a hard time seeing how a non-cultish anti-creationist group would produce different arguments against creationism. The non-cultish group could of course avoid all using the same welcome phrase, but that's not really the heart of what the example is supposed to illustrate.
There are multiple anti-creationist arguments out there, so if they all immediately jump to the same one, I'd be suspicious. But even beyond that, it's natural for humans to disagree about stuff, because we're not perfect Bayesians. If you see a bunch of humans agreeing completely, you should immediately think "cult", or at the very least "these people don't think for themselves". (I'd be much less suspicious if we replace humans with Bayesian superintelligences, however, because those actually follow Aumann's Agreement Theorem.)
Yes, actually, and I don't see why it is creepy despite your repeated assertions that it is. And if they gave completely different arguments, you'd complain about the remarkable coincidence that all these arguments suggest the same policy.
Difference of opinion, then. I would find it creepy as all hell. I probably would, yes, but I would still prefer that world to the one in which they gave only one argument.
Now you're just arguing from creepiness. Just because people should reach the same conclusions does not imply they should always do the same thing; e.g. some versions of chicken have an optimal solution where both players have the same options but should do different things. (In a one-off with binding precommitments (or TDT done right), where the sum of outcomes from their doing different things is higher than any symmetrical outcome, they should commit to choosing randomly in coordination.) This example looks similar to me; the cool cultists don't know how to assign turns. Even if I had several clones, we wouldn't all be doing the same things; not because we would disagree on what was important, but because it's unnecessary to do some things more than once. Also, this organization sounds really cool! Where can I join? (Seriously, I've never been in a cult before and would love to have the experience.)
You really don't want that.

Edit: A concrete useful suggestion is to reorganize your life in such a way that you have better things to do with your time than be a tourist in other people's misery and ruin.
Are you speaking from experience or general knowledge? If I go in knowing it's a cult, doesn't that change a lot? I'd be interested in a comparison of survival rates (of general sanity) between people depending on their mindset upon joining.
If you join a cult, then even your physical survival will suddenly become a lot more perilous. You will likely have to conform, or die. Keep that in mind.
...Not that I know much about cults or their relationship to the law, but that seems kind of illegal.
The main problem with joining a cult isn't physical danger, or even the chance of having your mind permanently changed (retention rates for cult membership are very low). It's what they'll get out of you while you're in there. In most cases you can expect to see a lot of pressure to do things like handing over large sums of money, or donating large amounts of unpaid labor, or abandoning social links outside the organization, and those aren't necessarily things you can get back once you've burned them. I'd expect going in with eyes open to mitigate this to some extent, but not perfectly.
The quote is without a provenance that I can discover. If authentic, I presume that Patton was referring to military planning.

I don't see a line separating that type of thinking from cases (1)-(4) and some of (5). Ideas must be found or created to achieve results that are not totally ordered. Thinking better is helpful but thinking alike is not.

Only if you redefine "thinking better" to retroactively mean "won". But that is not what the word "thinking" means. I doubt any of those entrepreneurs are indifferent between a given level of success and 10 times that level.

Perhaps you are thinking only of a limited type of exam. There is only one correct answer to "what is 23 times 87?"[1] Not all exams are like that. Philosophy: Ancient history (from here: The link also provides the marking criteria for the question.) The ideal result can only be described as "twenty students giving the same answer" if, as in case (3), "the same answer" is redefined to mean "anything that gets top marks", in which case it becomes tautological.

I reject both of those. Agreement doesn't imply ideal, of course (case 6 was just a test to see if people were thinking). But neither does ideal imply agreement, except by definitional shenanigans. And your version of Patton's quote doesn't include the hypothesis of ideality anyway. Neither does Patton's. We are, or should be, talking about the real world.

What are those cases? Military planning, I am assuming, on the basis of who Patton was. Twenty generals gather to decide how to address the present juncture of a war. All will have ideas; these ideas will not all be the same. They will bring different backgrounds of knowledge and experience to the matter. In that situation, if they all agree at once on what to do, I believe Patton's version applies.

(1) Ubj znal crbcyr'f svefg gubhtug ba ernqvat gung jnf "nun, urknqrpvzny!" Whfg...qba'g.
Humans have bounded rationality, different available data sets, and different sets of accumulated experience (which is frequently labeled as part of intuition).

Truth lies within a little and certain compass, but error is immense.

Henry St John, Viscount Bolingbroke, Reflections on Exile

Cf. Tolstoy: all happy families are alike, but every unhappy family is unhappy in its own way.

What happens twice probably happens more than twice: are there other notable expressions of this idea?

(There's a well-known principle in software development that's pretty close, though I can't find a Famous Quotation of it right now: when you're choosing a name for a variable or function or whatever, avoid abbreviations: there's only one way to spell a word right, and lots of ways to spell it wrong. Though this is not always good advice.)

Biblical verse on the asymmetry of error: "Enter through the narrow gate. For wide is the gate and broad is the road that leads to destruction, and many enter through it."
That's an interesting comparison. I always took the broad/narrow contrast to be about how easy each path is, and about how many take them, rather than how varied each is, but clearly the ideas are related.
... Usually agreed, on both counts. But: color/colour (and other US/UK pairs...)
True enough. But then there are even more ways to spell it wrong, and the general principle still holds. (With a possible exception for cases where you abbreviate a word in such a way as to remove the bits whose spelling differs. But, e.g. "col" is seldom likely to be a good abbreviation for "colo[u]r", not least because "column" will be a distracting other meaning...)

"Hah! Please. Find me a more universally rewarded quality than hubris. Go on, I'll wait. The word is just ancient Greek for 'uppity,' as far as I'm concerned. Hubris isn't something that destroys you, it's something you are punished for. By the gods! Well, I've never met a god, just powerful human beings with a lot to gain by keeping people scared."

-- Lisa Bradley, a character in Brennan Lee Mulligan & Molly Ostertag's Strong Female Protagonist

Or by physics. Not all consequences for overconfidence are social.
I'm not sure this is very rational. Assuming that you are more competent than you really are -- which seems to be a matter of hubris -- is indeed capable of destroying you.
Yes, but more favorable outcomes are also possible, like becoming the [e.g. 43rd] President.
I think the way it works is that people are built to have hubris for signalling purposes, and then they're built to be lazy and risk-averse to counter the dangers of hubris. If you don't get rid of risk-aversion and akrasia but you do get rid of hubris, that can be problematic.

"You should never bet against anything in science at odds of more than about 10^12 to 1 against."

  • Ernest Rutherford
Alas, as nice a quote as it is, it seems to be bogus:

  • previous discussion:
  • WP discussion to which I've added my failed attempts to find the original:
The neutrino anomaly was about 5*10^6 to 1 against. Not quite 10^12 to 1, but I still think it shows that odds that small aren't what they're cracked up to be.

Our human tendency is to disguise all evidence of the reality that most frustrates us: death. We need only look at the cemeteries, the gravestones, the monuments to understand the ways in which we seek to embellish our mortality and banish from our minds this ultimate failure of our humanity. Sometimes we even resort to “canonizing” our dead. After Saint Peter’s Square, the place where most people are canonized is at wakes: usually the dead person is described as a “saint.” Of course, he was a saint because now he can’t bother us! These are just ways of camouflaging the failure that is death.

-- Pope Francis, Open Mind, Faithful Heart: Reflections on Following Jesus

I think it's worth clarifying that Pope Francis and Jorge Mario Bergoglio are one and the same person.
Lesson learned: do not just copy-paste from Amazon.
I assume this is a pro-cryonics quote, but I don't quite see how it relates to rationality. Its point is, quite clearly, "Accept Jesus as your personal savior and gain the gift of Eternal Life".

It seems to me it's anti-death rather than pro-cryonics; the two aren't quite the same, and in particular being anti-death no more implies being pro-cryonics than it implies being pro-Jesus. And while no doubt Bergoglio's (= Pope Francis's) anti-death-ism is tightly tied up with his pro-Jesus-ism, what he's written here can stand on its own as an expression of an anti-death attitude.

(I'm not sure being strongly opposed to death should really qualify something as a Rationality Quote either, but that's a different complaint from "it's really all about Jesus".)


...human brains do many absurd things while failing to do many sensible things. Our purpose in developing a formal theory of inference is not to imitate them, but to correct them.

E. T. Jaynes, Probability: The Logic of Science

But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.

Hacker News comment

We're similarly shocked whenever authority figures who are supposed to know what they're doing make it plain that they don't, President Obama's healthcare launch being probably the most serious recent example. We shouldn't really be shocked, though. Because all these stories illustrate one of the most fundamental yet still under-appreciated truths of human existence, which is this: everyone is totally just winging it, all the time.

Institutions – from national newspapers to governments and political parties – invest an enormous amount of money and effort in denying this truth. The facades they maintain are crucial to their authority, and thus to their legitimacy and continued survival. We need them to appear ultra-competent, too, because we derive much psychological security from the belief that somewhere, in the highest echelons of society, there are some near-infallible adults in charge.

In fact, though, everyone is totally just winging it.

-- Oliver Burkeman, The Guardian, May 21, 2014

I enjoyed this quote, and have had a great number of self-deprecating laughs with other young professionals about how we were totally winging it. But it is not true. There are those winging it, but they are faking it until they make it, and they make up a smaller group than represented above. The much larger group is made from a rainbow of wrong! Biases, ignorance, bad information, misinformation, conflicting agendas -- the list goes on. The group of people just winging it, pushing their limits, faking it until they make it, are only a piece of the bigger picture of stuff done wrong. It is not fair to overrepresent their influence. Although, it is always a comfort to know there are others out there in the same boat, just winging it.

It is, of course, worrying in itself that there's an open question about whether an extortionist attack via malicious software on a huge company has been conducted by a nation-state, an organised crime group, or a bored teenager.

AlyssaRowan on Hacker News

Schneier on Security blog post

Which [sports] teams win is largely a function of which teams have the best players, and each league has its own way of determining which players end up on which teams. So, in a sense, Team 1 vs. Team 2 is no more a contest of athletic prowess than chess is a test of whether queens are more powerful than bishops. The real battle is between groups of executives, and the sport is player acquisition.

-- Adam Cadre

This seems like explaining vs. explaining away. The process by which better players pick up wins is by winning the "contest of athletic prowess." The game itself is interesting to watch because we like to see competent people play, and when upsets happen, they often happen for reasons that are easily displayed and engaged with in terms of the mechanics of the game.
This is similar to choosing strict determinism over compatibilism. Which players are the "best" depends on each player's individual efforts during the game. You could extend the idea to the executives too, anyway -- which groups of executives acquire better players is largely a function of which have the best executives. Efforts are only one variable here, and the quote did say "largely a function of". That being said, look at how often teams replay each other during a season with a different winner.

We so often confuse “what can be translated into print well” with “what is important and interesting.”

Tyler Cowen

We also confuse "what is important" with "what is interesting" fairly often.

Even though you read much Zen literature, you must read each sentence with a fresh mind. You should not say, "I know what Zen is," or "I have attained enlightenment." This is also the real secret of the arts: always be a beginner.

--Shunryu Suzuki

I think this is a very important sentiment. I'm however not sure how to get others to adopt it.
It's the wisdom that comes with age. Doctors call it Alzheimer's. :-D
Are you saying that because you don't understand the point that the original quote wants to make, or are you using it to try to make an unrelated joke?
I'm using it to make a related joke.
Alzheimer's first attacks short term memory before long-term memory. It makes learning harder. It has little to do with being open to new learning.
The quote doesn't talk about easier learning. Alzheimer's makes it easier to approach problems as "a beginner", with "a fresh mind" :-P
Tough crowd. Or, in ChristianKI's case, tough Kraut. Since IIRC he's a Berliner (an actual one, not like JFK).

"As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind’s wings should have grown."

Ayn Rand

If the bolded pair of words were struck, I'd agree completely. Different people will have different balls and chains.
This quote was from a speech given to West Point cadets. By no means are they identical but it would be relatively hard to find a group of people more identical (from the perspective of being of the same gender, same age (within a few years) same nationality, and same general ideology).
A. False dichotomy - there are other choices. We might choose to compartmentalize our rationality, for example. B. False dichotomy in a different sense - we actually don't have access to this choice. No matter how hard we work, our brains are going to be biased and our philosophies are going to be sloppy. It's a question of making one's brain marginally more organized or less disorganized, not of jumping from insanity onto reason. I'm suspicious that working with the insanity and trying to guide its flow is a better strategy than trying to destroy it. C. Although not having a philosophy leaves us open to bias, having a philosophy can sometimes expose us to bias even further. It's about comparative advantage. Agnosticism has wiggle room that sometimes can be a place for bias to hide, but conversely ideology without self-doubt often serves to crush the truth.
A. How would you implement that choice? B. "We" is a loaded term; speak for yourself. There's benefit to realizing that as a human you have bias. There's no benefit to declaring that you can't overcome some of this bias. C. Wouldn't that depend on your philosophy?
C. Yes. B. Agreed that there's benefit to realizing we have bias, disagree that there's no benefit to declaring some biases aren't overcomeable. Trying to overcome biases takes effort. Wasted effort is bad. It's better to pursue mixed strategies that aim at instrumental rationality than to aim at the perfection described in the Rand quotation. Thoughts that seem complex or messy should not be something we shy away from, reality is complicated and our brains are imperfect. A. I don't know how to describe how to do it, but I do it all the time. It's something humans have to fight against to avoid doing, as it's essentially automatic under normal conditions.
I think you are assuming hyperbolic discounting/short time preference. It requires a lot of effort to overcome bias, perhaps years. But there are times when it is worth it. What perfection? Choosing philosophy? You can always update your philosophy.
There are also times when it's not worth it, in my opinion. Rand contrasts "a conscious, rational, disciplined process of thought and scrupulously logical deliberation" with "a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain". I think it's possible to avoid becoming such a disgrace without scrupulously logical deliberation. Most people are severely biased but are not as unhappy or helpless as Rand's argument would imply. Trimming the excesses of our biases seems more reasonable than eliminating them, to me.
You lost me at "junk heap." There is no conscious choice available to a layperson ignorant of philosophy and logic, and such ways of life are perfectly copacetic with small-enough communities. If anything, it is the careful thinker who is more shackled by self-doubt, better understood as the Dunning-Kruger effect, but Ayn Rand has made it obvious she never picked up any primary literature on cognitive science so it's not surprising to see her confusion here. Quote from 1971's The Romantic Manifesto.
Sorry you're so averse to negative descriptions of the average person's philosophy. Yes there is, they can choose what music, TV, movies, videos etc. to buy/view/play. Do you mean communities where the leader knows about philosophy and can order people around? It's reasonable to doubt certain things, but if learning increases your self-doubt then you're doing it wrong. She was associated with Nathaniel Branden, a well-regarded psychologist. Cognitive science is a relatively new field. I don't think she's confused; she's saying something you disagree with. If you think you've refuted it, I think you're the confused one.

We have remarked that one reason offered for being a progressive is that things naturally tend to grow better. But the only real reason for being a progressive is that things naturally tend to grow worse. The corruption in things is not only the best argument for being progressive; it is also the only argument against being conservative. The conservative theory would really be quite sweeping and unanswerable if it were not for this one fact. But all conservatism is based upon the idea that if you leave things alone you leave them as they are. But you do not. If you leave a thing alone you leave it to a torrent of change.

-- G. K. Chesterton, Orthodoxy

I am reminded of:

"Arf arf arf! Not because arf arf! But exactly because arf NOT arf!" GK Chesterton's dog


In trying to find the above quote by wildcard searching on Google, I stumbled upon another quote of this nature by the dog's owner himself: "I want to love my neighbour not because he is I, but precisely because he is not I." There appears to be another one about science being bad not because it encourages doubt, but because it encourages credulity, but I'm unable to find the exact quote.

Who could have imagined that Zizek was so derivative! Oh wait...
Zizek himself lampshades the method here.
As does Chesterton, less explicitly: and at length. I get the impression that he (thankfully!) eased off on that particular template as time went on.
I'm inclined to think that non-ideological autocracy (we're in charge because we're us and you're you) is the human default. Anything better or worse takes work to maintain.
I'm not sure about that. In fact, I can't think of any actually non-ideologically autocratic society in history. Are you sure you're not confusing "non-ideological" with "having an ideology I don't find at all convincing"?
I seem to remember reading that tribes were more egalitarian than modern society, although it's possible the author was just romanticising the noble savage.
There's reason to believe that foragers were more materially egalitarian than farmers, just because material wealth was harder to store. But it's not obvious that they were more egalitarian when it comes to political power or ability to do violence.
When the most powerful weapon is a mounted knight in full plate mail, it's easy for a small minority to dominate. When the most powerful weapon is the pointed stick...
The medieval period is pretty late in the history of farming; I had in mind the early period of farming, when foraging and farming were more competitive. But I think this focuses too much on visible organized violence and not enough on total violence. Were forager men more or less likely to beat their wives than farmer men? Forager parents vs. farmer parents? It seems possible that a larger percentage of the male forager population had potential access to rape through raids than the percentage of the male farmer population that had potential access to rape through soldiering, but I would want a lot of anthropological data before I made that claim confidently, which is why I don't think it's obvious. This is a bit of a change in topic from the original comparison- tribal hunter-gatherers to modern society- but I think that the sorts of things people use violence and political power for are so different that they can't be compared that directly. As the saying goes, God created man but Sam Colt made them equal: in America it's not that uncommon for individual losers to shoot the most politically powerful man in the country, often leading to his death. I suspect the rate of losers in tribes murdering the local chief is much lower. But maybe what we want to compare is not 'ability to do violence' but 'ability to get away with doing violence,' but even then I don't think we have the data to make a good comparison. Was the ability of tribals to go on the run to escape vengeance better or worse than the ability of moderns? It seems like there are multiple dimensions with different directions for that comparison.
An interesting read, but I was not claiming that a more egalitarian distribution of physical power decreases violence - if anything, having one dominant power leads to peace because no-one challenges them, while as you say, the levelling power of firearms means that anyone can inflict violence. AFAIK many tribal societies were much more violent - I read somewhere that in some tribes the majority of adult male deaths were due to homicide.
Skill is at a large premium. Thus those who have the free time to practice can end up dominating.
Actually, one thing that I noticed while reading this book is that despite engaging in violence far more frequently than people in non-tribal cultures, the Yanomamo don't really seem to have a conception of martial arts or weapons skills, aside from skill with a bow. The takeaway I got was that in small tribal groups like the ones they live in, there isn't really the sort of labor differentiation necessary to support a warrior class. Rather, it seems that while all men are expected to be available for forays into violence, nobody seems to practice combat skills, except for archery which is also used for food acquisition. While many men were spoken of as being particularly dangerous, in all cases discussed in the book, it was because of their ferocity, physical strength, and quickness to resort to violence. In fact, some of the most common forms of violent confrontation within tribes are forms of "fighting" where the participants simply take turns hitting each other, without being allowed to attempt to defend or evade, in order to demonstrate who's physically tougher. I'm not sure how representative the Yanomamo are of small tribal societies as a whole, but it may be that serious differentiation of martial skill didn't come until later forms of societal organization.
This seems like Chesterton is making it up completely. Most progressives base the impulse on the hope that things could be better; dealing with the decay of conservatism is not a hypothesis that even enters in their minds. The 'truth of conservatism' (at least, the straw-conservatism defined by Chesterton here) is taken for granted by most people: if things keep on going like this, they'll keep on being like this. No one has ever become a feminist by saying 'my god! if we leave things alone, the patriarchy will keep becoming even more oppressive and brutal with each year! We need to fight this slide of the status quo, and incidentally, it would be nice if we could not just repair the rot but also yank the status quo towards feminism and get women the vote and stuff like that'. No, it tends to be more like 'the status quo is awful! Let's try to move it towards getting women the vote and stuff like that'.

I am an intransigent atheist, but not a militant one. This means that I am an uncompromising advocate of reason and that I am fighting for reason, not against religion. I must also mention that I do respect religion in its philosophical aspects, in the sense that it represents an early form of philosophy.

Ayn Rand, to a Catholic Priest.

Philosophers have played a game going way back where they believe that popular religion comes in handy as a fiction for keeping the mob in line, but they view themselves as god-optional. The philosophes in the Enlightenment started the experiment of letting the mob in on the truth, and the experiment has apparently gone so far in parts of Europe like Estonia that some populations have lost familiarity with christian beliefs, or even how to pronounce Jesus' name in their own language. Or so Phil Zuckerman claims:
The mob is pretty well educated these days, and the standard of living is so high that there's much less incentive to step out of line. I don't think we can compare modern nations to historical nations to make any claim about whether religion keeps people in line. The claim that people can't pronounce Jesus' name might apply to former Soviet Union countries, but I doubt it applies anywhere else in Europe.
Do you know that Jesus's actual name is Yeshua?
We don't know that. It was likely some variant of the name commonly translated as "Joshua" in English. It could have been Yeshua or Yehoshua or a variety of slightly Aramaicized variants of that.
But the English-language "Jesus" is still far off.
Sure, but I fail to see how that's relevant to the point in question.
"some populations have lost familiarity with christian beliefs, or even how to pronounce Jesus' name in their own language."

I've been killing characters my entire career, maybe I'm just a bloody minded bastard, I don't know, [but] when my characters are in danger, I want you to be afraid to turn the page (and to do that) you need to show right from the beginning that you're playing for keeps.

— George R. R. Martin, Wikiquote, audio interview source

(Changed from an earlier quote I decided I'd keep for later.)

Wow. I am, uh, embarrassed to say that I somehow managed to get caught up in the replies to this comment without ever actually seeing the quote itself until now. (In my defense, I did get here through the Recent Comments sidebar, but still... yeah, not one of my prouder moments.) So, now that I've finally gotten around to reading the quote, uh... ...Maybe I'm dense, but I'm not quite understanding this one. I mean, I understand that it's an explanation of Martin's philosophy of writing, but I'm not really seeing the rationality tie-in. I could probably shoehorn in an explanation for why and how it relates, but the problem with such an explanation is that it would be exactly that: shoehorned in. I feel as though advice of this sort would be much better suited to a writing thread than to a rationality quotes thread. Could someone explain this one to me? Thanks in advance.
Fair point. To be honest, I just got this quote from Martin's Wikiquote page after I decided to save the original and needed something to replace it. (I suppose I could've done something like change the whole post to "[DELETED]" and then retract it, but this seemed good enough at the time.) I can't really make a rigorous case for this quote's appropriateness here, what actually drove my decision to use this was basically a hunch. My after-the-fact rationalization is that maybe this quote sort of touched on the Beyond the Reach of God sense that death is allowed to happen to anyone, at any time, and especially in dangerous situations, as opposed to most fiction which would only allow the hero to die in some big heroic sacrifice?
For an after-the-fact rationalization, that's actually not bad. On the other hand, I think Martin might actually push it a little too far; reality isn't as pretty as most fiction writers make it out to be, true, but it isn't actively out to get you, either. The universe is just neutral. While it doesn't prevent people from suffering or dying, neither does it go out of its way to make sure they do. In ASoIaF, on the other hand, it's as though events are conspiring to screw everyone over, almost as if Martin is trying to show that he isn't like those other writers who are too "soft" on their characters. In doing so, however, I feel he fell into the opposite trap: that of making his world too hostile. Everything went wrong for the characters, which broke my suspension of disbelief every bit as badly as it would have if everything had gone right.
For me, it's not just a problem of suspension of disbelief, it's a problem of destroying involvement in the story. If too much bad happens to the characters, I'm less likely to be emotionally invested in them. Martin's "The Princess and the Queen" (a prequel to ASoIaF) in Dangerous Women is especially awful that way, though the characters aren't developed very much, either. I'm hoping he does a better job in the main series.
His reputation as a "bloody minded bastard" aside, Martin has creznaragyl xvyyrq bss n tenaq gbgny bs bar CBI punenpgre va gur ebhtuyl svir gubhfnaq phzhyngvir cntrf bs gur NFbVnS frevrf fb sne (abg pbhagvat cebybthr/rcvybthr punenpgref, jubz ab bar rkcrpgf gb fheivir sbe zber guna bar puncgre). Gur raqvat bs gur zbfg erprag obbx yrnirf bar CBI punenpgre'f sngr hapyrne, ohg gur infg znwbevgl bs gur snaqbz rkcrpgf uvz gb or onpx va fbzr sbez be nabgure. (Aba-CBI graq gb qebc yvxr syvrf, ohg gur nhqvrapr vf yrff nggnpurq gb gurz.)
Prediction: 30% chance it's a Christmas-related quote.

Nope, just saving my first choice of quote for the beginning of the next thread. I figure if I post a good quote now, people will mostly only see it in the recent comments and recent quotes feeds, and after a few more quotes get posted, they'll mostly forget about it and won't upvote it even if they liked it. Whereas if it were one of the first posts in a thread and people liked it and started upvoting it, it would stay high on the page and gather even more attention and upvotes, creating a positive feedback loop that would earn me karma.

Machiavellian, isn't it? I doubt it'll work out that well, but I figure it's worth a shot.

^Everyone should upvote this in an ironic celebration of your honesty.
I think that we use "Best" (which is a complicated thing other than "absolute points") rather than "Top" (absolute points) precisely to reduce the effectiveness of that strategy.
That's interesting. What criterion/criteria does "Best" use, then? And on a different but related note: does it really negate the strategy? I note that, despite using the "Best" setting, this page still tends to display higher-karma comments near the top; furthermore, most of those high-karma comments seem to have been posted pretty early in the month. That suggests to me that Gondolinian's strategy may still have a shot.
Technical explanation Non-technical explanation
All right, thanks. So, I gave both articles a read-through, and I think that as described, the system implemented won't necessarily negate the strategy (though it may somewhat reduce said strategy's effectiveness). Really, it all depends on how awesome Gondolinian's quote is; if it's awesome enough to get a rating that's 100% positive, then the display order will be organized by confidence level, which in practice just means a greater number of votes most of the time (more votes → less uncertainty), which in turn means it'll need to be posted earlier, which brings us back to the original situation, blah blah blah etc. (A single downvote, however, would be sufficient to screw up the entire affair, so there's that.) I guess that's why you originally said it would only reduce the strategy's effectiveness, not eliminate it entirely. That's awesome. My metaphorical hat is off to Gondolinian for figuring out a way to game the system--and crucially, take the second step: countering akrasia and actually doing it. Instrumental rationality at its finest.
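For readers curious about the mechanics being discussed: the confidence-based "Best" sort described in the linked explanations ranks comments by the lower bound of the Wilson score interval on the fraction of positive votes, so a comment with few votes is ranked cautiously even if all its votes are positive. A minimal sketch (the function name and the 95% z-value are my own choices here; the site may use a different confidence level):

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score confidence interval for the
    true fraction of positive votes. z = 1.96 gives a 95% interval."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n  # observed positive fraction
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# A unanimously upvoted comment at +3/-0 still ranks below one at +50/-5:
# fewer votes means more uncertainty, hence a smaller lower bound.
assert wilson_lower_bound(3, 0) < wilson_lower_bound(50, 5)
```

This is why a single downvote matters less than the discussion above might suggest: the ranking penalizes small sample sizes directly, not just imperfect positivity.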
Don't bet on it. :)

"There is no such thing as uncharted waters. You may not have the chart on hand to show you how to navigate these waters, but the charts exist. Google them."

Joe Queenan, WSJ 11/30/14

Too strong to be literally true but still

Think it's false, both literally and figuratively. Moreover, the guy needs to get out of his cubicle and go to interesting places :-)
As far as literal charts of literal bodies of water on the surface of the earth go, satellite photography has actually pretty much solved that problem. As far as metaphorical waters go, human civilization is larger than most people really think, and consists disproportionately of people finding and publishing answers to interesting questions. "Don't assume the waters are uncharted until you've done at least a cursory search for the charts" is sound advice.
Ahem. Do you really think that a picture of water surface which looks pretty much the same anywhere is equivalent to a nautical chart? Proper nautical charts are very information-dense (take a look) and some of the more important bits refer to things underwater.
I'm fully aware that there's more to nautical charts than the water's surface, and I used the term 'satellite photography' somewhat broadly. More of the deep ocean has been mapped by sensors in polar orbits, which can stay on-station indefinitely and cover the entire globe without regard for local obstacles, than ever was (or likely would have been) by surface craft and submarines.
To quote the National Ocean Service: "the '95 percent unexplored' meme doesn't really tell the full story of our exploration of the oceans." In general, water is an obstacle for satellites mapping the deep ocean floor.
95% of the people, 95% of the time is a less good standard when dealing with interesting people, isn't it ;) EDIT: Downvote for... accepting a different opinion? Duly noted; will do so more quietly in future.
There's a law about that :-P

Carthage must be saved.

Publius Cornelius Scipio Nasica Corculum

Since you're probably aware that one Roman senator (Cato) ended his speeches with "Carthage must be destroyed," you should also know that another responded with the opposite.

How is this a rationality quote?
Accurate beliefs, efficient altruism, and giving historical credit to the good guys. What does it say about us that (I would guess) most well educated westerners know about the "Carthage must be destroyed" quote but not the "Carthage must be saved" one?

What does it say about us that (I would guess) most well educated westerners know about the "Carthage must be destroyed" quote but not the "Carthage must be saved" one?

It says that we care about the real as opposed to the imaginary. That is entirely to our credit.

Regardless of what may be considered moral, Carthage was destroyed. Educated people who wish to understand ancient history therefore naturally wish to learn of Cato's anti-Carthaginian campaign, precisely because it was successful. In addition, Cato the Elder was considered a model of behaviour by subsequent generations of Romans, in a way that Corculum was not, therefore to understand ancient Rome we have to understand the behaviour they valourised.

Similarly, Fumimaro Konoe is not nearly as famous as Hideki Tojo. This is not because educated Westerners favour Tojo's foreign policy, but because Tojo won the debate and Japan went to war.

While I agree with the overall sentiment, I think it's important not to overdo this approach. Let me explain.

Consider the situation where you have a stochastic process which generates values -- for example, you're drawing random values from a certain distribution. So you draw a number and let's say it is 17.

On the one hand you did draw 17 -- that number is "real" and the rest of the distribution which didn't get realized is only "imaginary". You should care about that 17 and not about what did not happen.

On the other hand, if we're interested not just in a single sample, but in the whole process and the distribution underlying it, that number 17 is almost irrelevant. We want to understand the entire distribution and that involves parts which did not get realized but had potential to be realized. We care about them because they inform our understanding of what might happen if the process runs again and generates another value.

Similarly, if you treat history as a sequence of one-off events, you should pay attention only to what actually happened and ignore what did not. But if you want to see history as a set of long-term processes which generate many events, you'... (read more)

You make a good point.
Good point.
Why is Publius Scipio Nasica a "good guy"? His opposition to Carthage's destruction was based on his idea that without a strong external enemy Rome will descend into decadence. (see Plutarch). That, to me, tentatively places him into the "pain builds character so I will make sure you will have lots of pain" camp which is not quite the good guys camp.

Why is Publius Scipio Nasica a "good guy"? His opposition to Carthage's destruction was based on his idea that without a strong external enemy Rome will descend into decadence.

Well, it did.

That's an awesome response.
Forgive my fulfilling of Godwin's Law, but if a Nazi leader repeatedly told Hitler "Don't kill the Jews because struggling against them in the economic marketplace will make Germans stronger" would you consider this leader a "good guy"?
No, I would not. And the equivalent position, actually, would be "Do not kill all the Jews at once, keep on killing them for a long time because the struggle will keep the Germans morally pure". The intent matters.
Okay, what does it have to do with efficient altruism?
It's an example of someone speaking out against genocide. The effort ultimately failed, but engaging in political advocacy against mass murder could reasonably be considered efficient altruism?
Arguable, but let's suppose it can. So, you gave an example of efficient altruism failing. Did you mean it as contra-efficient altruism quote?
I meant it as having a high positive expected value, not a counter-example.
Unfortunately, it ended up being a counterexample. Downvote.

Recently I was with a group of mathematicians and philosophers. One philosopher asked me whether I believed man was a machine. I replied, “Do you really think it makes any difference?” He most earnestly replied, “Of course! To me it is the most important question in philosophy.”

...I imagine that if my friend finally came to the conclusion that he were a machine, he would be infinitely crestfallen. I think he would think: “My God! How horrible! I am only a machine!” But if I should find out I were a machine, my attitude would be totally different. I would

... (read more)
[This comment is no longer endorsed by its author]Reply
Whoops! Thank you.

After reading Contrafactus, a friend said to me: "My uncle was almost President of the U.S.!"

"Really?" I said.

"Sure," he replied, "he was skipper of the PT 108." (John F. Kennedy was skipper of the PT 109).

-- Douglas Hofstadter, Gödel, Escher, Bach

It is a good quote in general, but not quite a rationality quote.
I thought it was a nice illustration of the distinction between map and territory, or between different maps of the same territory. In other words, JFK and the speaker's uncle were very close together by a certain map, but that doesn't mean they were very similar in real life.

At this point it should become apparent that I do not think that theorems are really proved. As G. H. Hardy said long ago, we emit some symbols, another person reads them, and they are either convinced or not by them. To simple people who believe whatever they read and do not question things for themselves, a proof is a proof is a proof, but to others a proof merely supplies a way of thinking about the theorem, and it is up to the individual to form an opinion. Formal proofs, where there is deliberately no meaning, can convince only formalists, and of the

... (read more)
It's actually called Mathematics on a Distant Planet.
Thanks! I've made the change.
I agree with the quote, but don't really see any point or importance to it.

“The birthrate in the United States is at an all-time low. Whereas our death rate is still holding strong at 100 percent.”

Jimmy Kimmel

It's actually only about 45 percent. The death rate for the world as a whole is about 93 percent.
That's just not true. Death rate, as the name implies, is a rate - the population that died in this year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.
That depends on whether fetuses are people ... If yes, the actual birth rate is around 80%.
I wouldn't consider abortion a "birth", per se.
Exactly, so only people who aren't aborted count as born, in which case the birth rate is 80%.
Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".

You got it in one, and non-Martians don't get it. I tell them, you're just used to it here on Earth, and out there it's simple. Out there, no scissors come between word and deed. Out there, word and deed is one, like time and space. You said you'd do - you do...

-- Boris Shtern, A Dinosaur's Notes. Translation mine.
I think blockquotes are generally preferred on Rationality Quotes threads. You can make blockquotes by typing a greater-than symbol (>) followed by a space before each paragraph in your quote (no need for quotation marks). Also, that specific quote doesn't make much sense to me without context.
After a joint American-Soviet mission to Mars, the astronauts return home and refuse to tell who was the first to set foot on the planet. Everybody pesters them, but they say they did it together (though they really couldn't have). The Soviet one is drinking with a new friend, whom he has known for only a few hours, and the friend says it is impossible that Harrison will claim the honour - and so gets dubbed 'a Martian' himself. 'Martians' here is really a name for humans for whom petty things don't matter, who work for mankind.
This is off on a tangent, but why couldn't they? If they went through all the effort to make it a joint mission, stepping off of the ship at the same time, at least to the point where neither could tell who landed first, seems comparatively easy.
I wondered myself. Maybe they decided not to risk anything - just in case.
Supporters of the Soviets were keen on moral equivalency. Imagine if that was done with Nazis. "Petty things like the difference between people who burn others in ovens, and people who don't, don't matter".
I think that the quote means "petty things like who stepped out of the spaceship first don't matter", not "petty things like the difference between us and those capitalist pigs don't matter". It's also true that the line between "American" and "Soviet" (or, for that matter, between "American" and "1940s German") is not drawn in remotely the same way as the line between "burns others in ovens" and "doesn't": it is mainly indicative of which part of the world you were born in. I have much greater sympathy for moral equivalency in the first case than in the second.
The line between a random American and a random Soviet person depends mostly on what part of the world they were born in. A person who lands on Mars is not random; they couldn't get to Mars without enthusiastically participating in the system. The people who praise the astronauts are aware of this too, and will treat the astronauts' successes as a success of the system, not mainly as the success of an individual astronaut.
They both landed on Mars. Which one touched first is random. If it wasn't, it would be signalling that one country is better, which is the exact opposite of the point of a joint mission to Mars. It's to show the two countries respect each other as equals. Getting to Mars is just a bonus.
I find it hard to think of someone who "enthusiastically participates in the system" in order to go to space as being morally culpable for everything that the system has done. It's not quite a matter of choosing between participating in the system or being punished by the system. It's possible to live an inconspicuous life with only mild risk of suffering the consequences of no enthusiastic participation. But this is incompatible with accomplishing something noteworthy. I can admire someone who has the ambition of going to space, but denies that ambition on moral grounds because it would support a political faction. However, I think a moral framework that demands this is unreasonably strict.
I'm not holding the astronaut responsible for anything. It's the reverse: because the astronaut had to work within the system to succeed, his success is not his personal success, it's the system's success. Saying "it doesn't matter which astronaut won" is saying "it doesn't matter which system won". When one system starved up to 7.5 million people to death and another didn't, which system won is not a petty issue. (You could, however, argue that "first man on Mars" and "second man on Mars" are very similar achievements and that one is so marginally close to the other the difference between the two is petty. But I don't think that's what most people who express this kind of pettiness sentiment mean.)
I see your point; I think that saying "the system won", though, is an easy story to tell that doesn't reflect what actually happens very well. I don't see how the starving-people-to-death part of the system and the space-race part are sufficiently connected that the space-race part winning helps the starving-people-to-death part. (If you disagree about this prediction, I will be unhappy to discuss it further but happy to say "okay, this is the underlying fact on which we disagree, let's stop there". Is this the underlying fact on which we disagree, or is there more to it?) Thus, my understanding of the original quote is "The Pursuit of Science lies above political differences, and sabotaging the former because of the latter is petty."
Via propaganda. Specifically, in the form of "Yes, all y'all are starving and we had to shoot a few of your friends and relatives for not being enthusiastic enough, but look! We are actually achieving GREAT THINGS! Digging ditches in Siberian permafrost is part of the common effort which makes our society SUCCESSFUL and we can prove that it is successful because we just WON THE SPACE RACE!". I think that the Soviet Union actually got a lot of propaganda mileage out of Sputnik and Gagarin in real life. And that is, of course, ignoring the other part -- that space rockets with minor modifications function perfectly well as ICBMs...
Also, there's a more direct connection: They both involve the government deciding to allocate resources. In the case of the space race, the government allocates resources to something; in the case of the starving Ukrainians, the government takes resources away from someone. But they're flip sides of the same process, which is a top-down dictatorship using ideology to decide who gets to have the resources.
All governments are in the business of allocating resources, both directly (US government spending is about one third of GDP) and indirectly through laws and regulations.
Try replacing "starving people to death" with "putting people in ovens".
That particular person didn't care for the system. He was the editor-in-chief of a (fictional) journal, 'Science and Thought', dedicated to protecting the population from fraud, and he literally wasn't afraid of the devil. But he did care about space exploration. The quote was meant to express the much simpler message about the selective pressure 'out there' that makes ordinary one-upmanship as a habit of mind irrelevant.
Ok, how about the difference between "sends people to the gulags on trumped up charges" and "doesn't", or "engineers famines" and "doesn't"?
That's (approximately) the difference between Stalin and not Stalin. I'm pretty sure most Soviet astronauts had never engineered a single famine. The "participating in the system" argument given by Jiro is more reasonable, so see my cousin comment for my reply to that.
And most members of the Nazi rocket program never put anyone into an oven.
Yes, imagine. (Spoilers for "Worm".)
Do you have a summary? I don't want to bother reading that.
Summary: The superheroes of Worm regularly fight against existential threats called Endbringers, and have to work together with villains (some of whom are neo-nazis) to do it. They've been able to set up rules to ensure the villains can co-operate (no arrests, no using villains as bait, everyone gets medical attention afterwards), without which the Endbringers would win. However, the linked chapter explains that they've failed to extend this to post-fight celebrations, since the public won't accept any form of moral equivalence. Since the public will protest if villains are honoured for their sacrifices, and the villains riot if heroes are honoured but villains are not, no-one gets honoured.
I think "petty things don't matter" connotes that the differences are small on an absolute scale and that working together demonstrates this, not that the differences are merely small in relation to the goal on which everyone works together. The latter is honoring Nazis for their sacrifices; the former is saying "the fact that Nazis can sacrifice shows that it's not important to oppose Naziism".
If you were writing any story in which the protagonist works with Nazis or neo-Nazis, you'd want them to face a greater threat - perhaps an existential threat, like nuclear war in the time when the USSR existed. Otherwise you'd be writing a ridiculous straw-man. Interesting note for people who've read "Worm" - gur svefg Raqoevatre gb nccrne va gur jbeyq bs gur fgbel jnf enqvbnpgvir, gur frpbaq bprna-eryngrq, naq gur fpnel bar znxrf zr guvax bs NTV.
Offend with substance, don't offend with style. Fixing broken windows is useful even if you don't care about the actual window.
I find myself confused... :(
Formatting quotes properly isn't hard; there's no good reason against it.
Better, but make sure you keep the stuff you don't want quoted on a different paragraph.

After that incident, my doctor and I had a long, spirited conversation about statistics and Bayesian analysis. And one reason he is no longer my doctor is that he displayed very poor judgment in handling the trade-off between false positives and false negatives. That test should never have been run, because it was vastly more likely to produce unnecessary emotional anguish (and health-care spending!) than useful information.

Megan McArdle

[This comment is no longer endorsed by its author]Reply

If the real radical finds that having long hair sets up psychological barriers to communication and organization, he cuts his hair.

Saul Alinsky, in his Rules for Radicals.

(This one hit home. :p)

[This comment is no longer endorsed by its author]Reply

If the real radical finds that having long hair sets up psychological barriers to communication and organization, he cuts his hair.

Saul Alinsky, in his Rules for Radicals.

[This comment is no longer endorsed by its author]Reply

All the human being need do is see what needs to be done, and do it.


(Is self-reference ok? This struck me.)

A quote from my son (just turned eleven years):

Me: "What is the meaning of life?"

He: "To live it."

This sounds trite but I think it is actually the correct (or most sensible) answer. I was kind of impressed. Maybe we should ask children more of these grand questions and take their factual answers, instead of treating the questions as deeper than they are.

I prefer: Kurt Vonnegut, Breakfast Of Champions
Indeed, I suppose their worldviews are much clearer and in some ways less biased than ours. When a child is born he sees the world as it is, not through many prisms, including our subjective value judgements.
Insightful? I give him credit for his epistemic humility, at least.

"Don’t let anybody discourage you or tell you that intelligence doesn’t pay or that success in life has to be achieved through dishonesty or through sheer blind luck. That is not true. Real success is never accidental and real happiness cannot be found except by the honest use of your intelligence."

Ayn Rand

Too strong.

Nobody EVER got successful from luck? Not even people born billionaires or royalty?

Nobody can EVER be happy without using intelligence? Only if you're using some definition of happiness that includes a term like "Philosophical fulfillment" or some such, which makes the issue tautological.


“Never confuse honor with stupidity!” ― R.A. Salvatore, The Crystal Shard