This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

  •  Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  •  Do not quote yourself.
  •  Do not quote comments/posts on LW/OB.
  •  No more than 5 quotes per person per monthly thread, please.
Rationality quotes: April 2010
309 comments (some truncated due to high volume)
[-]RobinZ420

I listen to all these complaints about rudeness and intemperateness, and the opinion that I come to is that there is no polite way of asking somebody: have you considered the possibility that your entire life has been devoted to a delusion? But that’s a good question to ask. Of course we should ask that question and of course it’s going to offend people. Tough.

Daniel Dennett, interview for TPM: The Philosopher's Magazine

5Rain
If the point is to get them to answer or reason about the topic, then I think we should reject the statement that "there is no polite way of asking." We should find a way of asking politely, such as teaching them to process our questions instead of answering with cached thoughts. Being offensive doesn't win. I also think it's a poorly phrased question, since it's easily brushed off with "yes/no", avoiding any of the deeper implications in an apparent effort to make it catchy and instantly polarizing. If the point is to upset people, to feel righteous, or to signal tribal affiliation, then go right ahead.
3RobinZ
This is not universally true, but I would support trying to create nonoffensive ways to deliver the message - the combination of direct and conciliatory methods is probably more powerful than either alone.
2Rain
Yes, I considered that to be the primary statement under contention. It's not a strategy I wish to use, so I decided to speak out against it even as I realize that's kind of the point, to have purists who can continue to show that there's further to go, and a spectrum of other positions to provide a more gradual path. I recognize the potential usefulness of it even as I deride it; I am good cop.

I recall, for example, suggesting to a regular loser at a weekly poker game that he keep a record of his winnings and losses. His response was that he used to do so but had given up because it proved to be unlucky. - Ken Binmore, Rational Decisions

A side note: All three of the quotes I've posted are from Binmore's Rational Decisions, which I'm about a third of the way through and have found very interesting. It makes a great companion to Less Wrong -- and it's also quite quotable in spots.

[-]gwern110

Wow - I think I felt real physical pain in my eyes as I read that one.

[-]RobinZ350

My dad used to have an expression: "Don't tell me what you value. Show me your budget, and I'll tell you what you value."

Joe Biden, remarks delivered in Saint Clair Shores, MI, Monday, September 15, 2008

Of course, to really see what someone values you'd have to see their budget profile across a wide range of wealth levels.

"Everyone thinks they've won the Magical Belief Lottery. Everyone thinks they more or less have a handle on things, that they, as opposed to the billions who disagree with them, have somehow lucked into the one true belief system."

-- R Scott Bakker, Neuropath

1cousin_it
You mean, like every Bayesian believes their prior is correct?
4Liron
Bayesians don't believe they lucked into their priors. They have a reflectively consistent causal explanation for their priors.
1Unknowns
Even if their explanation were correct, they would still have lucked into them. Others have different priors and no doubt different causes for their priors. So those Bayesians would have been lucky, in order to have the causes that would produce correct priors instead of incorrect ones.
4Eliezer Yudkowsky
But that still doesn't need to be luck. I got my priors offa evolution and they are capable of noticing when something works or doesn't work a hundred times in a row. True, if I had a different prior, I wouldn't care about that either. But even so, that I have this prior is not a question of luck.

It is luck in a sense - every way that your opinion differs from someone else, you believe that factors outside of your control (your intelligence, your education, et cetera) have blessed you in such a way that your mind has done better than that poor person's.

It's just that it's not a problem. Lottery winners got richer than everyone else by luck, but that doesn't mean they're deluded in believing that they're rich. But someone who had only weak evidence ze won the lottery should be very skeptical. The real point of this quote is that being much less wrong than average is an improbable state, and you need correspondingly strong evidence to support the possibility. I think many of the people on this site probably do have some of that evidence (things like higher than average IQ scores would be decent signs of higher than normal probability of being right) but it's still something worth worrying about.

8Eliezer Yudkowsky
I think I agree with that: There's nothing necessarily delusive about believing you got lucky, but it should generally require (at least) an amount of evidence proportional to the amount of purported luck.
0cousin_it
Then it would make sense to use some evolutionary thingy instead of Bayesianism as your basic theory of "correct behavior", as Shalizi has half-jokingly suggested.
-3Vladimir_Nesov
Priors can't be correct or incorrect. (Clarified in detail in this comment.)
2PhilGoetz
Sounds mysterious to me. Priors are not claims about the world?
1Vladimir_Nesov
Not quite. They are the way you process claims about the world. A claim has to come in the context of a method for its evaluation, but a prior can only be evaluated by comparing it to itself...
1Vladimir_Nesov
This downvoting should be accompanied with discussion. I've answered the objections that were voiced, but naturally I can't refute an incredulous stare.
0Nick_Tarleton
The normal way of understanding priors is that they are or can be expressed as joint probability distributions, which can be more or less well-calibrated. You're skipping over a lot of inferential steps.
0Vladimir_Nesov
Right. We could talk of the quality of an approximation to a fixed object that is defined as the topic of a pursuit, even if we can't choose the fixed object in the process and thus there is no sense in having preferences about its properties.
1Nick_Tarleton
I can't tell what you're talking about.
0Vladimir_Nesov
Say, you are trying to figure out what the mass of an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and characterize someone else's wild guesses about this value as correct or not correct at all. On the other hand, it doesn't make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can't be correct or incorrect, can't be more or less well-calibrated -- talking this way would indicate a conceptual confusion. When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts, the concepts that we might want to approximate, not approximations. Characterizing these facts as correct or incorrect doesn't make sense for similar reasons. Furthermore, since they are fixed elements of an ideal decision-making algorithm, it doesn't make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, and with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn't itself an approximation, and so can't be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can't be too high or too low.
2RobinZ
I follow you up until you conclude that priors cannot be correct or incorrect. An agent with more accurate priors will converge toward the actual answer more quickly - I'll grant that's not a binary distinction, but it's a useful one.
0Vladimir_Nesov
If you are an agent with "less accurate prior", then you won't be able to recognize a "more accurate prior" as a better one. You are trying to look at the situation from the outside, but it's not possible where we discuss your own decision-making algorithms.
2RobinZ
If I'm blind, I won't be able to recognize a sighted person by sight. That doesn't change the fact that the sighted person can see better than the blind person.
2Vladimir_Nesov
There is no God's view to define the truth, and Faith to attain it. You only get to use your own eyes. If I predict a fair coin will come up "heads", and you predict it'll come up "tails", and it does come up "tails", who was closer to the truth? The truth of such a prediction is not in how well it aligns with the outcome, but in how well it takes into account available information, how well it processes the state of uncertainty. What should be believed given the available information and what is actually true are two separate questions, and the latter question is never asked, as you never have all the information, only some state of uncertainty. Reality is not transparent, it's not possible to glimpse the hidden truth, only to cope with uncertainty. Confuse the two at your own peril.
2RobinZ
I'm so confused, I can't even tell if we disagree. What I am thinking of is essentially the argument in Eliezer Yudkowsky's "Inductive Bias":
0JGWeissman
If you can inspect and analyze your own prior (using your own prior, of course) you can notice that your prior is not reflectively consistent, that you can come up with other priors that your prior expects to get better results. Humans, who are not ideal Bayesians but have a concept of ideal Bayesians, have actually done this. (Though reflective consistency does not guarantee effectiveness. Some priors are too ineffective to notice they are ineffective.)
0Vladimir_Nesov
This might be a process of figuring out what your prior is, but the approximations along the way are not your prior (they might be some priors).
0JGWeissman
I see three priors to track here:
1. The prior I would counterfactually have had if I were not able to make this comparison.
2. The ideal prior I am comparing my approximation of prior (1) to.
3. My actual prior resulting from this comparison, reflecting that I try to implement prior (2), but cannot always compute/internalize it.
I have prior (3), but I believe prior (2) is better.
0Vladimir_Nesov
If you have a concept of prior (2), and wish to get better at acting according to it over time, then (2) is your real prior. It is what you (try to) use to make your decisions. (3) is just a tool you employ in the meantime, and you may pick a better tool, judging with (2). I don't know what (1) means (or what (2) means when (1) is realized).
0JGWeissman
(1) is the prior I would have if I had never inspected and analyzed my prior. It is a path not taken from prior (3). The point of introducing it was to point out that I really believe (2) is better than (3), as opposed to (2) is better than (1) (which I also believe, but it isn't the point). Does "your prior" refer to (A) the prior you identify with, or (B) the prior that describes your actual beliefs as you process evidence, or something else? If (A), I don't understand: If (B), I don't understand:
0wnoise
They can be more or less useful, though.
0Vladimir_Nesov
According to what criterion? You'd end up comparing a prior to the prior you hold, with the "best" prior for you just being the same as yours. Like with preference. Clearly not the concept Unknowns was assuming -- you don't need luck to satisfy a tautology.
0[anonymous]
Correspondence to reality. (Do you realize how inferentially far the idea of prior as part of preference is from the normal worldview here?)
0wnoise
Of being better at predicting what happens, of course.
2Vladimir_Nesov
You can't judge based on info you don't have. Based on what you do have, you can do no better than your current prior.
1PhilGoetz
But you can go and get info, and then judge, and say, "That prior that I held was wrong." You're speaking as if all truth were relative. I don't know if you mean this, but your comments in this thread imply that there is no such thing as truth. You've recently had other discussions about values and ethics, and the argument you're making here parallels your position in that argument. You may be trying to keep your beliefs about values, and about truths in general, in syntactic conformance. But rationally I hope you agree they're different.
2Vladimir_Nesov
It is only wrong not to update.
2wnoise
And, of course the priors must be updated the correct way. Nonetheless, it is greatly preferable to have a prior that led to decisions that gave high utility, rather than one that led to decisions that gave low utility. Of course this can't be measured "before hand". But the whole point of updating is to get better priors, in this exact sense, for the next round of decisions and updates.
0wnoise
I am in violent agreement.
3Vladimir_Nesov
A prior can't be judged. It's not assumed to be "correct". It's just the way you happen to process new info and make decisions, and there is no procedure to change the way it is from inside the system.
2cousin_it
Locked in, huh? Then I don't want to be a Bayesian.
3neq1
If someone was locked in to a belief, then they'd use a point mass prior. All other priors express some uncertainty.
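neq1's point can be checked with one line of Bayes' rule: a prior of exactly 1 (or 0) is a fixed point of updating, while any other prior moves with the evidence. A minimal sketch (the function name and the example likelihoods are illustrative assumptions, not from the thread):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Posterior for a single binary hypothesis by Bayes' rule.
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Strong evidence against the hypothesis (observation 99x likelier if false):
print(bayes_update(1.0, 0.01, 0.99))  # 1.0 -- a point-mass prior never moves
print(bayes_update(0.9, 0.01, 0.99))  # ~0.083 -- any other prior updates
```

So "locked in" in the belief sense really does require all the probability mass on one point; any prior that expresses uncertainty is pushed around by evidence.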
1Vladimir_Nesov
Since you are already locked into some preference anyway, you should figure out how best to compute within it (build a FAI).
3cousin_it
What makes you say that? It's not true. My preferences have changed many times.
3Vladimir_Nesov
Distinguish formal preference and likes. Formal preference is like a prior: both current beliefs and the procedure for updating the beliefs; beliefs change, but not the procedure. Likes are like beliefs: they change all the time, according to formal preference, in response to observations and reflection. Of course, we might consider jumping to a meta level, where the procedure for updating beliefs is itself subject to revision; this doesn't really change the game, you've just named some of the beliefs changing according to the fixed prior "object-level priors", and named the process of revising those beliefs according to the fixed prior "process of changing the object-level prior". When formal preference changes, it by definition means that it changed not according to (former) formal preference, that is, something undesirable happened. Humans are not able to hold their preference fixed, which means that their preferences do change, what I call "value drift". You are locked into some preference in a normative sense, not a factual one. This means that value drift does change your preference, but it is actually desirable (for you) for your formal preference to never change.
3cousin_it
I object to your talking about "formal preference" without having a formal definition. Until you invent one, please let's talk about what normal humans mean by "preference" instead.
0Vladimir_Nesov
I'm trying to find a formal understanding of a certain concept, and this concept is not what is normally called "preference", as in "likes". To distinguish from the word "preference", I used the label "formal preference" in the above comment to refer to this concept I don't fully understand. Maybe the adjective "formal" is inappropriate for something I can't formally define, but it's not an option to talk about a different concept, as I'm not interested in a different concept. Hence I'm confused about what you are really suggesting by For the purposes of FAI, what I'm discussing as "formal preference", which is the same as "morality", is clearly more important than likes.
2cousin_it
I'd be willing to bet money that any formalization of "preference" that you invent, short of encoding the whole world into it, will still describe a property that some humans do modify within themselves. So we aren't locked in, but your AIs will be.
2Vladimir_Nesov
Do humans modify that property, or find it desirable to modify it? The distinction between factual and normative is very important here, since we are talking about preference, the pure normative. If humans prefer different preference from a given one, they do so in some lawful way, according to some preference criterion (that they hold in their minds). All such meta-steps should be included. (Of course, it might prove impossible to formalize in practice.) As for the "encoding the whole world" part, it's the ontology problem, and I'm pretty sure that it's enough to encode preference about strategy (external behavior, given all possible observations) of a given concrete agent, to preserve all of human preference. Preference about external world or the way the agent works on the inside is not required.
0[anonymous]
What makes you say that Bayesians are locked in? It's not true. If they're presented with evidence for or against their beliefs, they'll change them.
0PhilGoetz
You're talking about posteriors. They're talking about priors, presumably foundational priors that for some reason aren't posteriors for any computations. An important question is whether such priors exist.
2[anonymous]
But your beliefs are your posteriors, not your priors. If the only thing that's locked in is your priors, that's not a locking-in at all.
0PhilGoetz
That's not obvious. You'd need to study many specific cases, and see if starting from different priors reliably predicts the final posteriors. There might be no way to "get there from here" for some priors. When we speak of the values that an organism has, which are analogous to the priors an organism starts with, it's routine to speak of the role of the initial values as locking in a value system. Why do we treat these cases differently?
2wnoise
That's obviously true for priors that initially assign probability zero somewhere. But as Cosma Shalizi loves pointing out, Diaconis and Freedman have shown it can happen for more reasonable priors too, where the prior is "maladapted to the data generating process". This is of course one of those questionable cases with a lot of infinities being thrown around, and we know that applying Bayesian reasoning with infinities is not on fully solid footing. And much of the discussion is about failure to satisfy Frequentist conditions that many may not care about (though they do have a section arguing we should care). But it is still a very good paper, showing that non-zero probability isn't quite good enough for some continuous systems.
0Jack
I have heard some argue for adjusting priors as a way of dealing with deductive discoveries since we aren't logically omniscient. I think I like that solution. Realizing you forgot to carry a digit in a previous update isn't exactly new information about the belief. Obviously a perfect Bayesian wouldn't have this issue but I think we can feel free to evaluate priors given that we are so far away from that ideal.
0Scott Alexander
But one man's prior is another man's posterior: I can use the belief that a medical test is 90% specific when using it to determine whether a patient has a disease, but I arrived at my beliefs about that medical test through Bayesian processes - either logical reasoning about the science behind the test, or more likely trying the test on a bunch of people and using statistics to estimate a specificity. So it may be mathematically wrong to tell me my 90% prior is false, but the 90% prior from the first question is the same 90% posterior from the second question, and it's totally kosher to say that the 90% posterior from the second question is wrong (and by extension, I'm using the "wrong prior") The whole reflective consistency thing is that you shouldn't have "foundational priors" in the sense that they're not the posterior of anything. Every foundational prior gets checked by how well it accords with other things, and in that sense is sort of a posterior. So I agree with cousin_it that it would be a problem if every Bayesian believed their prior to be correct (as in - they got the correct posterior yesterday to use as their prior today).
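The medical-test example in the comment above can be made concrete with Bayes' rule. The 90% specificity comes from the comment; the base rate and sensitivity below are assumed purely for illustration:

```python
def p_disease_given_positive(base_rate, sensitivity, specificity):
    # P(disease | positive test) by Bayes' rule.
    p_pos_and_disease = base_rate * sensitivity
    p_pos_and_healthy = (1 - base_rate) * (1 - specificity)  # false positives
    return p_pos_and_disease / (p_pos_and_disease + p_pos_and_healthy)

# 90% specificity (from the comment); 1% base rate and 95% sensitivity assumed.
print(p_disease_given_positive(0.01, 0.95, 0.90))  # ~0.088
```

An error in the estimated specificity propagates straight into this posterior, which is the sense in which "the 90% prior" can itself be wrong: it is the posterior of an earlier estimation process.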
0JGWeissman
Vladimir is using "prior" to mean a map from streams of observations to probability distributions over streams of future observation, not the prior probability before updating. Follow the link in his comment.
[-]anonym270

Everything is vague to a degree you do not realize till you have tried to make it precise.

Bertrand Russell

3RobinZ
Note: phaedrus has provided a citation to "The Philosophy of Logical Atomism", noting that this quote is only part of the sentence.
6phaedrus
Thanks, RobinZ. The full quote is "Everything is vague to a degree you do not realize till you have tried to make it precise, and everything precise is so remote from everything that we normally think, that you cannot for a moment suppose that is what we really mean when we say what we think." But the partial quote is much more crisp.
2anonym
Oooh, thanks to RobinZ and phaedrus! I hadn't seen the second part, and didn't have the citation.
[-]Rain270

The important work of moving the world forward does not wait to be done by perfect men.

-- George Eliot

[-]Rain230

Any technique, however worthy and desirable, becomes a disease when the mind is obsessed with it.

-- Bruce Lee

When I look around and think that everything's completely and utterly fucked up and hopeless, my first thought is "Am I wearing completely and utterly fucked up and hopeless-colored glasses?"

Crap Mariner (Lawrence Simon)

3sixes_and_sevens
The opposite of rose-tinted spectacles: shit-tinted shades.
[-]RobinZ200

You don't have to believe everything you think.

Seen on bumper sticker, via ^zhurnaly.

This is more important than it looks. Most people's beliefs are just recorded memes that bubbled up from their subconscious when someone pressed them for their beliefs. They wonder what they believe, their mind regurgitates some chatter they heard somewhere, and they go, "Aha, that must be what I believe." Unless they take special countermeasures, humans are extremely suggestible.

2phaedrus
"It is the mark of an educated mind to be able to entertain a thought without accepting it." --- Aristotle
[-]Thomas200

Wandering in a vast forest at night, I have only a faint light to guide me. A stranger appears and says to me: 'My friend, you should blow out your candle in order to find your way more clearly.' The stranger is a theologian.

  • Denis Diderot
7Pfft
But blowing out the candle actually would make it easier to find your way (it ruins your night vision).
4James_K
Not if the forest is sufficiently dark that your night vision doesn't have enough light to work with.
1Zubon
That seems like an easy case to test, provided you have some way to re-light the candle.
0roundsquare
You need to make two assumptions for the analogy: 1) You can't re-light the candle. 2) If you do things exactly right, you'll get out just before starving to death (or dying somehow); otherwise, you are dead.

"Institutions will try to preserve the problem to which they are the solution."

-- Clay Shirky

What can be asserted without evidence can be dismissed without evidence.

-- Christopher Hitchens

9Oscar_Cunningham
Well, clearly we can assert anything we want, so the quote becomes: And we notice that evidence doesn't change depending on whether you're considering something for belief or dismissal, so the quote becomes: So Hitchens is really telling us that prior probabilities tend to be small, which is true since there are almost always many possible hypotheses that the probability mass is split between.
1[anonymous]
You're assuming that probability mass tends to be split between stuff. This would be true, if all interesting statements were mutually exclusive or something. But consider the hypothesis that at least one statement in the Bible is true. This hypothesis is very complex, and yet its prior probability is very large.
3Lightwave
One thing that bugs me about this quote is that it isn't strong enough. It might give people the impression that it's up to the reader's opinion or personal preference to decide what to believe or not believe. They're allowed to believe in something they have no evidence for, you're allowed to dismiss it, everyone's happy.
1Jonathan_Graehl
Accuracy was sacrificed for a pleasant parallel construction. Anything can be so asserted.
5Strange7
And, without supporting evidence, such assertions demonstrate nothing.


The mere fact that an assertion has been made is, in fact, evidence. For example, I will now flip a coin five times, and assert that the outcome was THHTT. I will not provide any evidence other than that assertion, but that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5. Most assertions don't come with evidence provided unless you go looking for it. If nothing else, most assertions have to be unsupported because they're evidence for other things and the process has to bottom out somewhere.

Now, as a matter of policy we should encourage people to provide more evidence for their assertions wherever possible, but that is entirely separate from the questions of what is evidence, what evidence is needed, and what is demonstrated by an assertion having been made.
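The arithmetic behind "higher than 1/2^5" can be sketched with a toy model (the model is an illustrative assumption, not the commenter's): suppose the speaker reports the true sequence with probability t, and otherwise names a uniformly random one of the 32 possible sequences. The posterior that the outcome really was THHTT then works out to t + (1 - t)/32:

```python
def posterior_assertion_true(t, n=32):
    # Toy model: with probability t the speaker reports the true 5-flip
    # sequence; otherwise they name a uniformly random one of the n sequences.
    prior = 1 / n                          # P(outcome really was THHTT)
    p_claim_if_true = t + (1 - t) / n      # truthful, or randomly hit the truth
    p_claim_if_false = (1 - t) / n
    p_claim = prior * p_claim_if_true + (1 - prior) * p_claim_if_false
    return prior * p_claim_if_true / p_claim

print(posterior_assertion_true(0.9))  # ~0.903 -- far above the 1/32 prior
print(posterior_assertion_true(0.0))  # 0.03125 -- pure noise leaves you at 1/32
```

Any t > 0 lifts the posterior above the 1/32 prior, which is the point of the comment; the poker case raised in the replies corresponds to t near 0, where the assertion carries essentially no weight.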

6Jack
Well the evidence here isn't really "the fact that it has been asserted" but "the fact that it has been asserted in a context where truthfulness and authority are usually assumed". The assertion itself doesn't carry the weight. If we're playing poker and in the middle of a big hand I tell you "I have the best hand possible, you should fold." that isn't evidence of anything since it has been asserted in a context where assumptions about truthfulness have been flung out the window.
-1Jordan
Or it's sufficient to conclude that one's estimate should be less than 1/2^5. Without providing additional evidence (such as "I saw the THHTT outcome") your claim is rather dubious and -- in the realm of humans -- this is probably a good indicator that you are lying or are crazy. I'm not sure how one should update one's posteriors.
3[anonymous]
Suppose I tell you that my password is D!h98+3(dkE4. Do you conclude that since I don't want you to know my password, I must be trying to mislead you as to what my password is, and so the probability that this is my password is actually less than 1/95^12? If I assert that the outcome as THHTT, either I'm lying or I'm not lying, and there's little evidence either way. What little evidence there is probably doesn't push my probability of telling the truth below 3%, and surely the strength of the evidence has little, if anything, to do with the prior probability of the coin showing THHTT.
3Jordan
Good point. Thanks for batting down my idiocy here, much obliged =D
3Psychohistorian
"There are no married bachelors."
3SilasBarta
Tom and Sue, acquaintances through friends of theirs, got legally married, with no ceremony, in order for Tom to avoid being drafted to fight in a war. They barely know each other. They have not spoken to each other in a long time and (obviously) have no children. Neither wears a wedding ring. They plan to void the marriage as soon as the laws allow, with no further transfer of property between them. Tom is a married bachelor. ---------------------------------------- There's a reason the term "bachelor" exists, and it's not to make Kant right.
2Jack
This just looks like an instance of using contradictory language to indicate that Tom fits the conventional definitions of neither a bachelor nor a married man. You could also say Tom is a single spouse. Bachelor happens to have connotations of referring to lifestyle rather than legal status, which makes your meaning plainer. The fact that language is flexible enough to get around logic doesn't mean "married bachelor" isn't a logical contradiction or that Kant is wrong.
6SilasBarta
My point is that we have words because they call out a useful, albeit fuzzy, blob of conceptspace. We may try to claim that two words mean the same thing, but if there are different words, there's probably a reason -- because we want to reference different concepts ("connotations") in someone's mind. It's important to distinguish between the concepts we are trying to reference, vs. some objective equivalence we think exists in the territory. The territory actually includes minds that think different thoughts on hearing "unmarried" vs. "bachelor". ETA: My point regarding Kant was this: He should have seen statements like "All bachelors are unmarried" as evidence regarding how humans decide to use words, not as evidence for the existence of certain categories in reality's most fundamental ontology.
0Tyrrell_McAllister
By "certain categories in reality's most fundamental ontology", do you mean the synthetic/analytic distinction? He wouldn't consider that distinction to be part of reality's most fundamental ontology. He would disavow any ability to get at "fundamental reality", which he would consider to be intrinsically out of reach, locked away in the inaccessible numinous. Actually, he would say something very close to what you wrote when you said that he "should have seen statements like 'All bachelors are unmarried' as evidence regarding how humans decide to use words". What he would say is that the statement is evidence regarding how humans have decided to build a certain concept out of other concepts. If you affirm the assertion "All bachelors are unmarried" to yourself, then what you are doing, on Kant's view, is inspecting the concept "bachelor" in your own mind and finding the concept "unmarried" to be among its building blocks. The assertion is analytic because one confirms it to oneself in this way. Analyticity doesn't have to do with what the things you call bachelors are like in and of themselves. So it's not about fundamental reality. Rather, analysis is the act of inspecting how a concept is put together in your mind, and analytic assertions are just assertions that analysis can justify, such as that one concept is part of another concept. Kant would even allow that you could make a mistake while carrying out this inspection. You might think that "unmarried" was one of the original pieces out of which you had built "bachelor", when in fact you just now snuck in "unmarried" to form some new concept without realizing it. That is, you might have just unknowingly carried out an act of synthesis. Kant would say, though, that you can reach effective certainty if you are sufficiently careful, just as you can reach effective certainty about a simple arithmetical sum if you perform the sum with sufficient care. [The above is just to clarify Kant's claims, not to endorse
0Jack
I don't disagree with anything here.
0SilasBarta
Rockin'. I'd tie the point back to the original quotation, but I'm losing interest now and actually kind of busy...
0Psychohistorian
This is just playing with connotations. A bachelor is an unmarried man, so one could say that Tom acts like a bachelor despite being married. He is not a bachelor, though. To show this has a practical implication, assume Tom met Mary: the two could not get married immediately. If he were a bachelor, they could. He therefore lacks necessary properties of bachelorness (most significantly, not being married), and cannot be a bachelor, even if he may live his life much as a bachelor would.
2Tiiba
My dad has a Bachelor's degree.
0RolfAndreassen
Is he married?
1Tiiba
Yes, to mom.
1Psychohistorian
"There are no married unmarried men." I add this grudgingly, as deliberately seeking ambiguity in a clear sentence is just being fatuous; it's not a valid objection.
-4[anonymous]
.
1Psychohistorian
I was wrong. On further reflection, this is a failed attempt to refute this point, though I don't think the ensuing discussion of Kant actually gets to why. If you're familiar with the definition of bachelor, then this statement equates to, "There are no unmarried married men." Any statement of the form "No A are not-A" is completely uninformative. As it can be decided a priori for any consistent value of A, stating it demonstrates nothing. If you aren't clear on the meaning of bachelor, then this statement would require a citation of the definition in order to be convincing. This would constitute supporting evidence, and it would serve to demonstrate the meaning of "bachelor." Thus, this does not go to refute the claim that an assertion without supporting evidence demonstrates nothing, as that is clearly the case here.
[-]Rain180

If trees could scream, would we be so cavalier about cutting them down? We might, if they screamed all the time, for no good reason.

-- Jack Handey's Deep Thoughts

"All things end badly - or else they wouldn't end"

  • Brian Flanagan (Tom Cruise), Cocktail, 1988. He was referring to relationships, but it's actually a surprisingly general rule.
4Zubon
Almost all relationships end in unhappiness or death. Or unhappiness leading to death.
[-]RobinZ150

Blind alley, though. If someone's ungrateful and you tell him he's ungrateful, okay, you've called him a name. You haven't solved anything.

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

[-]Rain150

The word agnostic is actually used with the two distinct meanings of personal ignorance and intrinsic unknowability in the same context. They are distinguished when necessary with a qualifier.

WEAK agnosticism: I have no fucking idea who fucked this shit up.
STRONG agnosticism: Nobody has any fucking idea who fucked this shit up.

There is a certain confusion with weak atheism which could (and frequently does) arise, but that is properly reserved for the category of theological noncognitivists,

WEAK atheism: What the fuck do you mean with this God shit?
STRONG atheism: Didn't take any God to fuck this shit up.

which is different again from weak theism.

WEAK theism: Somebody fucked this shit up.
STRONG theism: God fucked this shit up.

An interesting cross-categorical theological belief not easily represented above is

DEISM: God set this shit up and it fucked itself.

-- Snocone, in a Slashdot post

0Oscar_Cunningham
Could someone explain why this has been voted up so much? I didn't find it particularly funny, or to have any non-trivial insight.
6Bo102010
It shoehorns the use of giggle-inducing curse words into an explanation of religious views. Someone who has only ever been exposed to Beavis and Butthead cartoons, and has never heard about "agnosticism," might be able to learn from this type of explanation.
0Rain
It presents a quick and easy, bullet-point spectrum of belief, which many people may not know exists. An anecdotal data point: I linked to this quote when talking to a friend who was using me to vent their anti-theist ideas since they didn't have many other outlets for such thoughts. They laughed, and were able to properly categorize their beliefs (weak atheist) for the first time, rather than thinking themselves some kind of heretic (evil atheist). That said, I didn't expect it to be this popular, either.

An atheist walked into a bar, but seeing no bartender he revised his initial assumption and decided he only walked into a room.

http://friendlyatheist.com/2008/02/29/complete-the-atheist-joke-1/

2Waldheri
My initial response was to chuckle, but when my analytical capacities kicked in a moment later I was disappointed. If his initial assumption was that he was walking into a bar, does that make him an atheist in this metaphor? Substitute "walked into a bar" with "believed there is a god", the thing I assume it is a metaphor for. You will see it makes no sense.
3AlexMennen
Many atheists were formerly theists. Still, I suppose it might have been better as "A scientist walked into what he thought was a bar, but seeing no bartender, barstools, or drinks, he revised his initial assumption and decided he only walked into a room."
3roundsquare
I think it makes sense, as a poke at atheists. Think about it this way. You walk into a bar, and you see no bartender. In your mind, you say "anything that is a bar will have a bartender. No bartender, not a bar." Of course, the best thing to do before revising your assumptions is to wait for a bartender. Maybe he/she is in the bathroom. Similarly, if you claim "there is no evidence of god that I've seen in my lifetime", you are using the wrong measure. Why should god (if there is one) make itself obvious during the short period that is a human lifetime? This is almost an "irrationality quote" instead of a rationality quote, but still enlightening.
3RobinZ
I was with you up until the "similarly". After that you start privileging the hypothesis - you should expect a god to make itself obvious during a human lifetime, by any description of a god ever proposed in history.
0roundsquare
I'm not sure I see how I'm privileging the hypothesis. Not saying that I'm not, but if you can explain how, I'd appreciate it. Aside from that, I think you are using "god" to mean any of the gods discussed by any popular religion. By this definition, I'd probably agree with you. I was using the word "god" in a much more general sense... not sure I can define it though, probably something similar to: any "being" that is omnipotent and omniscient, or maybe: any "being" that created reality as we know it. In either definition, there is not really a reason to expect god to make itself obvious to us on any timescale that we consider reasonable. There is no reason to believe that we are special enough that we'd get that kind of treatment.
1RobinZ
There is no reason to propose such a being - privileging the hypothesis is when you consider a hypothesis before any evidence has forced you to raise that hypothesis to the level of consideration. Unless you have a mountain of evidence (and I'm guessing it'll have to be cosmological to support a god that hasn't visibly intervened in the world) already driving you to argue that there might be a god, don't bother proposing the possibility.
0roundsquare
Ah, I see what you are saying. Thanks for the explanation. And you are indeed correct.

Do not imagine that mathematics is hard and crabbed, and repulsive to common sense. It is merely the etherealization of common sense.

William Thomson, Lord Kelvin

3gwern
One I got while reading Jaynes's Probability Theory recently: -- Laplace
[-]djcb140

The white line down the center of the road is a mediator, and very likely it can err substantially towards one side or the other before the disadvantaged side finds advantage in denying its authority.

-- Thomas Schelling, The Strategy of Conflict, p. 144

[The book was mentioned a couple of times here on LW, and is a nice introduction to the use of game theory in geopolitics]

It is always advisable to perceive clearly our ignorance.

Charles Darwin, "The Expression of the Emotions in Man and Animals", ch. 3.

"Torture the data long enough and they will confess to anything."

--via The Economist, "a saying of statisticians".

5gwern
--von Neumann
0RobinZ
I like it, but do you have an issue number?
4Mass_Driver
My father's been saying that as long as I can remember; he hasn't taken a statistics class since '82.
0RobinZ
Never mind, then!
2MichaelGR
Here is the piece I got it from: http://www.economist.com/specialreports/displaystory.cfm?story_id=15557465
0RobinZ
"A different game: Information is transforming traditional businesses", Feb 25th 2010 - thanks!

Are the winners the only ones actually writing the history? We need to disabuse ourselves of this habit of saying things because they sound good. -- Ta-Nehisi Coates

Coates runs a popular culture, black issues, and history blog with a very strong rationalist approach.

[Discarding game] theory in favor of some notion of collective rationality makes no sense. One might as well propose abandoning arithmetic because two loaves and seven fish won't feed a multitude. -- Ken Binmore, Rational Decisions

4cousin_it
I'm a big fan of Ken Binmore, and this quote captures a lot of my dissatisfaction with LW's directions of inquiry. For example, it's more or less taken for granted here that future superintelligent AIs should cooperate on the Prisoner's Dilemma, so some of us set out to create a general theory of "superintelligent AIs" (including ones built by aliens, etc.) that would give us the answer we like.
2Zubon
Would it be correct to say you mean "should" in the wishful thinking sense of "we really want this outcome," rather than something normative or probabilistic?
0cousin_it
Good question. The answer's yes, but now I'm wondering whether we really should expect alien-built AIs to be cooperators. I know Eliezer thinks we should.
1Baughn
That is not the impression I got from the story. The baby-eaters were cooperators, yes; they were also stated to be relatively similar to humanity except for their unfortunate tendency to eat preteens. The other ones, though? I didn't see them do anything obviously cooperative, but I did see a few events that'd argue against it. The overall impression I got was that we really can't be sure, except that it might be unlikely for both sides of a contact to come out unscathed.
0Nanani
Typo-hunt: should read "abandoning arithMetic" (without the capital of course)
0Nic_Smith
Fixed.
[-]aausch110

Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it.

-- Gautama Buddha

4ata
I like to point out that spreading this quote is an example of violating it: Buddha never said that. I'm not sure who did originally write it, but it's not found in any Buddhist primary source. "Do not believe in anything simply because it is spoken and rumored by many!" I've heard it might be a rough paraphrase of a quote from the Kalama Sutta, but in its original form, it would not qualify as a "rationality quote"; it's more a defense of belief in belief, advising people to accept things as true based on whether believing it is true tends to increase one's happiness. Edit: See RichardKennaway's reply; he is correct about this one. I think I was thinking of a different quote along similar lines.
7Richard_Kennaway
What is a Buddhist primary source? None of the discourses were written down until some centuries after the Buddha's time. The discourses that we have do themselves exist and whatever their provenance before the earliest extant documents, they are part of the canon of Buddhism. The canon has accreted layers over the centuries, but the Kalama Sutta is part of the earliest layer, the Tripitaka. You've heard? That it might be? :-) It is readily available online in English translation. It attributes these words directly to the Buddha: and in another translation: If I had the time, I'd be tempted to annotate the passage with LessWrong links. ETA: For the second translation, the corresponding paragraph is actually the one preceding the one I quoted. The sutta in fact contains three paragraphs listing these ten faulty sources of knowledge. Buddhist scriptures are full of repetitions and lists, probably to assist memorisation. ETA2: Rationalist version: Do not rest on weak Bayesian evidence, but go forth and collect strong.
2Jack
Great catch. Upvoted. I actually don't think this is right though. I'm pretty sure the original form is about the importance of personal knowledge from direct experience. I think the wikipedia article makes this clear, actually. I suppose you're taking your reading from: But the emphasis here should be on "when you yourselves know", not "these things lead to benefit and happiness". Keep in mind the kind of teachings being addressed are often strategies for happiness so it makes sense to be concerned with whether or not a teaching really does increase happiness. I don't see why we can't take it as an injunction to trust only experiment and observation. It seems about right to me. (ETA: Except of course he's talking about meditation not experiment and ignores self-deception, placebo effect, brain diversity and the all important intersubjective confirmation, but I'll take what I can get from the 5th century B.C.E.)

"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." ~William Kingdon Clifford

This is the quote that got me thinking about rationality as something other than "a word you use to describe things you believe so that you can deride those who disagree with you."

2RobinZ
One of the most insidious sources of confusion, I find, is the distinction between the meaning of a word and its most frequent uses. It ties into the whole "Applause Lights" phenomenon, particularly "Fake Norms". P.S. Belatedly: Welcome to Less Wrong! Feel free to introduce yourself in that thread.

Gall's Law:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

John Gall, "Systemantics"

8Peter_de_Blanc
The "inverse proposition" given is actually the contrapositive of (i.e. is equivalent to) the original statement.
6Eliezer Yudkowsky
Counterexample: Space shuttle.

Evolved from both simpler winged aircraft and simpler rockets.

All the base components that went into the space shuttle still existed on a line of technological progress from the basic to the advanced. Actually, the space shuttle followed Gall's Law precisely.

The lift mechanism was still vertically stacked chemical rockets of the sort that had already flown for decades. The shuttle unit was built from components perfected by the Gemini and Apollo programs, and packed into an aerodynamic form based on decades of aircraft design.

Reducing technologically, the shuttle still depends on simple systems like airfoils, rockets and nozzles, gears, and other known quantities.

Then if that qualifies, what would falsify Gall's Law?

8NMJablonski
Further reply: I was contemplating this exchange and wondering whether Gall's Law has any value (constrains expected experience). I think it does. If an engineer today claimed to have successfully designed an Alcubierre engine, I would probably execute an algorithm similar to Gall's Law and think: The technology does not yet exist to warp space to any degree, nor is there an existing power source which could meet the needs of this device. The engineer's claim to have developed a device which can be bound to a craft, controllably warp space, and move it faster than light is beyond existing technological capability. We are too many Gall Steps away for it to be probable.
3NMJablonski
The first development of the electronic circuit would have been a case of a complex technological system that worked, but was not based fundamentally upon existing simpler machines. The first use of chemical propulsion - gunpowder / rocketry - might have been a similar case. (EDIT: Upon further consideration, chemical propulsion is based upon the simpler technologies of airtight confinement and incendiary materials. However, I still think the electronic circuit was effectively the rise of a new fundamental device with complex behavior unconnected to more basic technologies. If anyone thinks they can reduce the circuit to simpler working devices I would be fascinated to explore that.) It's a good question. I'm turning over various possibilities in my mind. Do you still hold that the space shuttle falsifies it? If so, I'd be interested in hearing your reasoning, and other examples you consider similar.
4Strange7
Electroplating and electrolysis of water both involve a circuit, but aren't overwhelmingly complex. Samuel Thomas von Sömmerring's electrochemical telegraph was based on electrolysis. It's not like someone pulled doped silicon semiconductors straight out of the lightning-struck sand.
0NMJablonski
True, +1 for a thoughtful answer. However, I still don't see the circuit as reducible to simpler working components. Regardless of the medium across which the current flows, it still seems to me that the circuit is a simple machine - a basic device like the pulley, joint, inclined plane, or lever. In considering this, I also think that chemical fuels are simple machines and belong on that list, as they are ostensibly devices (can be used by an agent to do work) but also aren't reducible to simpler working components.
1Tyrrell_McAllister
Basically, the shuttle is a system of rockets carrying a space-worthy airplane as payload. Both of these components had predecessors. Had the shuttle been the first rocket or first space-worthy airplane, it would have falsified Gall's Law.
1NMJablonski
I'm not sure. Isn't the first rocket or airplane also built on simple technologies? Couldn't one continue to reduce components to simpler devices until you get to basic joints, inclined planes, tensors (springs), incendiary materials (fuel), etc - that all would have had to be developed and understood before an engineer could design the rocket / airplane? (EDIT: I realize that I'm essentially positing that Gall's Law holds if all technology should be reducible to simple machines, and that what we call "technology" is improving, refining, and combining those designs.)
2Tyrrell_McAllister
I'm not saying that the first rocket and first airplane falsified Gall's Law. I'm saying that, had the space shuttle, in the form in which it was actually built, been the first rocket or the first airplane, it would have falsified Gall's Law.
7gregconen
Suppose a hyperintelligent alien race did build a space shuttle equivalent as their first space-capable craft, and then went on to build interplanetary and interstellar craft. Alien 1: The [interstellar craft, driven by multiple methods of propulsion and myriad components] disproves Gall's Law. Alien 2: Not at all. [Craft] is a simple extension of well-developed principles like the space shuttle and the light sail. You can simply define a "working simple system" as whatever you can make work, making that a pure tautology.
6Emile
I would say that Gall's Law is about the design capacities of human beings (like Dunbar's Number), or is something like "there's a threshold to how much new complexity you can design and expect to work", with the amount of complexity being different for humans, superintelligent aliens, chimps, or Mother Nature. (the limit is particularly low for Mother Nature - she makes smaller steps, but got to make many more of them)
1gregconen
That's not my point. My point is that Gall's Law is unfalsifiable by anything short of Omega converting its entire light cone into computronium/utilium in a single, Planck-time step. Edit: Not to say that Gall's Law can't be useful to keep in mind during engineering design.
2NMJablonski
I agree. All of these concepts are imprecisely connected to the real world. Does anyone have an idea for how we could more precisely define Gall's Law to more ably discuss real expected experience? I'm considering a definition which might include the phrase: "Reducible to previously understood components"
3gwillen
I think the key insight here is that you get a limited number of bits, in design space, to bridge between things that have already been shown to work, and things that have yet to be shown to do so. For purposes of Gall's law, we are interested in the number of bits of design that went into the space shuttle without ever having been previously shown to work. So you have to subtract off the complexity of "the idea of an airplane", which we already had, and of the solid fuel booster rockets, which we already knew how to build; and also of any subassembly which got built and tested successfully in a lab first -- but perhaps leaving some bits or fraction of a bit to account for the unknown environment when using them on the real shuttle, versus in the lab.
1Tyrrell_McAllister
That is a very helpful way to put it: "Gall's Law" is the claim that there is this limited number of bits. Of course, put so clearly, it looks kind of trivial, so I think that we should read Gall as further saying that you can get a reasonable intuitive bound on this limit by just looking at the history of innovation, but that people often propose designs when a little reasonable reflection would have shown them that they are proposing to step far beyond this limit.
0NMJablonski
This is an excellent idea - quantifying bits of design information. It would also demonstrate that if a designer started at the "space shuttle" level of complexity, and laid out a rough design, that design would probably change drastically as the components were built and tested, and the designer collected more bits of information about how to make the complex system work.
0NMJablonski
Ah, I understand. Total agreement.
-1soreff
The Columbia shuttle crew would still be with us if this were correct.
0NMJablonski
True, the space shuttle was not completely contained on its vertical axis, but I was talking about the boosters themselves. I said the lift mechanism was a vertically stacked chemical rocket, not that the entire shuttle was a uniform tower, as it obviously wasn't. The boosters are components of the space shuttle, which is what we were talking about: simpler working components evolving into complex systems. Simple working component: Rocket booster Complex system: Shuttle with a crew module, fuel tanks, and multiple boosters
8RolfAndreassen
In addition to NMJablonski's point, it is perhaps arguable just how well the Space Shuttle worked. In hindsight it seems that the same amount of orbital lift capacity could have been achieved rather more cheaply.
3JulianMorrison
It works for a job it isn't used for: launching into a polar orbit to emplace secret military satellites, and gliding a very long distance back to base without a need for a splashdown recovery that might risk its secrecy. That's what gave it the wings, and once you have the wings the rest of the design follows.
7cousin_it
It doesn't qualify 100%, because there were little prototype shuttles. Still, you have a point. If we have good theories, we can build pretty big systems from scratch. Gall's law resonates especially strongly with programmers because much of programming doesn't have good theories, and large system-building endeavors fail all the time.
3NMJablonski
Even if there hadn't been prototype shuttles, the shuttle is still reducible to simpler components. Gall's Law just articulates that before you can successfully design something like the space shuttle you have to understand how all of its simpler components work. If an engineer (or even transhuman AI) had sat down and started trying to design the space shuttle, without knowledge of rocketry, aerodynamics, circuits, springs, or screws, it would be pulling from a poorly constrained section of the space of possible designs, and would be unlikely to get something that works. The way this problem is solved is to work backwards until you get to simple components. The shuttle designer realizes his shuttle will need wings, so starts to design the wing, realizes the wing has a materials requirement, so starts to develop the material. He continues to work back until he gets to the screws and rivets that hold the wing together, and other simple machines. In engineering, once you place the first atom in your design, you have already made a choice about atomic mass and charge. Complex patterns of atoms like space shuttles will include many subdivisions (components) that must be designed, and Gall's Law illustrates that they must be designed and understood before the designer has a decent chance of the space shuttle working.
4cousin_it
I think you completely miss the point of Gall's law. It's not about understanding individual components. Big software projects still fail, even though we understand if-statements and for-loops pretty well.
1NMJablonski
I know that. It's about an evolution from simpler systems to more complex systems. Various design phases of the space shuttle aren't what falsify that example. It's the evolution of rocket propulsion, aircraft, and spacecraft, and their components. (EDIT: Also, at no point was I suggesting that understanding of components guarantees success in designing complex systems, but that it is necessary. For a complex system to work it must have all working components, reduced down to the level of simple machines. Big software projects would certainly fail if the engineers didn't have knowledge of if-statements and for-loops.)
3kodos96
Really? I think only 6 of them were built, and 2 of those suffered catastrophic failure with all hands lost.
0Lightwave
Counterexample: a complex computer program designed and written from scratch.

I've written some of those. And every time, I test everything I write as I go, so that at every stage from the word go I have a working program. The big bang method, of writing everything first, then running it, never works.

1sketerpot
The "big bang" sometimes happens to me when I write in Haskell. After I fix all the compiler errors, of course. I just wish there were a language with a type system that can detect almost as many errors as Haskell's without having quite such a restrictive, bondage-fetish feel to it. But yeah, in general, only trivial programs work the first time you run them. That's a good definition of trivial, actually.
7pjeby
...and that worked the very first time? How often does that happen? The quote is a rule of thumb and an admonition to rational humility, not a law of the universe.
3Lightwave
Well "never works and cannot be made to work" does sound a bit strong to me.
3NMJablonski
I agree it's probably not a law of the universe, as I cannot rule out possible minds that could falsify it. However, I cannot from within my mind (human capabilities) see a case where a complex system could work before each of its parts had been made to work.
[-]Piglet100

"Face the facts. Then act on them. It's the only mantra I know, the only doctrine I have to offer you, and it's harder than you'd think, because I swear humans seem hardwired to do anything but. Face the facts. Don't pray, don't wish, don't buy into centuries-old dogma and dead rhetoric. Don't give in to your conditioning or your visions or your fucked-up sense of... whatever. FACE THE FACTS. THEN act."

--- Quellcrist Falconer, speech before the assault on Millsport. (Richard Morgan, Broken Angels)

-1[anonymous]
My personal favorite from this trilogy is the whole "They say it's not personal, it's business. Well it's personal for us. And we must make it personal for them." (paraphrased)

"In the animal kingdom, the rule is, eat or be eaten; in the human kingdom, define or be defined."

Thomas Szasz

[-]anonym100