Another monthly installment of the rationality quotes thread. The usual rules apply:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
  • No more than 5 quotes per person per monthly thread, please.

In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either "f" or "d" and would predict which key they were going to push next. It's actually very easy to write a program that will make the right prediction about 70% of the time. Most people don't really know how to type randomly. They'll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn't even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he "just used his free will."

-- Scott Aaronson
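A crude predictor of the kind Aaronson describes can be sketched as a simple n-gram counter: remember what key tended to follow each short context of recent keys, and predict the majority. This is an illustrative reconstruction, not his actual program; the context length and tie-breaking rule are arbitrary choices.

```python
from collections import defaultdict

class KeyPredictor:
    """Predict the next key ('f' or 'd') from the last few keys typed."""

    def __init__(self, order=3):
        self.order = order
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""

    def _context(self):
        return self.history[-self.order:]

    def predict(self):
        # Most frequent follow-up to the current context; 'f' on ties.
        c = self.counts[self._context()]
        return "f" if c["f"] >= c["d"] else "d"

    def update(self, key):
        # Record what actually followed the context, then extend the history.
        self.counts[self._context()][key] += 1
        self.history += key
```

Against a typist who falls into any repeating pattern (such as strict alternation), this model quickly exceeds 50% accuracy, which is all the anecdote requires.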

Holy Belldandy, it sounds like someone located the player character. Everyone get your quests ready!

Woah, I'd better implement Phase One of my evil plan if it's going to be ready in time for the hero to encounter it.

Omg, what do I do?! I can't find my random encounter table!

Just act life-like!

Don't worry, you are the random encounter.
And rewards.
??? Someone located his inner d2!

My bet is that the student had many digits of pi memorised and just used their parity.

I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you've memorized, but you'd have to figure out how to convert it to bits on the fly.

Personally I'd recommend generating a few random hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you'll always be able to act unpredictably.
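The conversions mentioned are mechanical; here is a minimal sketch (the hex value is purely illustrative, and note that reducing mod 6 is slightly biased whenever the range isn't a power of 2):

```python
# Illustrative only: in practice you'd memorize your own randomly generated value.
hex_secret = "9f2a"
value = int(hex_secret, 16)
bits = bin(value)[2:].zfill(len(hex_secret) * 4)  # 16 bits
die_roll = value % 6 + 1  # a number in 1..6 (slightly biased for non-power-of-2 ranges)
```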

Well, unpredictably to a computer program. If you want to be able to be unpredictable to someone who's good at reading your next move from your face, you would need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, and delay calculating the next bit until the best moment. Of course, you wouldn't actually choose pi, but something less well-known and preferably easier to calculate. I don't know any such algorithms, and I guess if anyone knows a good one, they're not likely to share. But if it were something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn't share your seed. If anyone's thought about this in more depth and is willing to share, I'm interested.
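A shareable seeded bitstream generator of the sort described might look like a tiny Lehmer LCG. The multiplier and modulus below are illustrative toys, chosen to be small enough to iterate by mental arithmetic; real cryptographic use would demand far better parameters.

```python
def lehmer_bits(seed, n, a=7, m=101):
    """Tiny Lehmer LCG: x -> a*x mod m; emit the parity of each state.
    Parameters are illustrative, small enough to run in your head."""
    bits = []
    x = seed % m
    for _ in range(n):
        x = (a * x) % m
        bits.append(x % 2)
    return bits
```

The algorithm (a and m) could be public; only the seed need stay secret, exactly as the comment suggests.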

That's awesome, thanks.
Awesome. I tried doing that when I was a child but naturally failed.

When I need this I just look at the nearest object. If the first letter is between a and m, that's a 0. If it's between n and z, that's a 1. For larger strings of random bits, take a piece of memorized text (like a song you like) and do this with the first letter of each word.
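The letter trick above is easy to write down as a sketch (a through m maps to 0, n through z to 1, one bit per word's first letter):

```python
def first_letter_bits(text):
    """One bit per word: 0 if its first letter falls in a-m, else 1."""
    return [0 if w[0].lower() <= "m" else 1
            for w in text.split() if w[0].isalpha()]
```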

Said Achmiz:
There's an easier way: look at the time. Seconds are even? Type 'f'. Odd? Type 'd'. (Or vice-versa. Or use minutes, if you don't have to do this very often.) A while ago there was an article (in NYTimes online, I think) about a program that could beat anyone in Rock-Paper-Scissors. That is, it would take a few iterations, and learn your pattern, and do better than chance against you. It never got any better than chance against me, because I just used the current time as a PRNG. Edit: Found it. [] Edit2: Over 25 rounds, 12-6-7 (win-loss-tie) vs. the "veteran" computer. Try it and post your results! :)
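The clock scheme is trivially mechanical; something like the following (the function name and structure are just an illustration of the rule stated above):

```python
import time

def clock_key(t=None):
    """'f' on even seconds, 'd' on odd, per the scheme above."""
    if t is None:
        t = time.time()
    return "f" if int(t) % 2 == 0 else "d"
```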
Over 12 rounds against the veteran computer, I managed 5-4-3, just trying to play "counterintuitively" and play differently from how I expected the players whose information it aggregated would play. Not enough repetitions to be highly confident that I could beat the computer in the long term, but I stopped because trying to be that counterintuitive is a pain.
Got 7-6-7 with the same tactic. Apparently the computer only looks at the last 4 throws, so as long as you're playing against Veteran (where your own rounds will be lost in the noise), it should be possible for a human to learn "anti-anti-patterns" and do better than chance.
14-11-14 over 39 rounds using gwern's linked prng (p=69, m=6, seed=minutes+seconds). Yet another cool trick to impress psychology professors!
I got 8-9-7 over 25 rounds (which seems approximately as good as chance) while trying to be smart (and not using any source of randomness). Edit: I guess this was actually 24 rounds.
19-18-13 over 50 rounds against the veteran, without using any external RNG, by looking away and thinking of something else so that I couldn't remember the results of previous rounds. (My after-lunch drowsiness probably helped.)
10-5-10 against veteran by trying to predict the computer and occasionally changing levels of recursion. Second try: 14-16-15 by trying to act randomly (without consciously using an algorithm).
9-6-10 here out of 25 rounds [], using current time. :( I remember doing way better than this a few months ago, just by playing naturally. Gonna blame sample size...
Somehow managed 16-8-5 versus the veteran computer by using the article's own text as a seed ("Computers mimic human reasoning by building on simple rules...") and applying a-h = rock, i-p = paper, q-z = scissors. I think this is the technique I will use against humans (I know a few people I would love to see flail against pseudo-randomness).
That should fail in the long run because it's unlikely that the frequency of letters in English divides so evenly that those rules make each choice converge to happening exactly 1/3 of the time. I'd just generate the random numbers in my head. A useful thing to do is to pick a couple of numbers from thin air (which doesn't work by itself because the human mind isn't good at picking 'random' numbers from thin air), then add them together and take the last digit (or, if you want 3 choices, take the sum mod 3).
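The sum-then-mod trick described above is a one-liner (the idea being that combining two independently picked numbers smooths out some of the bias in either pick alone, assuming the picks really are made separately):

```python
def mental_pick(a, b, n=3):
    """Combine two numbers picked 'from thin air'; return a choice in 0..n-1."""
    return (a + b) % n
```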
That'll be almost independent but not unbiased: I think that a-m will be more frequent than n-z. However, you could do the von Neumann trick: if you have an unfair coin and want a fair sequence of bits, take the first and second flips. HT is 0, TH is 1, and if you get HH or TT, check the third and fourth flips. Etc.
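The von Neumann trick mentioned above can be sketched directly; `flip` here is a stand-in for any biased source of independent coin flips:

```python
import random

def fair_bit(flip):
    """von Neumann extractor: HT -> 0, TH -> 1; discard HH and TT pairs."""
    while True:
        a, b = flip(), flip()
        if (a, b) == ("H", "T"):
            return 0
        if (a, b) == ("T", "H"):
            return 1

# Example: a coin that lands heads 80% of the time.
random.seed(0)
biased = lambda: "H" if random.random() < 0.8 else "T"
bits = [fair_bit(biased) for _ in range(1000)]
```

Because HT and TH are equally likely for any fixed bias (as long as flips are independent), the emitted bits are exactly fair; the price is throwing away the HH and TT pairs.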
I just looked up the letter frequencies and it's 52% for a-m and 48% for n-z (for the initial letters of English words). Using 'l' instead of 'm' gives a 47/53 split, so 'm' is at least the best letter to use.
[Aside] When do you need to generate random numbers in your head? I can think of literally no time when I've needed to.
If you have to make a close decision and don't have a coin to flip. Or at a poker tournament if you don't trust your own ability to be unpredictable.
There once was some site that let you enter a sequence of “H” and “T” and test it for non-randomness (e.g. the distribution of the length of runs, the number of alternations, etc.), and after a couple attempts I managed to pass all or almost all the tests a few times in a row.
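One of the simpler tests such a site might run is the alternation count: for a fair i.i.d. sequence of length n, the number of adjacent alternations has mean (n-1)/2 and variance (n-1)/4, so a rough z-score falls out directly. A minimal sketch:

```python
def alternation_z(s):
    """Rough z-score: how far the alternation count sits from fair-coin expectation."""
    n = len(s)
    alts = sum(1 for a, b in zip(s, s[1:]) if a != b)
    expected = (n - 1) / 2
    sd = ((n - 1) / 4) ** 0.5
    return (alts - expected) / sd
```

Humans typing "randomly" tend to score high here (too many alternations), which is exactly the pattern the Aaronson quote at the top of the thread exploits.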

There once was a hare who mocked a passing tortoise for being slow. The erudite tortoise responded by challenging the hare to a race.

Built for speed, and with his pride on the line, the hare easily won - I mean, it wasn't even close - and resumed his mocking anew.

Winston Rowntree, Non-Bullshit Fables

I've always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.

A1987dM:
On the meta-level, I'm not sure "quickness beats persistence" is a helpful lesson to teach. At the scale of things many LessWrongers would hope to help accomplish, both qualities are prerequisites, and it would be a mistake to believe that you don't have to worry about the latter just because you're one of the millions of people who are 99.9th percentile at the former. On the base level, a non-bullshit version of this fable would look more like "There once was a hare being passed by a tortoise. Neither of them could talk. The end."
Now that you mention it, a fable, by definition, requires bullshit.
"Moral: life is inarguably a depressingly unfair endeavor."
What's unfair about that quote? The faster one did win. This [] would exemplify your moral.
"Fairness" depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it's fair that the hare wins. But why does the hare get to be better at racing in the first place? Debates about what is and isn't fair are best framed as debates over what to condition on, because that's where most of the disagreement lies. (As is the case here, I suppose).
The quote is the next line from the quote source.
Huh, okay.
On a similar note, there's [] - search for "Act Two".
BillyOblivion: Sorry, saw it earlier today and couldn't resist.

"The peril of arguing with you is forgetting to argue with myself. Don’t make me convince you: I don’t want to believe that much."

  • Even More Aphorisms and Ten-Second Essays from Vectors 3.0, James Richardson

The others are quite nice too:

That link is now broken. It turns out it was a highly incomplete excerpt from "Vectors 3.0" so I've put By the Numbers on Libgen and put up a complete version taken from the book. (I like some of the aphorisms, so I've ordered the other 2 books to scan as well.)

Jack Sparrow: [after Will draws his sword] Put it away, son. It's not worth you getting beat again.

Will Turner: You didn't beat me. You ignored the rules of engagement. In a fair fight, I'd kill you.

Jack Sparrow: Then that's not much incentive for me to fight fair, then, is it? [Jack turns the ship, hitting Will with the boom]

Jack Sparrow: Now as long as you're just hanging there, pay attention. The only rules that really matter are these: what a man can do and what a man can't do. For instance, you can accept that your father was a pirate and a good man or you can't. But pirate is in your blood, boy, so you'll have to square with that some day. And me, for example, I can let you drown, but I can't bring this ship into Tortuga all by me onesies, savvy? So, can you sail under the command of a pirate, or can you not?

--Pirates of the Caribbean

The pirate-specific stuff is a bit extraneous, but I've always thought this scene neatly captured the virtue of cold, calculating practicality. Not that "fairness" is never important to worry about, but when you're faced with a problem, do you care more about solving it, or arguing that your situation isn't fair? What can you do, and what can't you do? Reminds me of What do I want? What do I have? How can I best use the latter to get the former?

That said, if I recognize that I'm in a group that values "fairness" as an abstract virtue, then arguing that my situation isn't fair is often a useful way of solving my problem by recruiting alliances.

If you're in a group where "that's not fair" is frequently a winning argument, you may already be in trouble.

I am in many groups where, when choosing between two strategies A and B, fairness is one of the things we take into account. I'm not sure that's a problem.

If it's a frequently-occurring observation within the group then yes, there seems to be something wrong. Possibly because things are regularly proposed and acted on without considering fairness until someone has to point it out. If it hardly ever has to be said, but when pointed out, it is often persuasive, you're probably OK.

The pirate-specific stuff is a bit extraneous

Jack Sparrow: The only rules that really matter are these: what a [person] can do and what a [person] can't do. For instance, you can accept that [different customs from yours are traditional and commonly accepted in the world] or you can't. But [this thing you dislike] is [an inevitable feature of your human existence], boy, so you'll have to square with that some day ... So, can you [ally with somebody you find distasteful], or can you not?

Even more generally it can be taken as a paraphrasing of the Litany of Gendlin []
Frankly this is precisely the kind of ruthless pragmatism that gives utilitarians such a horrible reputation.

Well, it certainly didn't stop Jack Sparrow from being a beloved character.

You can be ruthless and popular, if you're sufficiently charismatic about it.

It also helps to be fictional, or at least sufficiently removed from the target audience that they perceive you in far mode.

I'd say that it's possible to be ruthless and popular even among people who're familiar with you, as long as you keep your ruthlessness in far mode for the people you're attempting to cultivate popularity amongst. Business executives come to mind, and the more cutthroat strains of social maneuverers.
Dunno mate, I could name a few US Presidents and non-US leaders.
Mmm, that's a good point. Potentially - if people know you're going to play according to a higher rule or purpose, rather than following feelings, then how much are they going to trust that you're really going to exercise that rule on their behalf?

It'd be like the old argument that people should be allowed to kidnap people off the streets and take their organs - because when you average it out any individual is more likely to need an organ than be the one kidnapped, so it's the better gamble for everyone to make. But we don't really imagine it that way; we all see ourselves being the ones dragged off the street and cut up, or that people with unpopular political opinions would be the ones. You can't trust someone who'd come up with that sort of system not to be playing a different game, because they've already shown you can't trust their compassionate feelings to work as bounds on their actions. Maybe any friendship they express means as little to them as the poor guy they just butchered.

I wonder how much of it is a trust problem though, and how you'd resolve that. It seems to me that if you knew someone really well, or they didn't seem to be grasping power, they could get away with being ruthless. People seem almost to gloat about how ruthless specops folks and the like are.
My impression is that whistle-blowers tend not to be trusted. It's not as though other businesses line up to hire them. I think the problem is having moral systems which impose high local costs.

More specifically, one thing I learned from Terry that I was not taught in school is the importance of bad proofs. I would say "I think this is true", work on it, see that there was no nice proof, and give up. Terry would say "Here's a criterion that eliminates most of the problem. Then in what's left, here's a worse one that handles most of the detritus. One or two more epicycles. At that point it comes down to fourteen cases, and I checked them." Yuck. But we would know it was true, and we would move on. (Usually these would get cleaned up a fair bit before publication.)

-Allen Knutson on collaborating with Terence Tao

At that point I'd start wondering why there doesn't appear to be a simple proof. For example, maybe some kind of generalization of the result is false and you need the complexity to "break the correspondence" with the generalization.
(meta) Saith the linked site: “You must sign in to read answers past the first one.” Well, that's obnoxious.
If it's any consolation, none of the answers past the first one on this question are very good.
Well, there are only 2.
-Same place

A remarkable aspect of your mental life is that you are rarely stumped. True, you occasionally face a question such as 17 × 24 = ? to which no answer comes immediately to mind, but these dumbfounded moments are rare. The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it. Whether you state them or not, you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.

Daniel Kahneman, Thinking, Fast and Slow

As far as I can tell this doesn't agree with my experience; a good chunk of every day is spent in groping uncertainty and confusion.
Come and take my herb? []
Those moments send me into panic attacks. (At least when they're on significant topics not on maths).
Math is a significant topic!
*Topics where my inability to work out the answer immediately implies a lack of ability or puts me at risk.
Unless you took John Leslie []'s advice and Ankified the multiplication table up to 25.

I've read your link to John Leslie with both curiosity and bafflement.

17 x 24 is not perhaps the best example of a question for which no answer comes immediately to mind. Seventeen has the curious property that 17 x 6 = 102. (The recurring decimal 1/6 = 0.166666... hints to us that 17 x 6 = 102 is just the first of a series of near misses on a round number, 167 x 6 = 1002, 1667 x 6 = 10002, etc). So multiplying 17 by any small multiple of 6 is no harder than the two times table. In particular 17 x 24 = 17 x (6 x 4) = (17 x 6) x 4 = 102 x 4 = 408.

17 x 23 might have served better, were it not for the curious symmetry around the number 20, with 17 = 20 - 3 while 23 = 20 + 3. One is reminded of the identity (x + y)(x - y) = x^2 - y^2 which is often useful in arithmetic and tells us at once that 17 x 23 = 20 x 20 - 3 x 3 = 400 - 9 = 391.

17 x 25 has a different defect as an example, because one can hardly avoid apprehending 25 as one quarter of 100, which stimulates the observation that 17 = 16 + 1 and 16 is full of yummy fourness. 17 x 25 = (16 + 1) x 25 = (4 x 4 + 1) x 25 = 4 x 4 x 25 + 1 x 25 = 4 x 100 + 25 = 425.

17 x 26 is a better example. Nature has its little jokes. 7 x 3 = 21 the...
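The shortcuts in this comment are easy to verify mechanically:

```python
# 17 x 24 via 17 x 6 = 102
assert 17 * 6 == 102 and 102 * 4 == 408 and 17 * 24 == 408
# 17 x 23 via the difference of squares around 20
assert 17 * 23 == 20 * 20 - 3 * 3 == 391
# 17 x 25 via quarters of 100 (17 = 16 + 1, 16 = 4 x 4)
assert 17 * 25 == 4 * 100 + 25 == 425
```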

I'm not sure exactly what he had in mind, but learning the multiplication tables using Anki isn't exactly rote.

Now, this may not be the case for others, but when I see a new problem like 17 x 24, I don't just keep reading off the answer until I remember it when the note comes back around. Instead, I try to answer it using mental arithmetic, no matter how long it takes. I do this by breaking the problem into easier problems (perhaps by multiplying 17 x 20 and then adding that to 17 x 4). Sooner or later my brain will simply present the answers to the intermediate steps for me to add together and only much later do those steps fade away completely and the final answer is immediately retrievable.

Doing things this way, simply as a matter of course, you develop somewhat of a feel for how certain numbers multiply and develop a kind of "friendship with the integers." Er, at least, that's what it feels like from the inside.

That's not the important point. Even if you have, you will still face the same problem when facing a question like, for example, say 34 × 57 = ?. The quote was using that particular problem as an example. If that example does not apply to you because you Ankified the multiplication table up to 25 or for any other reason, it is trivial to find another problem that gives the desired mental response. (As I just did with the 34 × 57 problem.)
Agreed. I'm not so much disagreeing with the thrust of the quote as nitpicking in order to engage in propaganda for my favorite SRS.
Of course, even if I have no complete answer to 34 × 57, I still have "intuitive feelings and opinions" about it, and so do you. For example, I know it's between 100 and 10000 just by counting the digits, and although I've just now gone and formalized this intuition, it was there before the math: if I claimed that 34 × 57 = 218508 then I'm sure most people here would call me out long before doing the calculation.
What has this got to do with the original quote? The quote was claiming, truthfully or not, that when one is first presented with a certain type of problem, one is dumbfounded for a period of time. And of course the problem is solvable, and of course even without calculating it you can get a rough picture of the range the answer is in, and with a certain amount of practice one can avoid the dumbfoundedness altogether and move on to solving the problem, and that is a fine response to give to the original quote, but it has no relevance to what I was saying. All I was saying is that it is an invalid objection to object to the quote based on the fact that with a certain technique the specific example given by the quote can be avoided, as that example could have easily been replaced by a similar example which that technique does not solve. I was talking about that specific objection. I was not saying the quote is perfect, or even that it is entirely right. You may raise these other objections to it. But the specific objection that Jayson_Virissimo raised happens to be entirely invalid.
I'm a little perplexed that I haven't got the multiplication table up to 25 memorized, given the number of times I've multiplied any two numbers under 25.
I'm curious - what advantage do you get from this?
So far, mostly the ability to perform entertaining parlor tricks (via mental arithmetic and a large body of facts about the countries of the world). I admit, it is not very impressive, but not useless either. In other words, nothing you couldn't do in a few minutes with a smartphone (although, I imagine, that would tend to ruin the "trick").

Don’t settle. Don’t finish crappy books. If you don’t like the menu, leave the restaurant. If you’re not on the right path, get off it.

--Chris Brogan on the Sunk Cost Fallacy

If there is another one next door, maybe. If it is much farther than that the menu would have to be fairly bad. ... if there is a sufficiently convenient alternative and the difference is significant.
I think you are using settle in its more precise meaning (i.e. release a legal claim), which is not consistent with the colloquial usage. Colloquially, "settle" is often used as the antonym of "take reasonable risks." Similarly, I think the difference between "don't like the menu" and "fairly bad" is hairsplitting for someone who would find this level and type of advice useful. In just about any city, the BATNA [] is "travel to another place to eat, getting no further from your home than you were at the first place." And that's a pretty good alternative. I think the quote correctly asserts that the alternative is underrated.
While I assert that the quote advocates premature optimization []. It distracts from actual cases of the sunk cost fallacy by warning against things that often just are not worth fixing.

If knowledge can create problems, it is not through ignorance we can solve them.

-- Isaac Asimov

For some interesting exceptions to this quote, see Bostrom on Information Hazards [].

Within the philosophy of science, the view that new discoveries constitute a break with tradition was challenged by Polanyi, who argued that discoveries may be made by the sheer power of believing more strongly than anyone else in current theories, rather than by going beyond the paradigm. For example, the theory of Brownian motion, which Einstein produced in 1905, may be seen as a literal articulation of the kinetic theory of gases at the time. As Polanyi said:

Discoveries made by the surprising configuration of existing theories might in fact be likened to the feat of a Columbus whose genius lay in taking literally and as a guide to action that the earth was round, which his contemporaries held vaguely and as a mere matter for speculation.

― David Lamb & Susan M. Easton, Multiple Discovery: The pattern of scientific progress, pp. 100-101

Columbus's "genius" was using the largest estimate for the size of Eurasia and the smallest estimate for the size of the world to make the numbers say what he wanted them to. As normally happens with that sort of thing, he was dead wrong. But he got lucky and it turned out there was another continent there.

Wait... he did that on purpose?

Yes, actually. He believed the true dimensions of the Earth would conform to his interpretation of a particular Bible verse (two-thirds of the earth should be land, and one-third water, so the Ocean had to be smaller than believed) and fudged the numbers to fit.

Ah, OK. I had taken DanielLC to be implying that he had fudged the numbers in order to convince the Spanish queen to fund him.
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2. The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.

Perhaps Columbus's "genius" was simply to take action. I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn't, but the success rate is a lot better than for people who never take action at all.

Executives and higher ranking military officers also happen to have the power to enforce their decisions. Making decisions and acting on them can be possible without that power but the political skill required is far greater, the rewards lower, the risks of failure greater and the risks of success non-negligible.
This is how Scott Sumner describes his own work in macroeconomics and NGDP targeting. Others see it as radical and innovative; he thinks he is just taking the standard theories seriously.

BOSWELL. 'Sir Alexander Dick tells me, that he remembers having a thousand people in a year to dine at his house: that is, reckoning each person as one, each time that he dined there.' JOHNSON. 'That, Sir, is about three a day.' BOSWELL. 'How your statement lessens the idea.' JOHNSON. 'That, Sir, is the good of counting. It brings every thing to a certainty, which before floated in the mind indefinitely.'

From Boswell's Life of Johnson. HT to a commenter on the West Hunter blog.

If each person counts as one for each time he dines, Alexander can only claim to have personally hosted the guests at his most recent meal; the others were guests of someone else.

I think the idea is that all of the people are him.

Quick math: I used to dine with 1460 people a year in my home, reckoning each person as one each time I dined there. Families of four are mighty terrifying, aren't they?
Oooh. That explains a lot...

One test adults use is whether you still have the kid flake reflex. When you're a little kid and you're asked to do something hard, you can cry and say "I can't do it" and the adults will probably let you off. As a kid there's a magic button you can press by saying "I'm just a kid" that will get you out of most difficult situations. Whereas adults, by definition, are not allowed to flake. They still do, of course, but when they do they're ruthlessly pruned.

-Paul Graham

The way to deal with uncertainty is to analyze it into components. Most people who are reluctant to do something have about eight different reasons mixed together in their heads, and don't know themselves which are biggest. Some will be justified and some bogus, but unless you know the relative proportion of each, you don't know whether your overall uncertainty is mostly justified or mostly bogus.

--Paul Graham, same essay

Same essay.
Thank you!

If the climate skeptics want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the academic climatology community, majorities of chemists and physicists, Nobel laureates, the IPCC, National Academies of Science, etc. with superior research and arguments.

-- Scott Aaronson on areas of expertise

If the atheists want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the theology community, bishops, the Pope, pastors, denominational and non-denominational bodies, etc., with superior research and arguments.

To this, the skeptics might respond: but of course we can’t win over the mainstream scientific community, since they’re all in the grip of an evil left-wing conspiracy or delusion! Now, that response is precisely where “the buck stops” for me, and further discussion becomes useless. If I’m asked which of the following two groups is more likely to be in the grip of a delusion — (a) Senate Republicans, Freeman Dyson, and a certain excitable string-theory blogger, or (b) virtually every single expert in the relevant fields, and virtually every other chemist and physicist who I’ve ever respected or heard of — well then, it comes down to a judgment call, but I’m 100% comfortable with my judgment.

-- Scott Aaronson in the next paragraph

Not that I don't think this is a fair counterpoint to make, but in my own experience trying to find the best arguments for religion, I learned a lot more and got better reasoning talking to random laypeople than by asking priests and theologians.

Of course, the fact that I talked to a lot more laypeople than priests and theologians is most likely the determining factor here, but my experiences discussing the nature and details of climate change have not followed a similar pattern at all.

Just so I'm clear: do you believe the theology community ("bishops, the Pope, pastors, denominational and non-denominational bodies, etc.") is as reliable an authority on the nature and existence of the thing atheists don't believe in as the academic climatology community is on the nature and existence of the thing climate skeptics don't believe in? If so, then this makes perfect sense. That said, my experience with both groups doesn't justify such a belief.
The analogy doesn't cohere. Nobody denies that climate exists; they disagree on what it is doing.
I agree that nobody denies climate exists, but I think that's irrelevant to the question at hand. To clarify that a bit... Aaronson asserted a relationship between "climate skeptics" and "the academic climatology community" with respect to some concept X which climate skeptics deny exists. We could get into a whole discussion about what exactly X is (it certainly isn't climate), but rather than go down that road I simply referred to it as "the thing climate skeptics don't believe in." Eugine_Nier asserted a relationship between "atheists" and "the theology community" with respect to some concept Y which atheists deny exists. We could similarly get into a whole discussion about what exactly Y is, but rather than go down that road I simply referred to it as "the thing atheists don't believe in." If the theology community is in the same relationship to Y as the academic climatology community is to X, then the analogy holds. I just don't believe that the theology community is in that relationship to Y.
I believe Eugine_Nier is suggesting not that theology community is in the same relationship to Y as the academic climatology community is to X, but the reverse.
(nods) Yup. It's the opposite of what he said, but he could easily have been speaking ironically.
Well, no. You're an atheist. I'm sure a Christian climate skeptic would agree with you, with the terms reversed.
That is, a Christian climate skeptic would claim that their experience with both groups doesn't justify the belief that the academic climatology community is as reliable an authority as the theology community? In a trivial sense I agree with you, in that there's all sorts of tribal signaling effects going on, but not if I assume honest discussion. In my experience, strongly identified Christians believe that most theologians are unreliable authorities on the nature of God. Indeed, it would be hard for them to believe otherwise, since most theologians don't consider Jesus Christ to have been uniquely divine. Of course, if we implicitly restrict "the theology community" to "the Christian theology community," as many Americans seem to, then you're probably right for sufficiently narrow definitions of "Christian".
Hmm, interesting point. At a guess, I'd say there probably is more disagreement among theologians than climatologists, so there does seem to be some asymmetry there. On the other hand, if God is analogous to Global Warming (or whatever) then I suppose the analogy for those disputed details might be predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is (including "there isn't one".) So there's that.
If "God" refers to what theologians and atheists disagree about, and "Global Warming" refers to what climatologists and climate skeptics disagree about, then sure. I'd be cautious of assuming we agree on what those labels properly refer to more broadly, though. Well, OK. Using that analogy, I guess I would say that if climatologists disagreed with each other about Global Warming as widely as theologians disagree with each other about God, I would not consider climatologists any more reliable a source of predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is, than I consider theologians reliable as a source of predictions about God.
Yup. Hence the "or whatever". The point, of course, is that while they may disagree about the details, they all agree on the existence of the thing in question. Although TBH climatologists do seem to have more consensus than theologians.
It is not clear to me how to distinguish between "Christian, Buddhist, and Wiccan theologians agree on the existence of God but disagree on the details of God" and "Christian, Buddhist, and Wiccan theologians disagree on whether God exists." This is almost entirely due to a lack of clarity about what "God" refers to.
Well, Buddhist and Wiccan theologians are in the minority compared to Christian, Hindu, Deist and so on. And there is a spectrum of both Wiccan and Buddhist thought, ranging from standard atheism + relevant cosmology to pretty clear Theism of various kinds (plus relevant cosmology). Still, it's probably more common than among climatologists, depending on how strictly we define "theologian". (And "climatologist" for that matter; there are a good few fringe "climatologists" who push climate skepticism.)
Yup, agreed that how we define the sets makes a big difference.
If atheists really thought that theists believed just because the pastors did, then targeting the pastors would seem to be the best way to go about it, yes. Either by attacking their credibility or attempting to convince them otherwise/attack the emotional basis of their faith. Even if the playing field was uneven and the pastors were actually crooked, there just wouldn't be any gain in going after the believers as individuals.
I can't think of a reply to this that won't start a game of reference class tennis, but I think there's a possibility that Aaronson's list is a more complete set of the relevant experts on the climate than your list is of the relevant experts on the existence of deities. If we grant the existence of deities and merely wish to learn about their behavior, your list would be analogous to Aaronson's.
Both lists end with “etc.”, so I have trouble calling either of them incomplete.

I think "etc." is a request to the reader to be a good classifier--simply truncating the list at "etc." is overfitting, and defeats the purpose of the "etc." Contrariwise, construing "etc." to mean "everything else, everywhere" is trying to make do with fewer parameters than you actually need. The proper use of "etc." is to use the training examples to construct a good classifier, and flesh out members of the category by lazy evaluation as needed.

It's not a reasonable presumption that "etc." will cover "any arbitrary thing that happens to make trouble for your counterargument".
If nothing else at least we've got that covered.

Something a Chess Master told me as a child has stuck with me:

How did you get so good?

I've lost more games than you've ever played.

-- Robert Tanner

Dude, suckin' at something is the first step to being sorta good at something.

-- Jake the Dog (Adventure Time)

For reference purposes: video clip []; episode transcript [].
WTH... My latest Facebook status is “You got to lose to know how to win” (from “Dream On” by Aerosmith). o.O
Checkmate, atheists!
I don't get it...

Will is (non-seriously) pointing out that the synchronicity between army1987's Facebook status and Qiaochu's comment is too great to be explained by coincidence alone, and is thus strong evidence for the existence of God.

You've got to crash the car to know how to drive, got to drown to learn how to swim, you've got to believe to disbelieve. Got to !x to x.
But that would make it "checkmate, believers". All the other sentences say "you've got to [bad thing] to [good thing]".
X & !X can be anything, good or bad. You've just got to pick a value for X that fits in with your desires to get a particular outcome if you want to break it down in terms of good and bad. Got to live to die. The point is that the underlying structure of the argument remains the same whatever you pick. If you're actually interested in propositional logic, then the suitably named Logic by Paul Tomassi is a very approachable intro to this sort of thing. Though I'm afraid I couldn't say what it goes for these days.

How did you get so good?

I've lost more games than you've ever played.

Which is of course a different question to "What should I do to get good at Chess?" which is all about deliberate practice with a small proportion of time devoted to playing actual games.

Right, I often play blitz games for an hour a day weeks on end and don't improve at all. Interestingly, looking at professional games, even if I don't bother to calculate many lines, seems to make me slightly better; so there are ways to improve without deliberate practice, but playing blitz doesn't happen to be one of them. Playing standard time controls does work decently well though, at least once you can recognize all the dozen or so main tactics.
Playing a lot isn't as good as deliberate practice, but it's better than having done neither.
This seems incontrovertible.

By three methods we may learn wisdom: First, by reflection, which is noblest; second, by imitation, which is easiest; and third, by experience, which is the bitterest.

- K'ung Fu-tzu

The 'imitation' part is appropriately meta for a quote page.
I'd like to imagine that it's the blurb he put on the back of his own book: "I've done the reflection (noble!); buy now and you can get the benefit -- it's easy! -- or you can go stumbling off without the benefit of my wisdom like a sucker."

Amazon isn’t a store, not really. Not in any sense that we can regularly think about stores. It’s a strange pulsing network of potential goods, global supply chains, and alien associative algorithms with the skin of a store stretched over it, so we don’t lose our minds.

  • Tim Maly, pondering the increasing and poorly understood impact of algorithms on the average person's life.

Following the chain, I came across:

The motive of the algorithm is still unclear.

Source, with the addition later of 'expect to read a lot of sentences like this in coming years.'

The mere formulation of a problem is far more essential than its solution, which may be merely a matter of mathematical or experimental skills.

-- Albert Einstein

At least sometimes the formulation is far easier than the solution.

This is definitely true. General class of examples: almost any combinatorial problem ever. Concrete example: the Four Colour Theorem
Yes! Combinatorics problems are a perfect example of this. Trying to work out the probability of being dealt a particular hand in poker can be very difficult (for certain hands) until you correctly formulate the question, at which point the calculations are trivial. :)
I think bentarm was offering "Combinatorics problems" as an example of the opposite of the phenomenon you describe. In particular the Four Colour Theorem is easy to formulate but hard to solve, and (as far as I know) the solution doesn't involve a reformulation.
Yes, upon re-reading I see that you are correct. I think there may be overlap between activities I consider part of the formulation and activities others may consider part of the solution. To expand on my poker suggestion: when attempting to determine the probability of a hand in poker, it is necessary to find a way to represent that hand using combinations/permutations. I have found that for certain hands this can be rather difficult, as you often miss, exclude, or double-count some number of possible hands. This process of representing the hand mathematically is, in my mind, part of the formulation of the problem, or more accurately, part of the precise formulation of the problem. In this respect, the solution is reduced to trivial calculations once the problem is properly formulated. However, I can certainly see how one might consider this to be part of the solution rather than the formulation. Thanks for pointing that out.
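To make the poker point concrete, here is a minimal Python sketch (the full house is just one illustrative hand): once the hand is formulated as a product of independent choices, the remaining arithmetic really is trivial.

```python
from math import comb  # comb(n, k) = "n choose k"

# Formulation: a full house is built from four independent choices:
#   pick the rank of the triple (13 ways), then 3 of its 4 suits,
#   pick a *different* rank for the pair (12 ways), then 2 of its 4 suits.
full_house_hands = 13 * comb(4, 3) * 12 * comb(4, 2)  # 3744
total_hands = comb(52, 5)                             # 2598960

print(full_house_hands / total_hands)  # ≈ 0.00144
```

The double-counting trap mentioned above shows up in, e.g., two pair: picking the two pair ranks in order (13 × 12) counts every hand twice; the correct formulation uses comb(13, 2). Getting that representation right is the hard part; the division afterwards is not.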
In my experience it can often turn out that the formulation is more difficult than the solution (particularly for an interesting/novel problem). Many times I have found that it takes a good deal of effort to accurately define the problem and clearly identify the parameters, but once that has been accomplished the solution turns out to be comparatively simple.
Do you have an original source for that? All I can find is various quotation sites, which contain so amny other things that Einstein allegedly said I feel sceptical.
Nope, and I don't recall where I saw it attributed to him originally. (I did check by Googling it, but you're right that that only confirms that it's often attributed to him.)
Hmm. Einstein is perhaps most famous for "discovering" special relativity. But he neither formulated the problem, nor found the solution (I think the Lorentz transformation was already known to be the solution), but reinterpreted the solution as being real. His "greatest error" was introducing the cosmological constant into general relativity--curiously, making a similar error to what everyone else had made when confronted with the constancy of the speed of light, which was refusing to accept that the mathematical result described reality.
In writing a story, it's easy to identify problems with the story which you must struggle with for weeks to resolve. But often, you suddenly realize what the entire story is really about, and this makes everything suddenly easy. If by the formulation of the problem we mean that overall understanding, rather than specific obstacles, then yes. For stories.

When I was a Christian, and when I began this intense period of study which eventually led to my atheism, my goal, my one and only goal, was to find the best evidence and argument I could find that would lead people to the truth of Jesus Christ. That was a huge mistake. As a skeptic now, my goal is very similar - it just stops short. My goal is to find the best evidence and argument, period. Not the best evidence and argument that leads to a preconceived conclusion. The best evidence and argument, period, and go wherever the evidence leads.

--Matt Dillahunty

I wonder if somebody, looking at (a) his stated goal and (b) his behaviour, would consider his statement borne out. (Same goes for me, no offense to Dillahunty specifically).

Focusing is about saying no.

-- Steve Jobs

Longer version from here

People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas there are.

—Steve Jobs, interviewed in Fortune, March 7, 2008

Focusing is about saying no long enough to get into flow, or at least some kind of mental state where your short-term memory doesn't constantly evaporate. If you have to say no all the time, you'll wind up twenty hours later having written six lines and with a head full of jelly.
Without context I'm tempted to say focusing is about a whole bunch of things and that telling people to say no is just another way of saying, 'Use your willpower.' Which is another way of saying 'Focus by focusing!' Which... seems rather recursive at least.

One of the things that focusing is about is giving up pursuing good things.
Which means that if I want to focus, I need to decide which good things I'm going to say "no" to.
This may seem obvious, but after seeing many not-otherwise-stupid management structures create lists of "priorities" that encompass everything good (and consequently aren't priorities at all), I'm inclined to say that it isn't as obvious as it may seem.

Interesting take.

Or optimisation is going on at a different point in the company. Or it is as obvious as it seems and sanity isn't a property of management structures. Come to think of it, it's not necessarily even a property of any individual who participated in the creation of that structure. An idiot who's read The Effective Executive and How to Win Friends and Influence People should be a darned effective manager, but they're not necessarily very intelligent. Similarly, you can gradually converge on sane solutions without thinking anything through very far by applying fairly basic procedures, or even just by being subject to selection pressures.

You need to decide which good things you're going to assign the most resources to, or in what order you're going to do them, or have a list of very general priorities that you're going to pass off to some other system in the company that will give you a similar sort of output. But however you do it, focusing isn't as simple as saying no, or even as saying no to the right things. You'll exclude some things by default, but knowing when to say 'let's see' and how strongly to say yes is also very useful.
Yes, agreed.
This reminds me of Stephen Covey's idea of a coordinate graph with four quadrants, where you graph importance on one axis and urgency on the other. This gives you four types of "activities" to invest your time into:

1. Urgent and Unimportant (a ringing phone is a good example): this is where many people lose a tremendous amount of time.
2. Urgent and Important (a broken bone or a crime in progress): these immediately demand our "focus".
3. Not Urgent and Not Important: pure time-wasters; not a good place to invest much energy.
4. Not Urgent BUT Important: this is the area where Covey made a point of saying most people fall short. Because these things are not urgent, we tend to put them off and not invest enough energy into them, but since they are important, we pay a hefty price in the long run. Into this category he puts things like our health, important relationships, personal development, and self-improvement, to name a few.

When we choose what to focus our energy on, we would do well to direct as much of it as possible to these types of activities.
Let us say you have a paper to write but you also want to go to a party. While trying to write the paper, you could keep wondering whether you should stop writing and just go to the party, but keep writing anyway, i.e. try to use willpower. Or you could decide, once and for all, that you are not going to go to the party, which is saying no. I think the second approach will be more effective in getting the paper done. So there is actually a difference. Now, of course the insight isn't profound, and both folk and professional psychology have known it for some time (I can't find a good link off-hand). But when a successful, high-status person who has achieved a lot says it, it lends it a whole lot more credibility.
I feel like it's more about saying "yes" with enthusiasm.

Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:

Pyne: I guess your long hair makes you a girl.

Zappa: I guess your wooden leg makes you a table.

Of course this would imply that Pyne is not a featherless biped.

Source: Robert Cialdini's Influence: The Psychology of Persuasion

"In the typical Western two men fight desperately for the possession of a gun that has been thrown to the ground: whoever reaches the weapon first shoots and lives; his adversary is shot and dies. In ordinary life, the struggle is not for guns but for words; whoever first defines the situation is the victor; his adversary, the victim. ... [the one] who first seizes the word imposes reality on the other; [the one] who defines thus dominates and lives; and [the one] who is defined is subjugated and may be killed."

"In the animal kingdom, the rule is, eat or be eaten; in the human kingdom, define or be defined."

-- Thomas Szasz




  • Radioactive stone in nest.
  • Use stone to seal off the air supply to a cage of birds.
  • Economist: Sell a precious stone (diamond? Ruby?). Use the proceeds to purchase several dozen chickens. The purchase produces an expected number of bird deaths equal to approximately the number of chickens purchased through tiny changes at the margins, making chicken farming and slaughter slightly more viable.
  • Omega: Use stone to kill the dog that would have killed the cat that will now kill 40 birds over its extended lifespan.

Punster: go on a hunting trip with Mick Jagger.

Double punster: it's hunting season for Jimmy Page's former band.
Nice, but how is this a rationality quote? Is there some allegory that I'm missing?
Um, be creative? 11 upvotes.

How rare it is to encounter advice about the future which begins from a premise of incomplete knowledge!

-- James C. Scott, Seeing Like a State

"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I must run into."

"You only need to change your direction," said the cat, and ate it up.

-Kafka, A Little Fable

"You only need to change your direction," said the cat, and ate it up.

Moral: Just because the superior agent knows what is best for you and could give you flawless advice, doesn't mean it will not prefer to consume you for your component atoms!

My problem with this is, that like a number of Kafka's parables, the more I think about it, the less I understand it.

There is a mouse, and a mouse-trap, and a cat. The mouse is running towards the trap, he says, and the cat says that to avoid it, all he must do is change his direction and eats the mouse. What? Where did this cat come from? Is this cat chasing the mouse down the hallway? Well, if he is, then that's pretty darn awful advice, because if the cat is right behind the mouse, then turning to avoid the trap just means he's eaten by the cat, so either way he is doomed.

Actually, given Kafka's novels, so often characterized by double-binds and false dilemmas, maybe that's the point: that all choices lead to one's doom, and the cat's true observation hides the more important observation that the entire system is rigged.

('"Alas", said the voter, "at first in the primaries the options seemed so wide and so much change possible that I was glad there was an establishment candidate to turn to to moderate the others, but as time passed the Overton Window closed in and now there is the final voting booth into which I must walk and vote for the lesser of two evils."... (read more)

This is much better than my moral.
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins. The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
I briefly read the moral as something like this []; something along the lines of "being exposed in the open was the worst thing the mouse could imagine, so it ran blindly away from it without asking what the alternatives were". I'm still not sure I actually get it. Tangentially, keeping mouse traps in a house with a cat seems hazardous (though I could be underestimating cats). And I assume "day" and "chamber" are used abstractly.

If you will learn to work with the system, you can go as far as the system will support you ... By realizing you have to use the system and studying how to get the system to do your work, you learn how to adapt the system to your desires. Or you can fight it steadily, as a small undeclared war, for the whole of your life ... Very few of you have the ability to both reform the system and become a first-class scientist.

—Richard Hamming

(I recommend the whole talk, which contains some great examples and many other excellent points.)

I think the thing that strikes me most about this talk is how different science was then versus now. For one small example, he was asked to comment on the relative effectiveness of giving talks, writing papers, and writing books. In today's world it's not a question anyone would ask, and the answer would be "write at least a few papers a year or you won't keep your job."

I don't see why it has to be either or.
Time and effort are zero-sum.
I don't think so. The status and resources that you get for being a first-class scientist will help you to fight the system.
And they would help you even more to continue being a first-class scientist; they won't help you fight for free (no Time-Turners on offer, I'm afraid); and even in this scenario you still need to decide to become a first-class scientist first, since fighting the system is not a great path to getting status & resources.
Picking fights when you don't have any resources to fight them is, in general, not a good strategy. Whenever you pick a fight, you actually have to think about the price and the possible reward. Craig Venter opposed the NIH and then went and got private funding to pursue his ideas in a way he thought was superior. Eliezer Yudkowsky decided to operate outside academia. Peter Thiel funded him, and the whole LessWrong enterprise increased the amount of resources at his disposal. There are a lot of sources of resources that can be gained by picking some fights.
Those aren't the kinds of fights Hamming is talking about. (You have read his talk, right?)
Sorry, now I have read it and you are right: Hamming does acknowledge that you can fight some fights, but recommends against wasting your time with fights that don't matter in the large scale of things.

There is nothing so disturbing to one's well-being and judgment as to see a friend get rich.

Charles P. Kindleberger, in Manias, Panics, and Crashes: A History of Financial Crises

I imagine that thanks to Bitcoin, a few of us can feel this quote acutely, in our guts.

One can be extremely confident when giving goal-based advice because it's always right. When you switch to giving instrument-based advice--when you switch from cheerleading to playing the game--you have to warn your audience that Your Mileage May Vary, that there's many a slip 'twixt cup and lip.

-- Garett Jones

Could you give an example of goal-based advice that's always right?
Sure. From the same post:

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through the galaxy.

...What now matters most is that we avoid ending human history.

Parfit, On What Matters, Vol. 2 (pp. 616-620).

Parfit, quoted in ”How To Be Good” [] by Larissa MacFarquhar. PDF []

The iron rule of nature is: you get what you reward for. If you want ants to come, you put sugar on the floor.

--Charlie Munger

“You can catch more flies with honey than vinegar.” “You can catch even more with manure; what's your point?”

--Sheldon Cooper from The Big Bang Theory

“You can catch more flies with honey than vinegar.” “You can catch even more with manure; what's your point?”

That's actually an insightful analogy regarding human social politics.

The Stockholm syndrome says otherwise.
I gather one theory behind that is that captives mistake an absence of punishment for the presence of kindness, i.e. they adjust for perceived reward - the reward being not getting intimidated/raped/whatever, at least not right then.
That link isn't clear to me. Could you please elaborate?
It's not "the iron rule", just one of many heuristics of limited applicability. Hurting instead of rewarding is often just as effective. And rewarding can also backfire in the worst way.
The Stockholm syndrome isn't only about hurting the hostage. The captor gains control of the environment in which the hostage lives and can then use that control to reward the hostage for fulfilling his wishes.
Munger's quote seemed to me like a more colorful rendition of "incentives matter," which is an iron rule (as it contrasts with what people often want to be true, which is "intentions matter"). Rewards backfiring is generally mistakenly applied rewards, like sugar on the floor, and punishments seem like they can be considered as anti-rewards; you don't get what you punish (with, again, the note that precision matters).

But I now thought that this end [one's happiness] was only to be attained by not making it the direct end. Those only are happy (I thought) who have their minds fixed on some object other than their own happiness[....] Aiming thus at something else, they find happiness along the way[....] Ask yourself whether you are happy, and you cease to be so.

-- John Stuart Mill, autobiography

For what it's worth, personal experience tells me otherwise.
I've found that thinking about something outside yourself (and thus not your own happiness) makes lots of people less depressed, and somewhat happy. However, the last sentence is clearly false, as many anecdotal reports of "I'm so happy!" show. Maybe it works that way for some people?

I came to the psychology of human misjudgment almost against my will; I rejected it until I realized that my attitude was costing me a lot of money, and reduced my ability to help everything I loved.

--Charlie Munger

But regardless of whether we believe our own positions are inviolable, it behooves us to know and understand the arguments of those who disagree. We should do this for two reasons. First, our inviolable position may be anything but. What we assume is true could be false. The only way we’ll discover this is to face up to evidence and arguments against our position. Because, as much as we may not enjoy it, discovering we’ve believed a falsehood means we’re now closer to believing the truth than we were before. And that’s something we should only ever feel gratitude for.

Aaron Ross Powell, Free Thoughts

This is why steelmanning is a really good community norm. Social incentives for understanding the other's position are usually bad, but if people give credit for steelmanning, these incentives are better.

"Steelmanning" and "understanding the other's position" aren't really related (to my knowledge).

It's difficult to steelman someone's position if I don't understand it.

Most of the propositions and questions to be found in philosophical works are not false but nonsensical. Consequently we cannot give any answer to questions of this kind, but can only point out that they are nonsensical. Most of the propositions and questions of philosophers arise from our failure to understand the logic of our language. [...] And it is not surprising that the deepest problems are in fact not problems at all.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 1921

Charles Darwin used to say that whenever he ran into something that contradicted a conclusion he cherished, he was obliged to write the new finding down within 30 minutes. Otherwise his mind would work to reject the discordant information, much as the body rejects transplants.

-- Warren Buffett

I have no idea whether this is true of Darwin, but it still might be good advice.

See here [].

The lack of a well-delineated hypothesis is not necessarily a barrier to acceptance of new directions in medical practice. The classic example is John Snow's demonstration that the 1854 cholera epidemic in London was attributable to contaminants in the water. When he removed the handle from the Broad Street pump, the number of cases in the area served by that pump promptly began to wane. Exactly what was in the water that caused the cholera would not be demonstrated for more than a quarter of a century. Still the results of Snow's intervention were so dra

... (read more)

If a statement is false, that's the worst thing you can say about it. You don't need to say it's heretical. And if it isn't false, it shouldn't be suppressed.

-Paul Graham

I like the sentiment, but Paul Graham seems to be claiming that information hazards don't exist, and that doesn't appear to be true.

Despite agreeing with the rest of the essay (which is very good), this is not true. Tiresomely standard counter-example: "Heil Hitler! No, there are no Jews in my attic."

I would say this is not ALWAYS true. But for the purpose of civilized discussion between human beings, it does seem like a very useful rule of thumb.

Substitute "statement" with "belief".
Sorry, I don't understand. I believe there are Jews in my attic, but this belief should be suppressed, rather than spread.
Fair enough.
This seems like fallacy of the excluded middle. Suppressed and spread are not the only two options.
If the nazi starts to believe it, you should suppress such a belief (probably by acting innocuously, but if suppressing it violently would work better, you should do that instead).
That statement is bad for the nazis, who are now unable to achieve their desires. The statement is about instrumental badness, not universal moral badness. They're really quite different.
I like the sentiment. I disagree that it is (always) the worst you can say about it. And there are also true things that are actively constructed to be misleading---I certainly go about suppressing those where possible and plan to continue.
Wouldn't explaining why the statement is misleading be more productive than suppressing the misleading statement?

Like all great rationalists you believed in things that were twice as incredible as theology.

― Halldór Laxness, Under the Glacier.

...and then adjusted our senses of the 'incredible' accordingly, so that Special Relativity seemed less incredible, and God more so.

A sense of incredulity is not a belief, so it's not covered by those injunctions. A sense of wonder is both pleasant and good for mental health, and diverging too much from the average in deep emotional reactions carries a real cost in less accurate empathic modelling.
Well, I dunno: if you describe physics as a Turing machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to do exact Lorentz invariance, but can do some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn't have any particular problem with Einsteinian relativity. On the other hand, if you want a computational model of the universe that is independent of the choice of any particular observer, relativity will get you into trouble.
Relativity doesn't depend on the observer; it depends on the reference frame... (or rather, doesn't depend). I can launch the Michelson-Morley experiment into space and have it send data to me, and it'll need to obey Lorentz invariance and everything else. edit: or just for GPS to work. You have a valid point though: SI has a natural preferred frame coinciding with the observer. Lorentz invariance is a very neat, very elegant property, which as far as we know only incredibly complicated computations have, and only approximately. This makes me think that an algorithmic prior is not a very good idea. The universe need not be made of elementary components in the way in which computations are.
Moreover, all computational models assume some sort of global state and absolute time. These assumptions don't seem to hold in physics, or at least they may hold for a single observer but may require complex models that don't respect a natural simplicity prior. If it were possible to realize a Solomonoff inductor in our universe, I would expect it to be able to learn, but it might not necessarily be optimal.
It can't do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn't include relativity will produce the wrong answer.
Its being able to do AI is generally accepted as uncontroversial here. We don't know what the shortest way to encode a very good approximation to relativity would be either - it could be straightforward, or it could be through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations to more elegant universes (given some hint it discovers). I'm an atheist too; it's just that given a sufficiently bad choice of the way you represent theories, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very, very good approximation of real numbers). edit: and relativity is fairly unique in just how elegant it is and how awfully inelegant any simulation of it gets.
The idea is that if humans can come up with approximations of relativity that are good enough for the purpose of predicting their observations, in principle SI can do it too. The issue is prior probability: since humans use a different prior than SI, it's not straightforward that SI will not favor shorter models that in practice may perform worse. There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations needed for this to happen might be far from practical. For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average-case complexity will asymptotically match worst-case complexity. If the Solomonoff distribution were a reasonable prior for practical purposes, then we should observe that, for all algorithms on realistic instance distributions, average-case complexity is about the same order of magnitude as worst-case complexity. Empirically, we observe that this is not necessarily the case: the Simplex algorithm for linear programming, for instance, has exponential worst-case time complexity but is usually very efficient (polynomial time) on typical inputs.
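The gap between typical-case and worst-case behaviour mentioned above is easy to demonstrate with a toy stand-in (quicksort with a first-element pivot here is my substitute example, not the Simplex algorithm itself): it averages O(n log n) comparisons on random inputs but degrades to O(n²) on already-sorted ones. A minimal sketch:

```python
import random

def quicksort_comparisons(arr):
    """Count element comparisons made by quicksort with a first-element pivot."""
    count = 0

    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        count += len(rest)  # pivot is compared against every remaining element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(arr)
    return count

n = 500
random.seed(0)
# "typical" inputs: random permutations
avg = sum(quicksort_comparisons(random.sample(range(10_000), n)) for _ in range(20)) / 20
# worst case for this pivot rule: an already-sorted input, costing n(n-1)/2 comparisons
worst = quicksort_comparisons(list(range(n)))
print(f"typical: ~{avg:.0f} comparisons, worst case: {worst} comparisons")
```

The sorted input deterministically costs n(n-1)/2 = 124,750 comparisons, while random inputs typically cost a couple of orders of magnitude less - exactly the kind of average/worst-case divergence the Simplex example illustrates, and one a Solomonoff-distributed instance sampler would not exhibit.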

Before remembering the older definition of "incredible" that is presumably meant, I parsed this as "Like all great rationalists you believed in things that were twice as awesome as theology"; and thought "Only twice?".

What does this mean?

That on probabilistic or rational reflection one can come to believe intuitively implausible things that are as or more extraordinary than their theological counterparts. Or to mutilate Hamlet, that there are more things on earth than are dreamt of in heaven.

Most of quantum physics and relativity are certainly intuitively weirder than Jesus turning water into wine, self-replicating bread or a body of water splitting itself to create a passage.

I mean, our physics says it's technically possible to make machines that do all of this. Without magic. Using energy collected in space and sent to Earth using beams of light. Although we probably wouldn't use beams of light, because that's inefficient.

I am confused--upvoting this comment is a rejection of this website.
I doubt that Laxness means "rationalist" in the LW community sense. In philosophy, a rationalist is defined as distinct from an empiricist, as one who believes knowledge to be arrived at from a priori cogitation, as opposed to experience.
Even after looking the book up on Google, without context, I can't tell whether the rationalist being spoken of has gone astray through his reason, or has succeeded in finding the truth of something. But I am now interested in reading Laxness.
The mere size of the universe is pretty incredible. I don't think it gets as much emphasis as it used to. I'm not sure whether people have quit thinking about it or gotten used to it.

Scott Adams on evolution toward... what?

I see the iWatch as the next phase in our evolution to full cyborg status. I want my Google glasses, iWatch, smartphone, and anything else you want to attach to my body. Frankly, I'm tired of being nothing but a skin-bag full of decaying organs. I want to be the machine I was always meant to be. That prospect excites me.

As she stared at her wall, she understood that she would have to deal with it, accept it all as a new part of her existence. That was the only reasonable thing to do. She didn't have to be happy about it, but the universe wasn't structured around her happiness.

-To The Stars

If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties.

--Francis Bacon

Neither is necessarily or even usually true though, is it?
Necessarily, of course not. Usually, well, this is Francis Bacon, and so the intended meaning of the quote is more like "We can be more certain in the outputs of empiricism than we can be in the outputs of deductive argument beginning with intuitions or other a priori knowledge."

'Talking of a Court-martial that was sitting upon a very momentous publick occasion, he expressed much doubt of an enlightened decision; and said, that perhaps there was not a member of it, who in the whole course of his life, had ever spent an hour by himself in balancing probabilities.'

Boswell's Life of Johnson (quoted in "Applied Scientific Inference", Sturrock 1994)

All things be ready if our minds be so.

  • William Shakespeare, Henry V
What does this mean?
In context, this is said right before the battle of Agincourt: Henry V is reminding his troops that the only thing left for them to do is to prepare their minds for the coming battle (where they are horribly outnumbered). I guess the rationality part is to remember that sometimes we must make sure to be in the right mindset to succeed.

I've always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V makes the argument that God will decide the outcome of the battle, and so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose then fewer will have died. Of course he doesn't take this to the logical conclusion and go out and fight alone, but I guess Shakespeare couldn't have pushed history quite that far.

A good 'dark arts' quote from that speech might be when he offers to pay anyone's fare back to England if they leave then. After that, anyone thinking of deserting will be trapped by their sunk costs into staying - but maybe that's not what Shakespeare had in mind...

The quote struck me as a poetic way of affirming the general importance of metacognition - a reminder that we are at the center of everything we do, and therefore investing in self improvement is an investment with a multiplier effect. I admit though this may be adding my own meaning that doesn't exist in the quote's context.

I've always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V makes the argument that God will decide the outcome of the battle, and so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose then fewer will have died. Of course he doesn't take this to the logical conclusion and go out and fight alone, but I guess Shakespeare couldn't have pushed history quite that far.

Rewatching Branagh's version recently, I keyed in on a different aspect. In his speech, Henry describes in detail all the glory and status the survivors of the battle will enjoy for the rest of their lives, while (of course) totally downplaying the fact that few of them can expect to collect on that reward. He's making a ... (read more)

Here I was thinking command came to you naturally.

This anxiety attack seems natural enough. Now let's fix it with science.

Howard Taylor - Schlock Mercenary

This DOES teach me a lesson: coming up with absurd-sounding situations on the spot to demonstrate something's "self-evident implausibility" is liable to come back to bite me. I should do it more often, just to accidentally stumble across out-of-the-box ideas.


Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.

--Daniel Kahneman on the dichotomy between the self that experiences things from moment to moment and the self that remembers and evaluates experiences as a whole. (from Thinking, Fast and Slow )

These studies are the record of a failure-- the failure of facts to sustain a preconceived theory. The facts assembled, however, seemed worthy of further examination. If they would not prove what we had hoped to have them prove, it seemed desirable to turn them loose and to follow them to whatever end they might lead.

Edgar Lawrence Smith, Common Stocks as Long Term Investments

"Never forget I am not this silver body, Mahrai. I am not an animal brain, I am not even some attempt to produce an AI through software running on a computer. I am a Culture Mind. We are close to gods, and on the far side."

-Iain M. Banks, Look to Windward

Incidentally, Mr. Banks has been diagnosed with terminal cancer, and estimated to have a few months to live as of this post. Comments may be made on his website.

Whoops, forgot to promote this.

Some people want it to happen, some wish it would happen, others make it happen.

"Michael Jordan"

The significant problems we face cannot be solved at the same level of thinking we were at when we created them.

-- Albert Einstein

Source? Wikiquote seems to think it's a misquote.
Isn't there a law or something stating that Einstein never said 99% of what's attributed to him? Or maybe that the accuracy of a quote's attribution is inversely proportional to the person's fame?
Well, it's unsurprising that misattributed quotes are more often attributed to famous people than to unknown people.
Thanks FiftyTwo - I just looked up the article you refer to, and it indicates that it may be a paraphrase of a longer quote. I heard this from Anthony Robbins; the quote is attributed to Einstein in some of his literature. It seems that the sentiment, if not the exact quote, is attributable to Einstein.
