Wiki Contributions


The Upward Scaling Importance of Rationality

Kindness only affects decisions where altruistic behavior wouldn't occur without it. About integrity I'm even less sure. Rationality, though, could affect any decision where bias or fuzzy reasoning is involved -- which is almost every decision.

Are there good reasons to get into a PHD (i.e. in Philosophy)? And what to optimize for in such case?

You should get a Ph.D. in Philosophy if you consider the material studied in philosophy to be an end in itself. Philosophy is a truth-seeking discipline, so if you find that inherently rewarding and could imagine doing it for a large part of your life, it's a good decision. Don't worry about the wariness toward philosophy here: I can guarantee you that the criticisms leveled here against philosophy have been addressed tenfold in actual philosophy departments, by people with sympathies closer to Luke's than you'd think.

That said, a lot of people go into graduate programs for bad reasons. Here are two I've been tempted by:


Minimizing Status Risk. A lot of people think about risk in terms of financial gain or loss, but few think about risk in terms of status, even though it's a real concern for many people. Graduating from college can be intimidating, especially at a prestigious college, because you're about to be stripped of your hierarchical standing among people your age. If you've attended, say, Harvard for four years, you've spent those four years thinking of yourself as at the top of the food chain relative to other college students.

Once you're out of college, this is no longer true, and you're measured by what kind of job you have. It's extremely tempting to avoid this by applying to graduate school, because graduate school lets you keep the imagined hierarchical standing you've had for the past few years: eventually you'll get a Ph.D. and sit at the top of the intellectual food chain. This has nothing to do with "avoiding the real world," because "the real world" as an employment arena is conspicuously centered on office jobs, or whatever the majority of people happen to do for money. (I wonder if farmers consider everyone else to have a "fake" job. Probably.)

It's a way of shielding your status from vulnerability, because working as a clerk or receptionist or barista or server after college is generally not prestigious and makes you feel like your intellect isn't worth anything. That's an uncomfortable feeling, sure, but make sure you're not eyeing a Ph.D. just to avoid it.


Even if you're not avoiding Status Risk, make sure you're not getting a Ph.D. just to feel like an intellectual hotshot. A lot of people reason about competence in binary terms (expert or non-expert) even though competence obviously exists on a spectrum, so it's tempting to get a title that places you immediately at the "expert" end of any discussion. That way, you can throw your weight around whenever there's a clash of words.

Philosophy especially is enigmatic to a lot of people; there's a mystery about what you actually learn in an advanced program. So a Ph.D. looks like a "certified smart person" badge to a lot of people, and that's tempting. Make sure you're not getting it for that reason either.

Here's the litmus test. Ask yourself: "Would I self-study this material anyway if I had the next three to five years paid for? Would it occupy a large part of my time regardless of what I'm doing?" If so, it's worth it.

The Wrongness Iceberg

Sure, in the very short run (starting from absolutely no knowledge of the game) you'd have to make mistakes to learn anything at all. But the process of getting better is a gradual decrease in the frequency of those mistakes. You'd want to minimize your mistakes as much as possible as you improve, because the frequency of mistakes is strongly correlated with how much you lose.

I think you're seeing "try to minimize how many mistakes you make" and reading it as "try to make no mistakes." There are certainly mistakes you'll have to make to get better, but there are also superfluous mistakes that some people make and others won't, and catastrophic mistakes that would make you look really bad, which you'd definitely want to avoid. In other words, the depth of possible mistakes goes much deeper than the necessary mistakes you'd have to make to improve.

How to offend a rationalist (who hasn't thought about it yet): a life lesson

I really liked this post, and I think a lot of people aren't giving you enough credit. I've felt similarly before -- not to the point of suicide, and I think you might want to find someone you can confide those anxieties to -- but I have felt anger at someone's dismissal of rationalist methodology. Ultimately, it's the methodology that makes someone a rationalist, not necessarily a set of beliefs. The categorizing of emotions as opposed to logic, for example, is a framing I've been frustrated with for quite some time, because emotions aren't anti-logical so much as they are alogical. (In my personal life, I'm an archetype of someone who gets emotional about logical issues.)

What I suspect was going on is that you felt this person was being dismissive of the methodology and did not believe reason to be an arbiter of disagreements. That reads to me like saying "I'm not truth-seeking, and I think my gut perception of reality is more important than the truth" -- a stance that sounds to me both arrogant and immoral. I've run across people like this too, and every time I feel like someone is prioritizing their kneejerk reaction over the truth, it's extremely insulting. Perhaps that's what you felt?

The Wrongness Iceberg

I don't currently work at a restaurant, so at the moment I'm afraid of nothing.

But for the purposes of the example, it's not about discovering mistakes or incompetence -- it's about your level of incompetence being much greater than you previously estimated, for reasons you were unaware of prior to being exposed to those reasons.

The Wrongness Iceberg

I find that similar to the concept of fractal wrongness. What distinguishes an iceberg from a fractal is that with an iceberg, someone is resisting exposing the whole thing for one reason or another. In the dishonesty scenario, you realize one lie reveals many others only because that person has left you a tidbit of information that cracks their facade and lets you infer just how deeply they've lied to you -- or, in the case of attraction, an event or action that would only occur if a much greater level of attraction existed below the surface.

Beware Trivial Inconveniences

I think LessWrong actually has a higher barrier for contribution -- at least for articles -- because you're expected to have 20 comment karma before you can submit one. This means that, if you're honest anyway, you'll have to spend time in the pit interacting with people who could potentially shout you down, or call you a threat to their well-kept garden, or whatever.

I have at least three articles in draft form that I want to submit once I reach that total, but I don't comment on discussions much, because most of what I would say has usually already been said in one comment or another. For people like me, the barrier of "must email someone" is actually lower, since contributing to discussions requires knowing how the community works, intuiting what the community deems a good comment, and posting along those lines.

Train Philosophers with Pearl and Kahneman, not Plato and Kant

Luke, I was curious: where does informal logic fit into this? It's the principal method of reasoning tested on the LSAT's logical reasoning section, and I would say the most practical form of reasoning one can engage in, since most everyday arguments use informal logic in one way or another. Honing it is valuable, and the LSAT percentiles suggest that not nearly as many people are as good at it as they should be.

Train Philosophers with Pearl and Kahneman, not Plato and Kant

His understanding of philosophy is barely up to undergraduate level. Sorry, but that's the way it is.

I feel like the phrasing "barely up to undergraduate level" is like calling something "basic" or "textbook" not because it actually is, but because it insinuates an ocean of knowledge that your opponent has yet to cross. If Luke is "barely undergraduate," then I know a lot of philosophy undergrads who might as well not call themselves that.

While I agree that reform is far more likely to be done by a Dewey or Erasmus, your reasoning gives me a very "you must be accepted into our system if you want to criticize it" vibe.

2012 Survey Results

The general population would contain about 50 sociopaths per 1000 people; I don't think LessWrong does. Rationality is a truth-seeking activity at its core, and I suspect a community of rationalists would do their best to avoid lying consciously.

I am not sure what "the common definition of the word 'lie'" is, especially since there are a lot of differing interpretations of what it means to lie. I do know that wrong answers are distinct from lies, however. A lot of LessWrong respondents may have put down an IQ that doesn't reflect an accurate result, but I doubt many put down a deliberately inaccurate one. Barring "the common definition" (which I don't know), I'm defining a lie as "stating something you know to be false," since someone can put down a number without knowing for sure that it's true, while also not knowing that it's false.

I don't know what you mean by "mean something" with respect to Mensa Denmark's normings. They will probably be less accurate than a professional IQ testing service, but I don't know why they would be inaccurate or "meaningless" by virtue of their organization not being a professional IQ testing service.

The only way I can think of in which the self-reported numbers would be more accurate than the test's numbers is if the LW respondents knew their IQ figures came from a professional testing service and had gone to that service recently. But since the survey didn't specify how respondents obtained the self-report, I can't say, nor do I think it's likely. The linked test uses Raven's Progressive Matrices, which is a standard way to measure IQ across cultures because it is nonverbal, so verbal/spatial splits matter less. And it wouldn't discriminate against autistics; if anything it discriminates in their favor, since people with autism-spectrum disorders are likely to score higher on it, not lower.

I'm not sure how the bolding of "in that way" bolsters your argument. Paraphrased, it would be "in the way that the user types the IQ score into the survey box themselves, the questions are equally flawed relative to the other intelligence questions." But this neglects the fact that the source of the number is different. Both are self-reports in the sense that the number is up to someone to recall, but if someone takes the linked test and types in the result, you know where it came from. If someone types in their IQ without specifying the source, you have no idea where they got that number -- they could be estimating, it could be a childhood test score, and so on.

Please consider getting some rationality training or something.

Remarks like these are unnecessary, especially since I've just joined the site.
