The real heart of the conjunction fallacy is mistaking P(A|B) for P(B|A). Since those look very similar, let's try to make them more distinct: P(description|attribute) and P(attribute|description), or representativeness and likeliness*.

When you hear "NBA player," the representativeness of 'tall and athletic' skyrockets. If he were an NBA player, it's almost certain that he's tall and athletic. But the reverse inference, how much knowing that he's tall and athletic increases the chance that he's an NBA player, is much weaker. And while the bank teller detail is strange, you probably aren't inclined to adjust the representativeness down much because of it, even though there are probably more former NBA players who are short or got fat after leaving the league than there are former NBA players who became bank tellers. (That is, when doing Bayesian calculations you should pay as much attention to 1% probabilities as to 99% probabilities, because both represent similar strengths of evidence.)
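To make the asymmetry concrete, here's a toy Bayes'-rule calculation. The prior, base rate, and representativeness below are invented for illustration, not real statistics:

```python
# Toy Bayes-rule check of the NBA example, with made-up numbers:
# suppose 1 in 100,000 adults is a former NBA player, 99% of them
# are tall and athletic, and 5% of the general population is.
p_nba = 1e-5                  # prior: P(NBA player)
p_tall_given_nba = 0.99       # representativeness: P(tall | NBA)
p_tall = 0.05                 # base rate: P(tall) in the population

# Bayes' rule: P(NBA | tall) = P(tall | NBA) * P(NBA) / P(tall)
p_nba_given_tall = p_tall_given_nba * p_nba / p_tall

print(f"P(tall | NBA) = {p_tall_given_nba:.2f}")      # near-certain
print(f"P(NBA | tall) = {p_nba_given_tall:.6f}")      # still tiny
```

Even with near-certain representativeness, the reverse probability stays minuscule, because the prior does most of the work.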

As details accumulate, the likeliness of a story cannot increase, assuming you're logically omniscient, which is admittedly a bad assumption. If I say that I'm wearing green, and then that I'm wearing blue, it's at least as likely that I'm wearing green as that I'm wearing both green and blue, because in any case in which I'm wearing both, I'm wearing green. This is the core idea behind burdensome details.

So let's talk examples. When an insurance salesman comes to your door, which question will he ask: "What's the chance that you'll die tomorrow and leave your loved ones without anyone to care for them?" or "What's the chance that you'll die tomorrow of a heart attack and leave your loved ones without anyone to care for them?" The second question tells a story, and if your estimate of dying is higher because the salesman specified the cause of death (which necessarily leaves out other potential causes!), then by reciting a long list of potential causes, along with many vivid details about each scenario, he can drive your perceived risk as high as he needs it to be to justify the insurance.
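A sketch of the salesman's trick, with entirely invented numbers: each specific cause is a subset of "death," so the per-cause risks must sum to at most the total risk. But if every vivid story extracts a small "felt" probability on its own, the implied total soon dwarfs any defensible all-cause estimate.

```python
# Hypothetical per-cause risk estimates, elicited one vivid story at a time.
vivid_causes = ["heart attack", "car crash", "house fire", "stroke", "fall"]
felt_risk_per_cause = 0.001      # made-up per-day estimate after each story

implied_total = sum(felt_risk_per_cause for _ in vivid_causes)
sane_total = 0.00005             # a made-up all-cause daily risk, for contrast

# The "felt" total implied by the stories is 100x the sanity-check figure,
# even though each cause individually should be a fraction of the total.
print(f"Implied total from vivid causes: {implied_total:.4f}")
print(f"Sanity-check all-cause total:    {sane_total:.5f}")
```

This is why the right move is to estimate the total risk directly and then apportion it among causes, rather than estimating causes one story at a time.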

Now, you may make the omniscience counterargument from before: who is to say that your baseline is any good? Maybe you thought the risk was zero, but on second thought it's actually nonzero. But I would argue that the way to fix a fault is by doing the right thing, not a different wrong thing. You say "Wow, that is scary. But what's the actual risk, in numeric terms?", because if you don't trust yourself to estimate your total risk of death, then you probably shouldn't trust yourself to estimate your partial risk of death.

*I use infrequently used terms to try to make it clear that I am referring to precisely defined mathematical entities.

But when you go around using it to mercilessly pursue rationality with no regard for decorum, you end up doing poorly in real life.

Agreed that it's a good idea to be polite. Disagreed that the conjunction fallacy is just because people are polite. There are lots of experiments where people are just getting the formal math problem wrong or being primed into giving strange estimates.

But even if we suppose that the person is trying to 'steelman the question,' that is a dangerous thing to do in real life. "Did you get the tickets for Saturday?" She must mean Friday, because that's when we're going. "Yes, I got the tickets." Friday: "I'm outside the theater, where are you?" "At work; we're going tomorrow! You got the tickets for tomorrow, right? Because now the show is sold out."

You'd lose in the social exchange because you would have acted like a weirdo.

Yes, it's a good social skill to judge the level of precision the other person wants in the conversation. Responding to an unimportant anecdote with a "well actually" is generally seen as a jerk move. But if you're around people who see it as a jerk move to insist on precision when something meaningful actually depends on that precision, then you need to replace those people.

And if they were intentionally asking you a gotcha, and you skewer the gotcha, that's a win for you and a loss for them.

But if you're around people who see it as a jerk move to insist on precision when something meaningful actually depends on that precision, then you need to replace those people.

Huh? First, Linda's occupation in the original example is trivial, since I don't know Linda and could not care less about what she does for a living.

And "replacing" people is not how life works. To be successful, you'll need to navigate (without replacing) all types of folks.


[Meta] The Decline of Discussion: Now With Charts!

by Gavin · 2 min read · 4th Jun 2014 · 105 comments


[Based on Alexandros's excellent dataset.]

I haven't done any statistical analysis, but looking at the charts I'm not sure it's necessary. The discussion section of LessWrong has been steadily declining in participation. My fairly messy spreadsheet is available if you want to check the data or do additional analysis.

Enough talk, you're here for the pretty pictures.

The number of posts has been steadily declining since 2011, though the trend over the last year is less clear. Note that I have excluded all posts with 0 or negative Karma from the dataset.


The total Karma given out each month has similarly been in decline.

Is it possible that there have been fewer posts, but of a higher quality?

No; at least under initial analysis, the average Karma seems fairly steady. My prior here is that we're just seeing fewer visitors overall, which leads to fewer votes being distributed among fewer posts for the same average value. I would have expected the average Karma to drop more than it did; to me that suggests participation has dropped more steeply than mere visitation. Looking at the point values of the top posts would be helpful here, but I haven't done that analysis yet.

These charts are very disturbing to me, as someone who has found LessWrong both useful and enjoyable over the past few years. They raise several questions:


  1. What should the purpose of this site be? Is it supposed to be building a movement or filtering down the best knowledge?
  2. How can we encourage more participation?
  3. What are the costs of various means of encouraging participation: more arguing, more mindkilling, more repetition, more off-topic threads, and so on?


Here are a few strategies that come to mind:

Idea A: Accept that LessWrong has fulfilled its purpose and should be left to fade away, or allowed to serve as a meetup coordinator and repository of the highest quality articles. My suspicion is that without strong new content and an online community, the strength of the individual meetup communities may wane as fewer new people join them. This is less of an issue for established communities like Berkeley and New York, but more marginal ones may disappear.

Idea B: Allow and encourage submission of articles from elsewhere related to rationalism, artificial intelligence, transhumanism, and so on, possibly as a separate category. This is how a site like Hacker News maintains high engagement, even though many of the discussions are endless loops of the same debate. It can be annoying for the old-timers, but new generations may need to discover things for themselves. Sometimes "put it all in one big FAQ" isn't the most efficient method of teaching.

Idea C: Allow and encourage posts on "political" topics in Discussion (but probably NOT Main). The dangers here might be mitigated by a ban on discussion of current politicians, governments, and issues. "Historians need to have had a decade to mull it over before you're allowed to introduce it as evidence" could be a good heuristic. Another option would be a ban on specific topics that cause the worst mindkilling. Obviously this is overall a dangerous road.

Idea D: Get rid of Open Threads and create a new norm that a discussion post as short as a couple of sentences is acceptable. Open threads get stagnant within a day or two, and are harder to navigate than the discussion page. Moving discussion from the Open Threads to the Discussion section would increase participation if users could be convinced that it was okay to post questions and partly-formed ideas there.

The challenge with any of these ideas is that they will require strong moderation. 

At any rate, this data is enough to convince me that some sort of change will be needed to put the community on a growth trajectory. That is not necessarily the goal, but at its core LessWrong seems like it has the potential to be a powerful tool for spreading rational thought. We just need to figure out how to kick off its next evolution.