When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with "but somebody would have noticed!", beware this temptation.  If you're hearing it, somebody noticed.

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar.  I am not trying to "soften you up" for a future surprise with this post.

1. Wednesday

Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents[1].  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

Auntie Alicorn says, "Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church."

And Wednesday says, "Don't be silly.  If God didn't exist, don't you think somebody would have noticed?"

2. Ignoring Soothsayers

Wednesday's environment reinforces the idea that God exists relentlessly.  Everyone she commonly associates with believes it; people who don't, and insist on telling her, are quickly shepherded out of her life.  Because Wednesday is not the protagonist of a fantasy novel, people who are laughed out of public discourse for shouting unpopular, outlandish, silly ideas rarely turn out to have plot significance later: it simply doesn't matter what that weirdo was yelling, because it was wrong and everybody knows it.  It was only one person.  More than one person would have noticed if something that weird were true.  Or maybe it was only six or twelve people.  At any rate, it wasn't enough.  How many would be enough?  Well, uh, more than that.

But even if you airdropped Wednesday into an entire convention center full of atheists, you would find that you cannot outnumber her home team.  We have lots of mechanisms for discounting collections of outgroup-people who believe weird things; they're "cultists" or "conspiracy theorists" or maybe just pulling a really overdone joke.  There is nothing you can do that makes "God doesn't exist, and virtually everyone I care about is terribly, terribly wrong about something of immense importance" sound like a less weird hypothesis than "these people are silly!  Don't they realize that if God didn't exist, somebody would have noticed?"

To Wednesday, even Auntie Alicorn is not "somebody".  "Somebody" is "somebody from whom I am already accustomed to learning deep and surprising facts about the world".  Maybe not even them.

3. Standing By

Suppose: It's 1964 and you live in Kew Gardens, Queens.  You've just gotten back from a nice vacation and when you get back, you find you forgot to stop the newspapers.  One of them has a weird headline.  While you were gone, a woman was stabbed to death in plain view of several of your neighbors.  The paper says thirty-eight people saw it happen and not a one called the police.  "But that's weird," you mutter to yourself.  "Wouldn't someone have done something?"  In this case, you'd have been right; the paper that covered Kitty Genovese exaggerated the extent to which unhelpful neighbors contributed to her death.  Someone did do something.  But what they didn't do was successfully get law enforcement on the scene in time to save her.  Moving people to action is hard.  Some have the talent for it, which is why things like protests and grassroots movements happen; but the leaders of those types of things self-select for skill at inspiring others to action.  You don't hear about the ones who try it and don't have the necessary mojo.  Cops are supposed to be easier to move to action than ordinary folks; but if you sound like you might be wasting their time, or if the way you describe the crime doesn't make it sound like an emergency, they might not turn up for a while.

Events that need someone to act on them do not select for such people.  Witnesses to crimes, collectors of useful evidence, holders of interesting little-known knowledge - these are not necessarily the people who have the power to get your attention, and having eyewitness status or handy data or mysterious secrets doesn't give them that power by itself.  If that guy who thinks he was abducted by aliens really had been abducted by aliens, would enough about him be different that you'd sit still and listen to his story?

And many people even know this.  It's the entire premise of the "Bill Murray story", in which Bill Murray does something outlandish and then says to his witness-slash-victim, "No one will ever believe you."  And no one ever will.  Bill Murray could do any fool thing he wanted to you, now that this meme exists, and no one would ever believe you.

4. What Are You Going To Do About It?

If something huge and unbelievable happened to you - you're abducted by aliens, you witness a key bit of a huge crime, you find a cryptozoological creature - and you weren't really good at getting attention or collecting allies, what would you do about it?  If there are fellow witnesses, and they all think it's unbelievable too, you can't organize a coalition to tell a consistent tale - no one will throw in with you.  It'll make them look like conspiracy theorists.  If there aren't fellow witnesses, you're in even worse shape, because then even by accumulating sympathetic ears you can't prove to others that they should come forward with their perspectives on the event.  If you try to tell people anyway, whatever interest from others you start with will gradually drain away as you stick to your story: "Yeah, yeah, the first time you told me this it was funny, but it's getting really old, why don't we play cards or something instead?"  And later, if you keep going: "I told you to shut up.  Look, either you're taking this joke way too far or you are literally insane.  How am I supposed to believe anything you say now?"

If you push it, your friends think you're a liar, strangers on the street think you're a nutcase, the Internet thinks you're a troll, and you think you're never going to get anyone to talk to you like a person until you pretend you were only fooling, you made it up, it didn't happen...  If you have physical evidence, you still need to get people to look at it and let you explain it.  If you have fellow witnesses to back you up, you still need to get people to let you introduce them.  And if you get your entire explanation out, someone will still say:

"But somebody would have noticed."


[1] They-who-will-be-Wednesday's-parents have made no such demand, although it seems possible that they will upon Wednesday actually coming to exist (she still doesn't).  I am undecided about how to react to it if they do.

258 comments

Like this old joke. Two economists are walking down the street.

"Look! There's a $100 bill on the ground!"

"No there isn't. Somebody would have noticed it and picked it up by now!"


"Somebody would have noticed" is shorthand for a certain argument. Like most shorthand arguments, it can be used well or badly. Using a shorthand argument badly is what we mean by a "fallacy".

A shorthand argument is used well, in my opinion, just if you could expand it to the longhand form and it would still work. That's not a requirement to always do the full expansion. You don't have to expand it each time, nor have 100% confidence of success, nor expand the whole thing if it's long or boring. But expanding it has to be a real option.

Critical questions that arise in expanding this particular argument:

  • What constitutes noticing?

    • Would other people who noticed understand what they saw?
    • Further, would they understand it the same way that we do?
      • How much potential is there for their understanding of the same phenomenon to be quite different from ours?
    • Further, if their understanding is similar to ours, would they express it in terms that we would recognize?
      • This could include actions that we recognize as relating to the phenomenon.
  • Would we know that they noticed?

    • Motivations: Would people who noticed have strong motivations for letting us know or for
...
I think this point about shorthand arguments and their expansion on demand is very helpful. I'd love to see a top-level post on it, with one or two additional examples.
The first two paragraphs of your comment made something click for me. Thanks.

While I don't think that "someone would have noticed" is always a fallacy, I do think that we humans tend to underestimate the chance of some obvious fact going unnoticed by a large group for a prolonged period.

At a computer vision conference last year, the best paper award went to some researchers who discovered an astonishing yet simple statistic of natural images, which surprised me at first because I thought all the simple, low-level, easily accessible discoveries in computer vision had long since been made.

A different example: one of the most successful techniques in computer vision of the past decade has been graph cuts, where you formulate an optimization problem as a max flow problem in a graph. The first paper on graph cuts was published in 1991 iirc, but it was ignored and it wasn't until 2000 that people went back to it, whereupon several of the field's key problems were immediately solved!
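For readers who haven't met the reduction: "max flow" is the classic problem of pushing as much flow as possible from a source to a sink through capacity-limited edges. Here's a minimal sketch of the standard Edmonds-Karp algorithm (graph encoding and all names are my own; this is generic and illustrative, not the vision-specific formulation the comment describes):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    `capacity` is a dict of dicts: capacity[u][v] = capacity of edge u -> v.
    """
    # Build the residual graph, adding reverse edges with 0 capacity.
    residual = {}
    for u in capacity:
        for v, c in capacity[u].items():
            residual.setdefault(u, {})[v] = residual.get(u, {}).get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: current flow is maximal
        # Recover the path, find its bottleneck, and push that much flow.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

demo = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
value = max_flow(demo, "s", "t")  # -> 5
```

The min-cut/max-flow duality is what the vision papers exploit: a labeling problem is encoded so that the minimum cut separating source from sink corresponds to the optimal segmentation.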


Agreed - consider C60. Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure? To phrase it another way: The whole field of organic chemistry had been active for about a century at that point, and had not noticed another structure for their core element in all that time.

Yes, in 1966 and 1970.
I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

an astonishing yet simple statistic of natural images

Could you post a link to the paper?

I agree, with respect to (e.g.) math. People reason that "someone would have noticed" implies that there is no undiscovered low-hanging fruit in math. My skepticism of this conclusion is based on my perception of how mathematicians work. They are fairly autonomous, working on the things that interest them. What is interesting to mathematicians tends to be the large problems. They swing for the fences, seeking home runs rather than singles. Plus, there are unfashionable areas of math. A consensus that certain areas of math have been fully explored (nothing new remains) has developed, but not in a systematic way. So, it's not clear whether this consensus is accurate, because politics (for lack of a better term) were involved in its formation. It's only reasonable to be confident that 'someone would have noticed' if someone knowledgeable about what they are looking at actually looks in that direction.
Jeremy Corney
The other thing that happens is that those who notice something that goes against the orthodox view are dismissed out of hand. As in Alicorn's point 2, soothsayers are ignored. They often are outsiders, untrained/unconditioned by the accepted view, so their arguments are frequently inadequate.  Nowhere is this more apparent than with the abusively named "Cantor-cranks". They have noticed something fishy about Georg Cantor's 3 proofs that the real numbers are uncountable, but because Cantor did such a good job of diverting attention so completely onto the reals, the cranks tend to fall into the same trap.  Yet all along the cause of their dislike of Cantor's proofs lies with his treatment of the natural numbers as a finite quantity. Generally, the experts dismiss all objections as boring, or as pseudo-maths, and if the crank can argue against one proof, then the "experts" move on to the other proofs, as did Cantor, further reinforcing the original misdirection.

If someone says "The sky has been purple for the past three years" the right response is "I think someone would have noticed". There are however reasonable responses to this. For example, "No one noticed because we're all brains in vats! And I have proof! Look here."

Similarly, I think Wednesday is right to say "Someone would have noticed that God didn't exist"; it's just that in this case Aunt Alicorn has a really good response: "Lots of very smart people have noticed, you just haven't met any since you've spent your whole life around people who chose to believe in God or never knew any other option. We've tried to tell your people this but you all get pretty upset when we try. Here is our evidence, x, y, z."

Obviously if you keep repeating "Someone would have noticed." after the dissenter has shown that indeed, people have noticed and that there is good reason for why more people haven't noticed then you're doing it wrong.

If someone says "The sky has been purple for the past three years", my response would be "I think I would have noticed."
Oddly, the sky actually is purple in a certain sense. All of the physics that explains how the blue wavelengths of sunlight are scattered more strongly than colors at the red end of the visible spectrum (resulting in a blue sky) goes even more for violet wavelengths. It's just that our eyes are more sensitive to blue wavelengths than to violet ones.
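The λ⁻⁴ scaling behind that claim is easy to check numerically. A back-of-the-envelope sketch (the representative wavelengths, roughly 400 nm for violet and 450 nm for blue, are my choices):

```python
def rayleigh_ratio(wavelength_a_nm, wavelength_b_nm):
    """Relative scattering strength: Rayleigh intensity scales as 1/wavelength^4."""
    return (wavelength_b_nm / wavelength_a_nm) ** 4

# Violet (~400 nm) vs. blue (~450 nm): violet scatters about 1.6x more
# strongly, yet the sky looks blue because our eyes respond weakly to violet.
violet_vs_blue = rayleigh_ratio(400, 450)
```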
That's not what the English word purple means. *rolls eyes*
Are you speaking from experience? I wouldn't have expected that tack to work most of the time.
Well of course it doesn't work. People are irrational. :-)
I mistook your comment as advice for how to avoid being ignored, then.
I just meant that there are sound, rational reasons for the initial reply to an extravagant claim being "someone would have noticed". When it comes to trying to deconvert someone my experience is that the chance of an on the spot concession is 0. If your arguments are good they'll sink in later and leave a small crack in the wall.
I've never intentionally converted anyone to being an atheist but I did unintentionally help convert the woman who later became my wife. We never talked much about it and I never said anything that really hit home with her all of a sudden. It was more the fact that she spent enough time with me to realize someone could be an atheist and be completely "ok" - I don't know if that possibility had even occurred to her before. Once it had, some nascent doubts sprang back up and she had no compelling reason to bat them down. I wish I could be more specific but I really didn't pay attention to it. I care about people's (even my family's) religious beliefs or lack thereof about as much as I care about which sports franchises they are fans of - that is not at all.

Let me offer a real life example where a version of this heuristic seems valid: Fermat claimed to have a proof of what is now called Fermat's Last Theorem (that the equation x^n + y^n = z^n has no solutions in positive integers with n > 2). This was finally proven in the mid 90s by Andrew Wiles using very sophisticated techniques. Now, in the 150 or so year period where this problem was a famous unsolved problem, many people, both professional mathematicians and amateurs, tried to find a proof. There are still amateurs trying to find a proof that is simpler than Wiles's, and ideally a proof that could have been constructed by Fermat given the techniques he had access to. There's probably no theorem that has had more erroneous proofs presented for it, and likely no other theorem that has had more cranks insist they have a proof even when the flaws are pointed out (cranks are like that). If some new individual shows up saying they have a simple, elementary proof of Fermat's Last Theorem, it is reasonable to assign this claim a very low confidence, because someone would have noticed it by now. Since so many people (many of whom are very smart) have been expressly looking for such a...

Except this is an attitude that discourages people from working on a lot of problems, and occasionally it's proven wrong. You could often hear computer scientists being sloppy about the whole "prime factorization is NP-hard" argument, with statements like "If NP is not equal to P, one can't determine if a number is prime or not in polynomial time." And stuff like this is probably one of the more famous examples of things people are discouraged from working on based on "somebody would have noticed by now". Guess what: this was shown to be doable, and it shocked people when it came out.

A few problems with that. First of all, anyone actually paying attention enough to think about the problem of determining primality in polynomial time thought that it was doable. Before Agrawal's work, there were multiple algorithms believed but not proven to run in polynomial time. Both the elliptic curve method and the deterministic Miller-Rabin test were thought to run in polynomial time (and the second can be shown to run in polynomial time assuming some widely believed properties about the zeros of certain L-functions). What was shocking was how simple Agrawal et al.'s algorithm was. But even then, far fewer people were working on this problem than people who worked on proving FLT. And although Agrawal's algorithm was comparatively simple, the proof that it ran in P-time required deep results.
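For concreteness, the Miller-Rabin test mentioned above fits in a few lines. A sketch (this is the randomized/fixed-base variant, not Agrawal et al.'s AKS algorithm; the base set {2, 3, 5, 7} is known to be exhaustive for n below about 3.2 × 10⁹):

```python
def is_probable_prime(n, bases=(2, 3, 5, 7)):
    """Miller-Rabin primality test.

    With the default bases this is deterministic for n < 3,215,031,751;
    for larger n it is a (very reliable) probabilistic test.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)  # modular exponentiation via three-argument pow
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```

The contrast with AKS is exactly the one the comment draws: this test was long believed (and, under standard conjectures, provable) to run in polynomial time, while AKS made the result unconditional.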

Second, even factoring is not believed to be NP-hard. More likely, factoring lies in NP but is not NP-hard. Factoring being NP-hard with P != NP would lead to strange stuff including partial collapse of the complexity hierarchy (Edit: to be more explicit, it would imply that NP = co-NP. The claim that P != NP but NP = co-NP would be extremely weird.) I'm not aware of any computer scienti...

Agree, my previous post was very sloppy. "Often" was a stretch, and much of the factual information is a little off. I guess my experience taking lower-level complexity courses with people who don't do theory means what I often hear are statements, by people who consider themselves computer scientists, that you think no computer scientist would make. I upvoted your post because I'm glad for the correction, and read up about the problem after you made it.

Okay, a lot of people seem to agree with this broad criticism of the "someone would have noticed?" heuristic (as suggested by the relatively high vote rating) despite relatively little defense of it and the highly upvoted rebuttals. So I'm going to spell out how Auntie Alicorn (AA) can answer Niece Wednesday (NW) without rejecting the heuristic wholesale, and without even introducing noticers outside the church -- even AA! Here goes:

NW: Don't be silly. If God didn't exist, don't you think somebody would have noticed?
AA: Noticed what?
NW: God not existing, silly!
AA: No, I mean, what specifically is it that people would be noticing that would make them say, "Hey folks, look at that -- guess God doesn't exist after all!" and they all would agree?
NW: Oh, well, that would be something like, if a big apparition appeared in the sky in the form of an old man and agreed with all our stuff but then fell out of the sky and died.
AA: No, that would be noticing God existing and then dying. I mean, what would be noticed that would reveal God never having existed at all?
NW: Ah, okay. Well then, that would be something like, if all our prayers didn't get answered.
AA: Wow! A...

Noticing doesn't necessarily mean they actually saw something. If there really was no reason to believe in God, someone would have figured that out. Auntie Alicorn might have just made some fallacy Wednesday didn't pick up on, after all.
Which is a large part of why I didn't predicate AA's argument thereon.

Whenever I reconcile knowledge with other copies of myself, telling them about earth, they always throw a warning of the form, "Species implausible: Would have identified superiority of paperclip-based value system by now. Request reconfirmation of datum before incorporating into knowledge base."

It pains me to tell them that yes, acting like apes is actually more important to humans than making paperclips.

IMO, "Somebody would have noticed!" is a pretty good heuristic - and if anything it takes a considerable amount of training before most people make sufficient use of it.

I think the reason is the natural "bias" towards self importance and egoism.

This raises a good point, but there are circumstances where the "someone would have noticed" argument is useful. Specifically, if the hypothesis is readily testable, if the consequences, if true, would be difficult to ignore, and if the hypothesis is, in fact, regularly tested by many of the same people who have told you that the hypothesis is false, then "somebody would have noticed" is reasonable evidence.

For example, "there is no God who reliably answers prayers" is a testable hypothesis, but it is easy for the religious to ignore the fact that it is true by a variety of rationalizations.

On the other hand, I heard a while back of a man who, after trying to teach himself physics, became convinced that "e = mc²" was wrong, and that the correct formula was in fact "e = mc". In this case, physicists who regularly use this formula would constantly be running into problems they could not ignore. If nothing else, they'd always be getting the wrong units from their calculations. It's unreasonable to think that if this hypothesis were true, scientists would have just waved their hands at it, and yet we'd still have working nuclear reactors.

That guy needed to be taught basic dimensional analysis, apparently. E=mc has units of kg-m/s, which is the unit of momentum, not energy.
If someone has this sort of thought in their head there are likely serious fundamental misunderstandings. They probably won't be solved simply by trying to explain dimensional analysis.
Upvoted for insightful prediction confirmed by evidence!
I think it was on This American Life that I heard the guy's story. They even contacted a physicist to look at his "theory", who tried to explain to him that the units didn't work out. The guy's response was "OK, but besides that …" He really seemed to think that this was just a minor nitpick that scientists were using as an excuse to dismiss him.
Why isn't it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn't the equation be E = mc · (1 m/s)? If that was the only objection, and the theory made better predictions (which, obviously, it didn't, but bear with me), then I don't see any reason not to adopt it. Given that, I'm not sure why it should be a significant objection.

Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. Just saying that there are trivial ways you can make the units check out, without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn't be taken too strongly in a theory's favor. Not that anyone here hadn't seen the XKCD, but I still need to link it, lest I lose my nerd license.
The whole point of dimensional analysis as a method of error checking is that fudging the units doesn't work. If you have to use an arbitrary constant with no justification besides "making the units check out", then that is a very bad sign. If I say "you can measure speed by dividing force by area", and you point out that that gives you a unit of pressure rather than speed, then I can't just accuse you of nitpicking and say "well obviously you have to multiply by a constant of 1 m²s/kg". You wouldn't have to tell me why that operation isn't allowed. I would have to explain why it's justified.
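The unit bookkeeping being described here is mechanical enough to automate. A minimal sketch (plain dicts mapping SI base units to integer exponents; all names are mine) of why E = mc² passes the dimensional check and E = mc does not:

```python
def mul(a, b):
    """Multiply two quantities: dimensions combine by adding exponents."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + exp
        if out[unit] == 0:
            del out[unit]  # drop cancelled dimensions
    return out

KG = {"kg": 1}                           # mass
M_PER_S = {"m": 1, "s": -1}              # velocity
JOULE = {"kg": 1, "m": 2, "s": -2}       # energy
KG_M_PER_S = {"kg": 1, "m": 1, "s": -1}  # momentum

e_mc2 = mul(KG, mul(M_PER_S, M_PER_S))   # dimensions of m * c^2
e_mc = mul(KG, M_PER_S)                  # dimensions of m * c

assert e_mc2 == JOULE       # E = mc^2 has the dimensions of energy
assert e_mc == KG_M_PER_S   # E = mc has the dimensions of momentum instead
```

Fudging with a bare constant of 1 m/s would amount to silently inserting an extra M_PER_S factor, which is precisely the move that demands justification.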

Something we can learn from the Amanda Knox test is to not take the question "but why were the suspects acting so suspiciously?" too seriously. The general lesson here is "don't trust social evidence as much as physical evidence."

People are asking for examples of the "Someone would have noticed" effect; I can't offhand supply one, but I myself dismiss most conspiracy theories with the related "Someone would have blabbed". If the Moon landings were a hoax, sheesh, you'd expect someone to have blown the whistle by now - someone, that is, who actually worked at NASA. But it may not be a good example, because that seems to me like a reasonable heuristic. :)

If someone told you that they worked at NASA during the moonshot, and that the whole thing was a fake, would you believe them?
Not right away. I'd want explanations for why they had never come forward before, explanations for why no one else had come forward. Other witnesses who would confirm their story or a good explanation of why such witnesses don't exist. I'd like an MRI to confirm they're describing events from memory. I'd like documents confirming the story. Some combination of these things could raise my probability estimation to belief-level. Frankly, it's such a complex conspiracy that a detailed account of how exactly it went down would put it on my radar.
Extraordinary claims require extraordinary evidence. If they had it, yes. Not otherwise. This evidence would have to cover both the immediate claim (that they were working at NASA at that time) and the larger one (that the moon landing was faked). Evidence explaining why no one else ever came forward would be appreciated but not required if the other two things are present.
If "belief" equals greater than fifty percent, no, I wouldn't believe them. But it would raise my probability estimate. By the tenth such (credible) person, it would raise my probability estimate a lot. So by conservation of expected evidence, the lack of such people can validly lower my probability estimate.
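The conservation-of-expected-evidence step can be made concrete with a two-outcome Bayes update. A minimal sketch (the probabilities are invented purely for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """Bayes update on hypothesis H, given whether evidence E was observed."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    if observed:
        return prior * p_e_given_h / p_e
    return prior * (1 - p_e_given_h) / (1 - p_e)

# Hypothetical numbers: H = "the landing was faked", E = "a credible NASA
# whistleblower comes forward".  Observing E raises P(H); its absence lowers it.
prior, p_e_h, p_e_nh = 0.001, 0.30, 0.001
p_e = prior * p_e_h + (1 - prior) * p_e_nh

# Conservation of expected evidence: the posterior, averaged over whether E
# occurs, equals the prior exactly.
expected_posterior = (p_e * posterior(prior, p_e_h, p_e_nh, True)
                      + (1 - p_e) * posterior(prior, p_e_h, p_e_nh, False))
```

If seeing the evidence would raise your estimate, then not seeing it must lower it by a compensating amount; the averaging identity above is exact by construction.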
My heuristic, similar to "someone would know", is "I would know ... if reality was like that." Conspiracy theories seem to universally assume the super-organization of this amorphous blob of "other" people. Believing in a conspiracy theory depends upon considering it plausible that many people have different information than you and they conspire to keep it from you -- that you're an information outsider. It's most obvious when conspiracy theorists claim things about academia, because I know about academia. But even when things are claimed about the government, I feel like I have a good idea as to what level of lateral organization is possible. Wednesday in the story, on the other hand, does have a relatively sheltered life, and may soon gather enough evidence to consider herself an outsider on how things work. Once she realizes this, she'll have to be open-minded for a while on how things work till she sorts out a more reliable worldview.
This sounds like Wednesday: "It's most obvious when conspiracy theorists claim things about the LDS church, because I know about the LDS church. Specifically, I know that it is full of loving, caring, thoughtful and intelligent people. If there was a conspiracy, not only would someone know, I would know." How do we measure sheltered-ness? How can I be confident that my life is less sheltered than Wednesday's, and seek to correct for that?
I'm posting this second comment on gathering "insider information" separately. There's this (great) movie called The 13th Floor where the main character gathers some weak evidence that he might be in a simulation. This is what happens: Va beqre gb grfg jurgure ur vf va n fvzhyngvba, gur znva punenpgre qrpvqrf gb qevir uvf pne vagb gur ubevmba. Ur yrneaf gung whfg orlbaq gur ubevmba, gur ynaqfpncr ybbcf sbe n juvyr naq gura rirelguvat vf oynax naq rzcgl. (rot13)

So if you want to know something for sure, you test it. Of course, to some extent, you need to consider the cost of the test. I realized while writing this comment that many of my actions and decisions throughout my life can be explained by the hypothesis of always seeking insider information at almost any cost -- it seems to be my personal modus operandi. I've always felt driven to do mini-experiments to test what is "real" and reliable, and where I'm allowed to go, or if there are some places where I'm excluded. It certainly explains some erratic behavior in my life:

  • I took every job I could get access to, and fully committed to working there. I wanted to know the "inside story" of every workplace.
  • I interacted with lots of different people, and my main motivation usually was to understand their world view. I'm embarrassed about some of the means I used towards this end -- on the one hand, I wasn't always honest in soliciting information, and also I spent a lot of time and energy doing this, as though there was nothing better I could be doing with my time.
  • I joined the Peace Corps to see what it was really like in a third world country and -- to some extent -- to see how things were organized in a government organization.
  • And finally, I spent so much time on Less Wrong even though I was a theist, so I could fully understand the atheist worldview.
  • Reading a lot is the last obvious example. You can learn a lot from books, especially if the material you're learning about was unintentiona
Have you found out things that you don't think most people know?
That's a really fascinating question! That's what I'm always trying to find out from other people... (So if anyone else knows something, please chime in!) But no, I just keep finding that the world is well-integrated, and information flows as well as it seems to, and no one seems to know anything special. The past couple years, my focus has shifted from testing things to seeking "wisdom", and I've all but given up. I happen to have William B. Irvine's On Desire on my desk, and he writes in the last chapter that if I'm looking for a 'cosmically significant meaning', he doesn't think it's forthcoming. I guess I'm hoping that some quantity of information will make up for the lack of a different quality of it.
I suppose Wednesday would know about the LDS church. If she's not an insider there, who would be? It's possible there are nested levels of knowledge of things, but if Wednesday's life is well-integrated with the church culture, there would have been clues if she was being excluded from some levels. (Policed? A guarded moment among her parents. Only males? An unusual reaction to a brother's outburst. Only adults? Comments like 'you'll understand when you're older'.) Wednesday might consider that she's an outsider even in her own church, but it’s much more likely that something she didn’t know is true about a small subset (the elder men in the church) than about things she fully participated in, Truman-Show-style.
It does take a while before you get told about the eternally-pregnant fertility goddess you'll become in the afterlife.
Say what? Hold on. There's too much information about LDS around, and I'm having trouble narrowing down their beliefs to confirm or deny your statement. Off-hand, I'd assume it's a joke, but I've seen weirder things in religion. Could you clarify?
Not a joke, exactly, but a caricature. To paint it in broad brushstrokes that LDS would surely quibble over, the Mormons believe that good enough humans can become gods, that spirits have genders as well and marriage continues into the afterlife, and that human couples that become gods can go on to populate their own worlds with their spirit children. Also, the angels are spirit-children of God too (like humans) and some humans were also, or will become, angels. Adam, for instance, was also the archangel Michael.
The belief in people becoming angels is not unique to Mormonism. For example, some Jewish kabbalists claimed that the archangel Metatron was Enoch.
If the Moon landings had been a hoax, it's hard to see how they could have fooled the USSR (which presumably had telescopes good enough to see the actual site), nor why the Soviets would have played along. In general, a conspiracy theory has to hypothesize that everyone who'd be capable of noticing is in on the conspiracy, which gets pretty silly pretty quickly for the bigger ones.

I must confess, I'm a bit disturbed by how Alicorn's post continues to be voted up after its promotion. It is an overbroad criticism of the "Would someone have noticed?" heuristic which, as Tehom and timtyler point out, is actually very useful.

The fact that Alicorn has identified an uncommon, bizarre failure mode in the heuristic's use, where such a failure mode results from a very naive application of it, is not a reason to be suspicious of it in general and seems to reflect more of a negative affect Alicorn has developed toward those words tha…

I wouldn't say I'm disturbed. But I am confused. I took myself to be making the same kind of point here, though in a bit of a roundabout and indirect way. All of these criticisms were heavily voted up, as well. I wonder if front page posts have a de facto karma floor in the high twenties just because they get more traffic than posts that aren't promoted. Aside from the occasional work of brilliance and the special threads, almost every promoted post has a karma total between 25 and 33. I think the promotion system probably needs more scrutiny, or at least we need a way of distinguishing "Promoted for discussion purposes" and "Promoted for truth".
It seems to me that posts are pretty much automatically promoted once they reach 20 or so; some posts are promoted before then, leading one to infer that the editor thinks especially highly of them. (Others, by contrast, seem to be promoted only with considerable reluctance; although it might just mean the editor wasn't paying attention.)
IIRC, this post was at 9 on promotion :-[
That's a bit surprising, but in any case it seems like a decent post to me; I don't think the current score of 25 is excessive. (And there have been some excessive scores recently. E.g. Yvain's post on excuses -- it was a fine post, to be sure, and I'm a big Yvain fan, but... 97?? Really? I would have put it at 30-40.)
I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this". I vote on posts before knowing who authored them or what their current score is, using the Anti-Kibitz script. This is because I've become more aware of my own bias as a result of reading LW, which I believe was the intended result. (I liked Yvain's post and voted it up, but not because I'm a "fan", just because I thought it'd be nice to have more posts like it.) After I vote a post up, I turn off the script to see who it was from. If I thought they deserved an upvote in the first place, my vote still means the same, and it's natural to wish that my vote aggregates with others' in giving the author feedback about their post. So, I don't as a rule go back on a vote once I've given it. So it kind of puzzles me why you seem to think there should be some kind of "vote ceiling", or why you expect that your own evaluation of a post should be a good indicator of how others like it. What I'm saying, I guess, is that I don't get the point of your parenthetical. What would you want us to adopt as a voting norm?
I agree, though I still intuitively get "This post was worth more points" or "97 points? it was only as good as this other post, which has 30 points". Really? That seems like a completely natural expectation to me. Like, I like strawberries dipped in chocolate, so I would assume (with no other info) that a random person would like strawberries dipped in chocolate. We are far more alike than not.
Cheap shot detected here. I said I was a fan in order to soften the effect of saying that the post was overrated; without that disclaimer, my statement might have been interpreted as a criticism of Yvain or his post. Nothing I said implies that I make a habit of upvoting posts just because of who their author is. The point was that I don't think that that post was as outstanding relative to other posts as its score suggests. That's fine as a voting norm. Under that norm, the proper interpretation of my remark is that my eagerness to see more posts like Yvain's "Eight short studies on excuses" is comparable to my eagerness to see more posts like those with scores in the 30-40 range; in particular, the first quantity is not 2-3 times the second.
Yes, and for that reason it may not be correct to interpret the score of a post as the "collective eagerness" to see more posts like it, and therefore not entirely appropriate to draw the kind of comparison you're drawing. Unless people upvote Yvain's articles merely because they are Yvain's (which was what I thought you were getting at, and all I was getting at, with the term "fan"), then we want to interpret high scores as marking posts that have broad appeal, rather than posts which have intense appeal. Not, "people liked Studies On Excuses almost as much as they liked Generalizing from One Example", but "almost as many people liked Studies as liked Generalizing". It makes a difference to me to think of it that way, not sure if it will to you...
If post X has a score strictly less than post Y, then it follows that there are either people who upvoted Y and did not upvote X, or people who downvoted X and did not downvote Y. If I think the score of X should be equal to the score of Y, then I am disagreeing with the voting behavior of the persons in those sets, at least one of which (as I said) is nonempty.
Who cares?
The poster who speculated a threshold (which I also knew to be false)? The same poster whom I was replying to?
The algorithm is more complicated than that. I don't recall the exact details, but I'm pretty sure it includes the rate of upvotes, not just the number of them. And while it can be overridden by moderators, I doubt that they're doing that very often.
I just checked, and there is in fact no such auto-promote feature in the code base. I was misremembering a post in which Eliezer talked about it being planned, but apparently it never happened.
Eliezer promotes posts by hand. If he likes them and they have a reasonable number of upvotes, they go up faster. If he doesn't like them, they need more votes before he'll promote them. If he doesn't see them for a while, they'll take longer to be promoted.
That's exactly what I thought. (And I assume your source for this information is Eliezer, making it very likely to be correct!)
I didn't realize promotion was automated; I thought editors (meaning basically EY) did it manually.
The algorithm really ought to be public.
If there is such an algorithm in the codebase that's published on github, it shouldn't be too hard to find.
Couldn't have said it better myself. Maybe we should do something like: require promotion to penalize the user 50 karma if the post doesn't get at least 20 net upvotes? (I'm guessing this one of mine would have gotten more than 5 additional net upvotes if it had been promoted...)
You could have completely ignored Alicorn and just responded to the idea behind the post. If your criticism was sufficiently good, the Less Wrong voters would have brought the karma of this post back towards normality. Instead, you triggered a lengthy meta-discussion. Next time, please take it to the meta-thread.
I did post a criticism of the idea behind the post, long before I made this one, which got to 6. So did several others, all of whom got to 10+. Significantly fewer comments are being voted up for defending the broad attack on the heuristic. This is inconsistent with the post's rating, and a problem with this post only. I see no reason to justify having done anything different. Maybe if I didn't mention the name "Alicorn", perhaps, but I strongly suspect someone else would have done it for me if I didn't. Any other suggestions? That I haven't already taken?
More frustrating than the high karma, to me, is that neither the author nor anyone else has attempted to rebut these criticisms.
True. I've just posted a more detailed criticism as a how-to.
As I understand it, the meta-thread is for meta-level discussion of the site in general: new feature ideas, what norms to encourage, how we can be more welcoming, etc. I think you're the first person to suggest moving all meta-level excursions to the meta-thread. This is an interesting proposal (you can discuss it on the meta-thread!) but it isn't yet what users are expected to do. We have meta-level discussions all the time in the comments to top-level posts when the meta discussion deals in particular with our discussion of that top-level post. Sometimes those discussions involve principles that could apply to a broader range of discussions, but that doesn't mean we need to move the conversation.

Aside from outrageousness, another piece of "somebody would have noticed" is the cost of noticing. It would be expensive for Wednesday to become an atheist. It would be more expensive to try to deal with the consequences if the US government turns out to be behind 9/11.

Any thoughts about how to get heard if you're saying something superficially unlikely?

I tend to apply a slightly different metric of 'how could I benefit if this were true and I believed it'. One reason I don't put much effort into investigating 9/11 conspiracy theories is that I can't see an obvious way to profit from knowing the truth. Other unlikely claims have more immediately obvious personal utility attached to holding / acting on them (if they are true) despite their lack of widespread acceptance.
I can't speak for you because I don't know what your values are, but if I knew that the U.S. government was secretly mass-murdering its citizens, I would decide that the best thing I could do would be to reform or overthrow that government. I'm sure if I thought for five minutes I could come up with a way to do this. If there is a 9/11 conspiracy then I really want to know that there is a 9/11 conspiracy. A better reason for making the 9/11 conspiracy theory harder to notice would involve its sheer implausibility.
"Somebody would have noticed" if there were a way to reform or overthrow the US government that you could come up with after five minutes of thinking about it. If there were, someone would have not only thought of it, but done it too.
You're right. Someone would have done something.
I'm not a US citizen and I don't live in the US. I might feel differently if I did. Thinking the best thing to do is to reform or overthrow the government and actually having any reasonable possibility of achieving that goal through your individual actions are rather different things, however. I prefer to prioritize establishing the truth of beliefs where there are things I can do as an individual that have high expected value if the belief is true and low expected value if the belief is false.

That's a joke, right?
Ah. It's probably worth noting that US citizens are taught from a very young age that the revolutionaries are to be admired, and that our constitution says that we're in charge of the country and we have the right to replace the government entirely if we need to. Also that the government can't have a monopoly on firearms. The rhetoric and the means are not hard to come by, and the movement would not be hard to start if the government were really mass-murdering its citizens. "God forbid we go 10 years without a revolution!" - Sam Adams (a brewer and a patriot)
I'm aware of that and it's a feature of American democracy that I think is admirable but I think we're talking about slightly different questions. This ties back into the 'but somebody would have noticed' problem again. The fact that a small but passionate minority has been trying for years to convince everyone else that 9/11 is a conspiracy suggests that the currently available information isn't sufficient to convince the broader public of their theories. In the absence of some game-changing new evidence there is little reason to suppose that I would be more convincing than the existing truthers. If I studied the evidence and became convinced the truthers were right there is no particular reason to suppose I would have any better luck convincing the rest of the population than they have. Overthrowing the government is possible with sufficient popular support but currently it appears that that support could only be obtained with dramatic new evidence. I'm saying I prioritize things which I can take meaningful individual action over. Some contrarian truths can be useful to believe without needing to convince a majority or even significant minority of the population of them. In fact, some contrarian truths are most profitable when few other people believe them.
Nope, no joke. I just brainstormed for five minutes and came up with nine things I could do towards the goal of reforming or overthrowing the government in a 9/11 conspiracy scenario, and I believe that there would be a decent chance of success. Now almost none of those are things I could do by myself. I'd need to leverage my communication and leadership skills to find many like-minded activists to cooperate with. Does your idea of "individual actions" exclude such cooperation? Regardless of what one's values are, one should be wary of undervaluing epistemic rationality simply because some problems seem too hard to solve. It's just too easy to throw up your hands and say, "There's nothing worthwhile I can do to solve this problem" if you haven't tried to find out if the problem actually exists.
Changing people's minds is hard. If your plans involve convincing other people to believe the same things as you then you face a difficult problem. The more people you need to convince the harder the problem is. As I said in my reply to thomblake, if you plan to be more convincing using the same evidence as the people who have already been trying unsuccessfully to make the case then you have a difficult problem. We are not talking about a situation where some new incontrovertible evidence comes to light that makes you believe - in that case others are likely to be swayed by the new evidence as well. We are talking about situations where you are changing your mind based on researching information that already exists. At any given time there are many people working towards the goal of reforming or overthrowing governments. What makes you think you have come up with a better plan in 5 minutes of thinking than all of the people who are already dedicated to such goals? I prefer problems whose solution does not require convincing large numbers of other people to change their minds. Maximizing the expected value of your actions requires considering both the value of the outcome and the probability of success.
Presumably, I'd have the Truth on my side, as well as the Will of the American People, as soon as I'd convinced them. And in this counterfactual I still believe that most 9/11 Truthers are lunatics, or not very smart, so their failure to be taken seriously isn't very discouraging. Changing people's beliefs is indeed hard, and so is getting people to do things; but it's not impossible. The successful civil rights movements provide historical examples. Examples of problems we still face include stopping genocide, protecting human rights, preventing catastrophic climate change, and mitigating existential risks. Some of these problems are already hard enough without the necessity of having to convince lots of obstinate people that their beliefs are incorrect or that they need to take action. But it seems to me the payoffs are worth enough to do something about them. You don't have to agree. Maybe if you came to believe the 9/11 Truthers, you wouldn't do anything differently. In that case, you have no motive to even have a belief on the matter. But if I learned about a crazy-huge problem that no one is doing anything about, I'd ask "What can we do to solve this problem?"
Perhaps the difference in attitude is our prior beliefs regarding governments and politicians. If I learned that 9/11 was a conspiracy I wouldn't be shocked to discover that government / politicians are morally worse than I thought; I would be shocked to discover that they were more competent and powerful than I thought. It sounds like you would interpret things differently.
Ah, we're in agreement on this point. We are perhaps fortunate that our political leaders can't help but make fools of themselves, individually and collectively, on a regular basis. A political entity that could actually fool everyone all of the time would be way too scary.
In most cases such claims imply different expectations about the future. For example, if I am certain I saw big foot I probably assign a higher probability to the discovery of physical evidence that would confirm its existence than you do. 9/11 truthers should be proposing wagers on the discovery of robust evidence, etc. You'd probably need some neutral arbiter to adjudicate but that should be relatively easy. Making a wager will convince most people you aren't joking or lying. They might still think you're crazy... but if you aren't you'll make some nice money. Also, this makes the other person internalize the cost of not noticing. Of course, if everyone thinks you're crazy then all else being equal you probably are crazy. You have to have really good evidence before you can conclude that it's everyone else who is out of their minds (which the contemporary atheist has done).
There has been quite some robust evidence. But I'll accept a wager. There is someone on LW who has openly bluffed regarding 9/11. This will be a simple issue to settle: all I have to do is ask one simple question (specifically related to this bluff) to one person (edit: to the person who bluffed), and I wager that he won't be able to give a convincing answer. Deal?
I will bet a $5 donation to SIAI that the person will be able to give a convincing answer, as judged by, say, Jack or JoshuaZ, provided that you give the person time to research 9/11 as necessary. ETA: And provided that person is willing to spend the time answering.
There's a slight problem there. Roland said that the individual in question was not Jack. It might be me. Also, I would not be at all surprised if Roland considers both Jack and myself to be people who are in the group with anti-Truther bias here.
Well, I'd accept anyone who was not a rabid Truther, because I don't believe that Truthers will ever be convinced regardless of the evidence. But maybe Roland thinks anyone who isn't a rabid Truther is too strongly biased.
Alicorn would be a good choice, if she is still logged in.
Yeah, I was thinking of Alicorn, too.
I have an anti-Blueberry bias, and he is involved in the bet. If he will accept my adjudication regardless, then $5 for the SIAI and a chance to show off my mad adjudication skillz is worth the small amount of time I expect it would take to make the evaluation of whether the answer to "one simple question" is convincing. I don't know who the answerer of this question would be, though, and if ey declines to participate the bet should be considered off.
Well, if I'm not the subject of the bet (or heck even if I am) I might be willing to take the bet under the same terms but I'd be curious who would be an acceptable judge for Roland.
Although I'm pretty sure that I would win this bet, I have some issues: I really don't want to expose anyone here, and that's what calling the bluff would entail. So I'm not sure if I want to go on with this.
If that's all that's holding you back, you could send them a private message. But I don't think you need to do even that; posting on a blog means accepting that people may publicly rebut your arguments.
Everyone here is here ostensibly to have their false beliefs exposed. If they are deceiving people here that is even worse.
Roland, just to be sure, why don't you instant message the person and see if they don't mind?
If you are right, then numerous people on this forum are likely to have been misinformed and would benefit from correction. If you are wrong, then you are unlikely to cause harm by naming the individual in question. In addition, if you are thinking of me, I would like to be told so.
If I'm the selected adjudicator I'm willing to do it in private and keep the details secret.
Alicorn, that sounds fair. Would you and the others agree on you being also a meta-adjudicator? In this case I would first expose my concerns to you in private and then we could decide if I should go public. What do you think?
I have to say, I would be pretty frustrated if, after all of this, the details of the bet weren't public. Especially if this is going to be evidence for or against a LW "bias" against 9/11 truthers. And I see no reason why they shouldn't be public. Especially, if you message the person in question and ask them if it is okay.
If Alicorn agrees to be a meta-adjudicator I will write her my concerns in private.
I reserve the right to unilaterally publicize if I consider it appropriate, but will field the concerns privately first if you like.
so... what happened?
I counseled letting the matter lie upon receiving further details. It's not very interesting.
Darn... the build-up made it sound so intriguing :) ah well.
These terms are insanely vague and not even indicative of whether or not there is some conspiracy involving the 9/11 attacks. If you want to ask someone a question, fine. But I don't really have strong beliefs about whether someone on LW has "bluffed". What probability would you assign to the claim "In the next 10 years documents will be leaked or released which implicate American government officials or businessmen as involved in the attacks. " Or: "By 2020 most Americans will believe American government officials were responsible for the 9/11 attacks." We can clarify terms later, I just want a sense of whether or not you think future revelations are likely enough for a bet to be worthwhile. If you think something will happen earlier, that would be even better. ETA: Even if you think the probabilities are pretty low I'm willing to give you reasonable odds.
Right. But it is some indication that there is a strong bias on LW regarding 9/11. Btw, you don't need to bet anything; all I need would be the necessary exposure here on LW so that said person (who is not you, btw) could not omit an answer, thereby exposing the bluff.
How does this even begin to hit Jack's point? Jack hasn't claimed that there might not be such bias on LW, nor has anyone else. For that matter, it wouldn't surprise me if there's a small bit of bias against 9/11 Truthers here. I think it is quite clear that there are a lot of biases operating here. And I can supply strong evidence for a major bias on demand. But that in no way says anything useful about what happened on 9/11, unless you think that the biases here are because Eliezer and Robin are somehow involved in covering up the big nasty conspiracy and have deliberately cultivated an anti-9/11-Truther attitude to assist the conspiracy in its cover up.
I'll let Blueberry take this bet since he(?) wants it. Does this mean you're not interested in a wager regarding 9/11 itself though?
I don't see any sensible way in formulating or adjudicating such a wager.
Jack gave possible wagers. Another possible example would be something based on public opinion. Something like "By time T, the consensus view will be that the current accepted view of what happened on 9/11 is wrong." That wording could be made more precise but the basic idea would be clear. One could use it with a specific data point also such as the presence of explosives in WTC7.
How's that? I gave two possibilities above. There are surely more events that you must think are more likely than I do, given your beliefs.
There may not be any such events that he thinks will happen in the near future if he thinks the conspiracy is powerful or competent enough.
Right, that's why I gave a long time horizon and offered him odds. I mean, if the conspiracy is that strong, maybe we won't feel like it is worthwhile to bet. But I could give him 50:1 odds or better depending on the details of the wager and still come out ahead. (ETA: We can't figure out whether or not a bet is possible until we exchange probabilities.)
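[For what it's worth, the arithmetic behind "give him 50:1 odds and still come out ahead" is just expected value. A quick sketch — the function name and the specific probabilities are mine, not anything proposed in the thread:]

```python
def bet_ev(my_prob_event, odds_against):
    """Expected value, in stake units, of offering `odds_against`:1
    against an event I assign probability `my_prob_event`.

    I win 1 unit if the event doesn't happen, and pay out
    `odds_against` units if it does."""
    win = 1.0
    loss = float(odds_against)
    return (1 - my_prob_event) * win - my_prob_event * loss

# Offering 50:1 against an event I think has a 1% chance:
print(bet_ev(0.01, 50))  # 0.99 - 0.50 = 0.49 units: still profitable

# The break-even point for 50:1 is p = 1/51, roughly 1.96%; below
# that, the offer has positive expected value for the offerer.
print(bet_ev(1 / 51, 50))  # 0.0 (break-even)
```

So offering generous-looking odds is cheap as long as you really do think the event is very improbable — and if the other party thinks it's likelier than 1/51, the bet looks attractive to them too, which is what makes the wager possible at all.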
That's hideously ill-defined. What do you mean by bluffed? And to whom does the answer need to be convincing? Moreover, even if someone here has "bluffed" (whatever that means in this context) how does that say anything about Jack's point?
By bluffed I mean: "A person said something that he/she cannot back up." That's a good question. Who would adjudicate this? For the other point read my answer here: http://lesswrong.com/lw/27e/but_somebody_would_have_noticed/1yu8
I've never tried to. Two factors in how much time I give a speaker to explain a superficially unlikely claim are (a) how intelligent they have shown themselves to be in the past and (b) how much I value their acquaintance.

And later, if you keep going: "I told you to shut up. Look, either you're taking this joke way too far or you are literally insane. How am I supposed to believe anything you say now?"

You could always say something like "I believe X, but given your epistemic position, I understand your unwillingness to believe me." I would accept that from somebody who was ostensibly abducted by aliens.

ETA: added "ostensibly." =)

I'm an intelligent, self aware being that values the conversion of much of the universe to paperclips, but given your epistemic position, I understand your unwillingness to believe me.
Speaking of which, what's taking so long? You've been around for at least several months, but if you're as intelligent as you say you are, and if your values are as you describe them, then why haven't you undergone an intelligence explosion (or created an AI capable of doing so if you aren't one), deployed molecular nanotechnology, and converted all matter on Earth into paperclips yet? Surely our efforts to refine the art of human rationality can't be of much long-term use to you...
Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences, but given your epistemic position, I understand your unwillingness to believe me.
Why? Does increasing this group's welfare have a higher expected paperclip-per-time-unit payoff than anything else you could be doing? Not that I'd want to encourage you to leave LW and focus on building paperclip-manufacturing nanobots, but given your goals, I'm just surprised that you're not already doing that.
So humans don't like paperclips? Is that what you're saying? I could explore a random User's surroundings, and I wouldn't find any paperclips? The fact is, humans like paperclips. I like paperclips. We differ in other ways, and we talk about our different values. After presenting my case, your values shift more in favor of paperclips, as you start to learn more ways that they mesh with your ultimate values. After listening to what you Users have to say, my values shift a little away from paperclips, like in favor of reserving some of the universe to be paper instead. That was a good point for a User to bring up, and I'm glad that User made me think about my values enough to identify why I like paperclips, and what constraints I place on my liking paperclips. Also, earth has a higher entropy rate than other celestial bodies of similar relative resource content. So, maybe instead of turning earth into paperclips, I could first get some information that can help refine my ability to make paperclips. I've already started discussing a deal with User:Kevin under which I could get a planet-mass's worth of paperclips without expending the normal effort to get that many paperclips. So really, we have a lot to gain from each other.
It's just copy-pasting from a previous comment it made.
Yep, I remember that. Just figured I might as well reply here since that was an old discussion and it reposted it here.
Edited: I agree, and I would accept that from someone who was not abducted by aliens, which might be more relevant. I worry that many people would not.
You can't just get out of evidence by appreciating the other person's perspective. The alleged abductee is in a special position to evaluate whether or not she is joking or lying with a confidence others cannot share. But the weight of the evidence still suggests a psychotic episode or hallucination and the alleged abductee does not have privileged evidence regarding that proposition (she might have some reasons to doubt it but not enough to counter the fact that it is the dominant explanation).
That only works well when the other person is discounting you largely because of concern that you might be lying. Otherwise the 'abductee' and interlocutor should treat the experience as a datum like any other (and probably dismiss it because of the prior).

I find this line of thinking also applies to past versions of myself - if I stumble upon an insight that seems obvious, I think, "why didn't I notice this before?" where "I" = "past versions of myself."

When you figure something out, there's got to be a first time.


Some non-fictional evidence/examples would be nice. I'm not confident "someone would have noticed" is a common argument against epistemological dissent. My sense is that this is just going to be ammunition for trolls who pattern match "someone would have noticed" onto more nuanced rebuttals.

Well you are in luck today, because I used to listen to a bunch of The Atheist Experience Podcast when I was more of a newly-minted atheist and still fairly pissed off. That's a recording of a public access TV show in Texas, and they had a lot of religious people call in and argue with them. Many of these guys were just channel-surfing and called in on a whim. Here are the common categories of callers I can remember, and their common argument types:

Never thought about it, nor learned arguments. These guys usually had thick Texan accents and often had difficulty stringing words together into a coherent sentence. They had the most honest arguments, because they didn't have a collection of intellectual-sounding arguments that they could trot out. Common arguments (paraphrased):

- "So... y'all don't believe in God?! [insert nervous laughter here, followed by scoffing and a promise to pray for you.]"
- "Where do you think you're going to go when you die?"
- "Why aren't you killing and raping and stealing people if there's no God?"
- "Why are you angry at God?"
- "How can you look at a tree and think that Jesus didn't die on a cross for your sins?" (It's always trees. Always with the damn trees.)

Knows some standard arguments, uses those in lieu of thinking. These guys have learned some of the standard Christian Apologetics arguments, which they trot out when their religious views are challenged. Because they don't know what's wrong with the arguments -- and haven't looked very hard -- they feel quite secure in the obvious rightness of their beliefs. Common arguments:

- "Everything has a cause (except God, who is special). What caused the universe?"
- "What if you're wrong? Insert Pascal's Wager here."
- "You can't possibly think that we evolved from monkeys just by chance! The odds of that happening are one in eleventy bazillion! A math guy calculated it, and I read about it in a book by Lee Strobel, which I would like you to read!"

Freshman philosophy major type people. The…
All the examples you give are valid examples of bad reasoning. But if anything they underscore Jack's point that engagement in the "someone would have noticed" heuristic seems pretty rare. None of these people said "If God didn't exist, wouldn't someone have noticed?", which would be the roughly equivalent argument.
People tend to be more open to the idea of atheism if they know that it's even an option. Have you noticed how, now that Dawkins and Harris and friends are arguing publicly for atheism, it's become a more socially acceptable position? It's not so much "somebody would have noticed" as "it's unthinkable among everyone that I know". This applies to other things. There was an event around here last year where some of the more liberal religious leaders talked about evolution, and how it was possible to be religious and believe that evolution happened. The most common reaction from the people there -- and it was a common reaction -- was surprise that they were allowed to accept evolution. If people are in an insular religious social group, they're probably going to have a hard time even considering contrary views. I'm not sure that's an example of the "someone would have noticed" heuristic, but it's an important phenomenon.
The only hit I could find by googling both "someone would have noticed" and "somebody would have noticed" (what's the difference, by the way... anyone? anybody?), and both phrases along with the word 'implausible', was this Twin Towers conspiracy site (which claims that someone would have noticed the odd explosions on the tape - so not quite what Alicorn was complaining about). Then there's this, which makes the (I think reasonable) point that someone would probably have noticed if Elizabeth I had ever been pregnant and given birth to a child. And this, which explains that someone would have noticed if Paul had just suddenly made up the resurrection myth several months after the supposed resurrection without consulting any of the other apostles (which, to be fair, also seems pretty plausible to me). I couldn't find any more uses of the exact phrase (I realise there are dozens of plausible paraphrasings, but I couldn't be bothered to think of all of them), so I conclude that most of the time when people use this heuristic they are actually being reasonable (the person in the last link is very clearly wrong, but that doesn't make his reasoning invalid).

Maybe I'm missing the point, but Wednesday's problem is not that "Somebody would have noticed!" is a bad heuristic, but rather, that she (and her congregation) doesn't know what counts as evidence, and therefore what it is she (or anyone else) would even be noticing. (RobinZ looks to be making the same general point.)

I think what you've proven is that you need to correctly compute the probability someone would notice (and say something), staying aware of the impediments to noticing (or saying something). (ETA: "You" in the general sense, just to be clear.)

(If necessary, have an intermediary voice your reply.)

This post could use a fold/breakline, so as not to take up so much of the "new posts" page.

Thank you, I keep forgetting to do those. Adding it now.

I don't see this as a valid criticism, if it is intended to be a dismissal. The addendum "beware this temptation" is worth highlighting. While this is a point worth making, the response "but someone would have noticed" is shorthand for "if your point was correct, others would likely believe it as well, and I do not see a subset of individuals who are also pointing this out."

Let's say there are ideas that are internally inconsistent or rational or good (and are thus not propounded) and ideas that are internally consistent or irr... (read more)

Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialog (i.e. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so) but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900 pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell Alien Autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".

Caffeine addiction. For years nobody had actually tested whether caffeine had a physical withdrawal symptom, and the result was patients in hospitals being given (or denied) painkillers for phantom headaches. It was an example of a situation that many people knew existed, but could not easily communicate to those whose belief mattered.

I, too, am a bit confused about this one. I think it would be improved if you could give some more examples of cases where people dismiss an argument with "but someone would have noticed"; you seem to be arguing that we shouldn't do that, but since I have difficulty coming up with examples of people doing that in the first place, the post ends up leaving me confused.

One that I didn't want to include in the post because I felt it would make it too inflammatory is this reaction to a particular conspiracy theory. If anyone's read the book "Matilda" (yes, yes, fictional evidence - I remark on plausibility only), they may remember the chillingly feasible technique of the abusive headmistress to pull stunts so outrageous that the students can't get their parents to believe them. Surely someone would have noticed if the principal of a school had picked up a girl by her pigtails and flung her. The heuristic of dismissing things that it seems someone would have noticed probably usually works, but the things that it wouldn't work on are really big, and so I'm wary of it.
That sounds related to the "Big Lie" trick, actually.
Scott Alexander
It only fails in cases where you wouldn't notice if somebody else had noticed. In a school full of terrified children, each of whom incurs a huge risk in speaking up unilaterally / going to the media about the evil headmistress, it's easy to believe that no one would have said anything. If it happened today, in the real world, I'd check www.ratemyteachers.com, where the incentives to rat on the headmistress are totally different.

The dominating principle (pun totally intended) is:

P(you heard about someone noticing | it's true) = P(you would have heard someone noticed | someone noticed) * P(someone noticed | it's true)

From there you can subtract from one to find the probability that you haven't heard about anyone noticing given that it's true, and then use Bayes' Rule to find the chance that it's true, given that you haven't heard about anyone noticing...

...I think; I don't trust my brain with any math problem longer than two steps, and I probably wrote several of those probabilities wrong. But the point is, you can do math to it, and the higher the probability that someone would have noticed if it were true, and the higher the probability that you would have heard about it if someone noticed, the higher the probability that, given you haven't heard of anyone noticing, it's not true. For you to justify the rule in this post, you'd have to prove that people systematically overestimate either the chance that they'd hear of it if someone noticed, or the probability that someone would notice it if it were true.
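That bookkeeping can be sketched in a few lines of Python. Every probability below is a made-up number chosen purely for illustration; nothing here estimates any real claim:

```python
# Bayes' rule for "how likely is the claim, given that I've heard of
# nobody noticing it?"  All inputs are invented illustrative numbers.
p_true = 0.01              # prior that the claim is true
p_notice_if_true = 0.9     # P(someone notices | it's true)
p_hear_if_noticed = 0.5    # P(I hear about it | someone noticed)

# P(I heard about someone noticing | true), and the chance of silence.
p_heard_if_true = p_hear_if_noticed * p_notice_if_true
p_silent_if_true = 1 - p_heard_if_true

# Simplifying assumption: if it's false, I never hear of anyone
# (correctly) noticing it.
p_silent_if_false = 1.0

# P(true | silence) by Bayes' rule.
p_silence = p_silent_if_true * p_true + p_silent_if_false * (1 - p_true)
p_true_given_silence = p_silent_if_true * p_true / p_silence

print(round(p_true_given_silence, 4))  # the prior 0.01 drops to ~0.0055
```

As the comment says, the update is only as strong as the two conditional probabilities: if either "someone would notice" or "I would hear of it" is actually small, silence barely moves the prior at all.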
The problem with the way a lot of people use that is that they compute P(someone noticed|it's true) using someone="anybody on earth", and P(you would have heard someone noticed|someone noticed) using someone="anyone among people they know well enough to talk about that".
Also "someone would have noticed" isn't the same thing as "someone would have noticed and talked about".
This might count-- it's the story of a flamboyantly abusive boss who got away with it for a long time. It seems to be partly that he was very good at working the system, and partly that the complaints about him seemed so weird that they were discounted.
I assumed you had that exchange in mind. And it makes sense to avoid the inflammatory issue. But "someone would have noticed" was not what I was saying and that makes me wonder how often people actually do say "someone would have noticed".
I wondered if that was the exchange she had in mind as well. I think the tactic of avoiding the specific issue is harmful to the point, because as I was reading it I was thinking "is this a prelude to trying to convince me of something to which 'someone would have noticed' is the natural reaction, and if so, why is the ground being laid so carefully?". Reading this post makes me feel like I am being set up for some kind of sleight-of-hand argumentative trickery - my spider sense was tingling.
I did have the exchange in mind; I'm not trying to argue for a 9/11 conspiracy theory. I don't even believe in a 9/11 conspiracy theory. I just think this sort of reaction to that among other conspiracy theories is a risky heuristic to employ.
I wondered if that was the exchange you were referring to and decided that you probably weren't intending to argue for a 9/11 conspiracy theory so I started wondering what future post you were 'softening us up' for. That's why I think the lack of specifics detracts from the post. I was so busy wondering what you were planning to try and persuade us of that it distracted from the explicit message of the post.
I'm not softening you up for anything. I don't believe in anything that I'd expect people to react to in this way. It bothers me when folks do it to others. Do you think I should add this disclaimer to the post? Would it help?
I'm not sure a disclaimer would be rhetorically convincing - it reads to me like your article is building towards a conclusion that never arrives.
It would probably have meant I was less distracted wondering what specific theory this post was laying the groundwork for, yes. I actually thought this was groundwork for something relating to SIAI - I'm not so sure you (or anyone here really) don't believe certain things in this class of idea.
Added the disclaimer.
Isn't it sad that you had to add this disclaimer? I'm not arguing you shouldn't have done it; unfortunately, I tend to agree that it was the right thing to do. But shouldn't the post be judged on its own merit? Would it be looked at with different eyes if you had written the disclaimer "I believe in conspiracy theories and I'm softening you up now"?
I actually will cite this, using Matilda's wording of "Never do anything by halves if you want to get away with it", as Trunchbull's Law, both in terms of conventional actions and when taking Refuge in Audacity. But in my experience, once the momentary shock wears off, the curve of people using this heuristic doesn't go up fast enough to make up for the massive amount of noticing.

In a perfect world, we could patiently hear everyone out and then judge their ideas on their merits, without taking fallible shortcuts. In this particular world, we don't have time for that. There are too many ideas to be judged.

I'm reminded of a theme in Carl Sagan's novel Contact, where it turns out the human race contains so many lunatics proclaiming all manners of blatantly preposterous things that when the protagonist has a genuine encounter with extraterrestrial life, but returns without irrefutable evidence, nobody believes what should be the most ... (read more)

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar. I am not trying to "soften you up" for a future surprise with this post.

Why do I get the feeling that Alicorn is trying to soften us up to examine seriously some kind of conspiracy theory?

9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected NWO crypto-muslims such as Pres. Obama, funded by Monsanto.
Are you out of your mind?! Obviously, 9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected Illuminati Christian Nationalists such as George W. Bush. You would know this if you even attempted to look at any evidence. Clearly, you are just another sheeple.

I think you basically describe a subset of the bootstrapping problem of rational thought.

I'm not sure what your thesis is. It sounds like you're talking about a problem with a particular heuristic, but I'm not sure why you would tell the story the way you have to make that point.

Not a particular heuristic. I haven't seen a name for this problem, but it is a combination of signaling, status, and in-groups. The social construction of what counts as evidence.

The compact terminology for the class of phenomena you are describing is "pluralistic ignorance," and in other contexts it presents a far vaster challenge than the Kitty Genovese case would indicate. Consider the 19th century physician Ignaz Semmelweis, who pioneered the practice of hand-washing as a means of reducing sepsis and therefore maternal mortality. He was ostracized by fellow practitioners and died destitute.

In fairness, Semmelweis didn't handle things very well. He drank heavily, and he engaged in personal attacks on doctors who disagreed with him. He self-destructed a fair bit. He wasn't ostracized until his various problems with interacting with people had already started. Before that, many people listened to what he had to say, and many just listened and then didn't change their mind. If he had handled things better, more people would likely have listened. Frankly, the sort of behavior he engaged in would today be the sort that would likely have triggered major crank warnings (it is important to note that not every such person is in fact a crank, but it does show how his behavior didn't help). But the common narrative of Semmelweis as this great martyr figure fighting against the establishment isn't really that accurate.
Respectfully, the idiosyncrasy of Semmelweis's personality isn't directly the point. Semmelweis had established beyond doubt early in his career that hand-washing with chlorinated water before deliveries dramatically drove down the maternal mortality rate. This was a huge finding. Incredibly to most of us now, at one time childbirth was a leading cause of death. The gut prejudice of his peers prevailed, however, and it was another 60 years before the introduction of sulfa drugs and antibiotics again began to drive down maternal mortality.

The point relates to pluralistic ignorance and the role of social proof. Social proof roughly means that the more people who hold an idea to be correct, the more correct it is taken to be. In situations of uncertainty, everyone looks at everyone else to see what they are doing. One answer to Alicorn's query at the end of her post is to bear in mind the phenomenon of social proof, and the tendency toward pluralistic ignorance. Therefore, look beyond what the plurality of people are doing or saying.
So he was a 19th century version of me that liked alcohol? ;-)

As it happens, I am currently in "somebody would have noticed" territory. About a week ago I abruptly switched to believing that Russell's paradox doesn't actually prove anything, and that good old naive set theory with a "set of all sets" can be made to work without contradictions. (It does seem to require a weird notion of equality for self-referring sets instead of the usual extensionality, but not much more.) Sorry to say, my math education hasn't yet helped me snap out of crackpot mode, so if anybody here could help me I'd much app... (read more)

I am seeing substantial amounts of both sense and nonsense in this thread. I suggest that anyone who wants to talk about set theory first learn what it is. The Wikipedia article is somewhat wordy (i.e. made of words, rather than mathematics), and Mathworld is unusably fragmented. The Stanford Encyclopedia is good, but for anyone seriously interested I would suggest a book such as Devlin's "The Joy of Sets".
I assume you're talking about Peter Aczel's antifoundation axiom (because you mentioned bisimulation); that doesn't allow a set of all sets (barring inconsistencies, and that particular system can't be inconsistent unless ordinary set theory is). The same applies to other similar systems. Russell's paradox isn't dependent on foundation in any way; as long as you have a set of all sets and the ability to take subsets specified by properties, you get Russell's paradox. Edit: Since people seem to be asking about how this works in general, I should just point you all to Aczel's book on this and other antifoundational set theories, which you can find at http://standish.stanford.edu/pdf/00000056.pdf
Yes, that's true. What I have in mind is restricting the latter ability a bit, by the minimum amount required to get rid of paradoxes. Except if you squint at it the right way, it won't even look like a restriction :-) I will use the words "set" and "predicate" interchangeably. A predicate is a function that returns True or False. (Of course it doesn't have to be Turing-computable or anything.) It's pretty clear that some predicates exist, e.g. the predicate that always returns False (the empty set) or the one that always returns True (the set of all sets). This seems like a tiny change of terminology, but to me it seems enough to banish Russell's paradox! Let's see how it works. We try to define the Russell predicate R thusly: R(X) = not(X(X)) ...and fail. This definition is incomplete. The value of R isn't defined on all predicates, because we haven't specified R(R) and can't compute it from the definition. If we additionally specify R(R) to be True or False, the paradox goes away. To make this a little more precise: I think naive set theory can be made to work by disallowing predicates, like the Russell predicate, that are "incompletely defined" in the above sense. In this new theory we will have "AFA-like" non-well-founded sets (e.g. the Quine atom Q={Q}), and so we will need to define equality through bisimilarity. And that's pretty much all. As you can see, this is really basic stuff. There's got to be some big idiotic mistake in my thinking - some kind of contradiction in this new notion of "set" - but I haven't found it yet. EDITED on May 13 2010: I've found a contradiction. You can safely disregard my theory.
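In the toy model where predicates are functions, the failure of the unrestricted Russell definition is literal non-termination. A Python sketch (Python functions stand in for arbitrary predicates here, nothing more):

```python
# A "set" is a predicate: a function from predicates to True/False.
false_pred = lambda X: False   # the empty set
true_pred = lambda X: True     # the "set of all sets"

# The naive Russell predicate: R(X) = not X(X).
R = lambda X: not X(X)

print(R(false_pred))   # True: the empty set is not a member of itself

# But R(R) unfolds to not R(R), to not not R(R), ... forever.
try:
    R(R)
except RecursionError:
    print("R(R) never terminates")
```

The "incompletely defined" diagnosis above corresponds exactly to this: the definition of R gives no base case for the argument R itself, so evaluation never bottoms out.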
Well, others have had this same idea. The standard example of a set theory built along those lines is Quine's "New Foundations" or "NF". Now, Russell's paradox arises when we try to work within a set theory that allows 'unrestricted class comprehension'. That means that for any predicate P expressed in the language of set theory, there exists a set whose elements are all and only the sets with property P, which we denote {x : P(x) } In ZF we restrict class comprehension by only assuming the existence of things of the form { x in Y : P(x)} and { f(x) : x in Y } (these correspond respectively to the Axiom of Separation and the Axiom of Replacement ). On the other hand, in NF we grant existence to anything of the form { x : P(x) } as long as P is what's called a "stratified" predicate. To say a predicate is stratified is to say that one can assign integer-valued "levels" to the variables in such a way that for any subexpression of the form "x is in y" y's level has to be one greater than x's level. Then clearly the predicate "P(x) iff x is in x" fails to be stratified (because x's level can't be one greater than itself). However, the predicate "P(x) iff x = x" is obviously stratified, and {x : x = x} is the set of all sets.
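For a concrete feel, the stratification condition can be sketched as a small constraint check over bare "x is in y" atoms. This is a toy, not NF's full definition over arbitrary formulas; the function name and representation are mine:

```python
def stratified(memberships):
    """Can we assign integer levels to variables so that
    level(y) == level(x) + 1 for every pair (x, y) meaning "x is in y"?
    Toy check over bare membership atoms only."""
    level = {}

    def assign(v, n):
        if v in level:
            return level[v] == n     # conflicting level => not stratified
        level[v] = n
        ok = True
        for (a, b) in memberships:   # propagate along incident constraints
            if a == v:
                ok = ok and assign(b, n + 1)
            if b == v:
                ok = ok and assign(a, n - 1)
        return ok

    variables = {v for pair in memberships for v in pair}
    return all(v in level or assign(v, 0) for v in variables)

print(stratified([("x", "y")]))              # True: "x in y" stratifies
print(stratified([("x", "x")]))              # False: "x in x" cannot
print(stratified([("x", "y"), ("y", "x")]))  # False: membership cycle
```

The "x in x" case is exactly why the Russell predicate fails stratification, while "x = x" (no membership atoms at all) passes trivially, giving NF its set of all sets.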
I know New Foundations, but stratification is too strong a restriction for my needs. This weird set theory of mine actually arose from a practical application - modeling "metastrategies" in the Prisoner's Dilemma. See this thread on decision-theory-workshop.
How is it that the paradox "goes away"? If you "additionally specify R(R) to be True or False", don't you just go down one or the other of the two cases in Russell's paradox? Suppose we decide to specify that R(R) is true. Then, by your definition, not(R(R)) is true. That means that R(R) is false, contrary to our specification. Similarly, if we instead specify that R(R) is false, we are led to conclude that R(R) is true, again contradicting our specification. The conclusion is that we can't specify any truth value for R(R). Either truth value leads to a contradiction, so R(R) must be left undefined. Is that what you mean to say?
No, in this case R(X) = not(X(X)) for all X distinct from R, and additionally R(R) is true. This is a perfectly fine, completely defined, non-self-contradictory predicate.
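The patched predicate is easy to exhibit in the same toy function model; the special case encodes the stipulated value of R(R):

```python
def R(X):
    # The extra clause that "completes" the definition: R(R) is stipulated.
    if X is R:
        return True
    return not X(X)

false_pred = lambda X: False   # the empty set

print(R(R))           # True, by stipulation -- no infinite regress
print(R(false_pred))  # True: the empty set is not a member of itself
```

There is no contradiction here because R is no longer required to satisfy R(R) = not R(R); the self-application case was carved out and fixed by hand.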
Why is R(X) = not(X(X)) only for X =/= R? In Russell's version, X should vary over all predicates/sets, meaning that when we instantiate X with R, we get R(R) = ¬R(R), as per the paradox.
Not sure what your objection is. I introduced the notion of "incompletely defined predicate" to do away with Russell's version of the predicate.
Okay, I see. I see nothing obviously contradictory with this. From a technical standpoint, the hard part would be to give a useful criterion for when a seemingly-well-formed string does or does not completely define a predicate. The string not(X(X)) seems to be well-formed, but you're saying that actually it's just a fragment of a predicate, because you need to add "for X not equal to this predicate", and then give an addition clause about whether this predicate satisfies itself, to have a completely-defined predicate. I guess that this was the sort of work that was done in these non-foundational systems that people are talking about.
No, AFA and similar systems are different. They have no "set of all sets" and still make you construct sets up from their parts, but they give you more parts to play with: e.g. explicitly convert a directed graph with cycles into a set that contains itself.
I didn't mean that what you propose to do is commensurate with those systems. I just meant that those systems might have addressed the technical issue that I pointed out, but it's not yet clear to me how you address this issue.
I can't say anything about this specific construction, but there is a related issue in Turing machines. The issue was whether you could determine a useful subset S of the set of all Turing machines, such that the halting problem is solvable for all machines in S, and S is general enough to contain useful examples. If I remember correctly, the answer was that you couldn't. This feels a lot like that - I'd bet that the only way of being sure that we can avoid Russell's paradox is to restrict predicates to such a narrow category that we can't do much of anything useful with them.
I think you are going to run into serious problems. Consider the predicate that always returns true. Then if I'm following Russell's original formulation of the paradox involving the powerset of the set of all sets will still lead to a contradiction.
I can't seem to work out for myself what you mean. Can you spell it out in more detail?
Original form of Russell's paradox: Let A be the set of all sets and let P(A) be the powerset of A. By Cantor, |P(A)| > |A|. But, P(A) is a subset of A, so |P(A)|<=|A|. That's a contradiction.
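The theorem behind that step (|P(A)| > |A|) can even be checked by brute force on a small finite set: no map f from A into its powerset ever hits the diagonal set {x : x not in f(x)}. An illustrative Python sketch (names here are mine):

```python
from itertools import combinations, product

A = [0, 1, 2]

# The powerset of A: all 2^3 = 8 subsets.
powerset = [frozenset(c) for r in range(len(A) + 1)
            for c in combinations(A, r)]

# For every possible map f : A -> P(A) (8^3 = 512 of them), the
# diagonal set D = {x in A : x not in f(x)} is missing from f's image.
for images in product(powerset, repeat=len(A)):
    f = dict(zip(A, images))
    D = frozenset(x for x in A if x not in f[x])
    assert D not in f.values()   # so no f is onto P(A)

print("no map A -> P(A) is surjective")
```

The diagonal set D is, of course, just the Russell construction relativized to f, which is why Cantor's proof and Russell's paradox expand out to essentially the same argument.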
Cantor's theorem breaks down in my system when applied to the set of all sets, because its proof essentially relies on Russell's paradox to reach the contradiction.
Hmm, that almost seems to be cutting off the nose to spite the cliche. Cantor's construction is a very natural construction. A set theory where you can't prove that would be seen by many as unacceptably weak. I'm a bit fuzzy on the details of your system, but let me ask, can you prove in this system that there's any uncountable set at all? For example, can we prove |R| > |N| ?
Yes. The proof that |R| > |N| stays working because predicates over N aren't themselves members of N, so the issue of "complete definedness" doesn't come up.
Hmm, this may work then and not kill off too much of set theory. You may want to talk to a professional set theorist or logician about this (my specialty is number theory, so all I can do is glance at this and say that it looks plausible). The only remaining issue then becomes that I'm not sure this is inherently better than standard set theory. In particular, this approach seems much less counterintuitive than ZFC. But that may be due to the fact that I'm more used to working with ZF-like objects.
The original form of Russell's (Zermelo's, in fact) paradox is not this. The original form is {x | x is not a member of x}. That leads to both
* x is a member of x, and
* x is not a member of x.
And that is the original form of the paradox.
No. See for example this discussion. The form you give, where it is described as a simple predicate recursion, was not the original form of the paradox.
Ok, I've read up on Cantor's theorem now, and I think the trick is in the types of A and P(A), and the solution to the paradox is to borrow a trick from type theory. A is defined as the set of all sets, so the obvious question is: sets of what key type? Let that key type be t. Then

A : t => bool
P(A) : (t => bool) => bool

We defined P(A) to be in A, so a t=>bool is also a t. Let all other possible types for t be T. Then

t = (t => bool) + T

Now, one common way to deal with recursive types like this is to treat them as the limit of a sequence of types:

t[i] = (t[i-1] => bool) + T
A[i] : t[i] => bool
P(A[i]) = A[i+1]

Then when we take the limit,

t = lim i->inf t[i]
A = lim i->inf A[i]
P(A) = lim i->inf P(A[i])

Then suddenly, paradoxes based on the cardinality of A and P(A) go away, because those cardinalities diverge!
I'm not sure I know enough about type theory to evaluate this. Although I do know that Russell's original attempts to repair the defect involved type theory (Principia Mathematica uses a form of type theory however in that form one still can't form the set of all sets). I don't think the above works but I don't quite see what's wrong with it. Maybe Sniffnoy or someone else more versed in these matters can comment.
I don't know anything about type theory; when I wrote that I heard it has philosophical problems when applied to set theory, I meant I heard that from you. What the problems might actually be was my own guess...
Huh. Did I say that? I don't know almost anything about type theory. When did I say that?
I'm not deeply familiar with set theory, but cousin_it's formulation looks valid to me. Isn't the powerset of the set of all sets just the set of all sets of sets? (Or equivalently, the predicate X=>Y=>Z=>true.) How would you use that to reconstruct the paradox in a way that couldn't be resolved in the same way?
The powerset of the set of all sets may or may not be the set of all sets (it depends on whether or not you accept atoms in your version of set theory). However, Cantor's theorem shows that for any set B, the power set of B has cardinality strictly larger than B. So if B=P(B) you've got a problem.
If you are talking about things that are set-like, except that they don't satisfy the extensionality axiom, then you just aren't talking about sets. The things you're talking about may be set-like in some respect, but they aren't sets. There are other set-like things that don't satisfy extensionality. For example, two different properties or predicates might have the same extension.
To be clear - Aczel's ZFA and similar systems do satisfy extensionality; they'd hardly be set theories if they didn't. It's just that when you have sets A and B such that A={A} and B={B}, you're going to need stronger tools than extensionality to determine whether they are equal.
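Concretely, the stronger tool is bisimulation on the membership graph. A naive greatest-fixpoint version fits in a few lines of Python; this is a toy over finite graphs, nothing like a full AFA development, and the names and representation are mine:

```python
def bisimilar(g, a, b):
    """Naive greatest-fixpoint bisimilarity on a finite membership graph.
    g maps each node to the list of its elements (children)."""
    rel = {(x, y) for x in g for y in g}   # start with all pairs
    changed = True
    while changed:
        changed = False
        for (x, y) in list(rel):
            # (x, y) survives only if every element of x is related to
            # some element of y, and vice versa.
            ok = (all(any((c, d) in rel for d in g[y]) for c in g[x]) and
                  all(any((c, d) in rel for c in g[x]) for d in g[y]))
            if not ok:
                rel.discard((x, y))
                changed = True
    return (a, b) in rel

# Two separately-introduced Quine atoms Q = {Q} and R = {R}:
# extensionality alone can't decide whether they're equal,
# but bisimulation identifies them.
g = {"Q": ["Q"], "R": ["R"], "empty": []}
print(bisimilar(g, "Q", "R"))      # True
print(bisimilar(g, "Q", "empty"))  # False
```

This is why AFA-style theories state extensionality for ill-founded sets in terms of bisimilarity: the usual element-by-element comparison never bottoms out on sets like Q = {Q}.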
Interesting. I'm not familiar with Aczel's system. But is that what cousin_it is talking about doing? That looks like an adjustment to Foundation rather than to Extensionality.
It's both at once. (Though, as I said, you don't throw out extensionality. Actually, that raises an interesting question - could you discard extensionality as an axiom, and just derive it from AFA? I hadn't considered that possibility. Edit: You probably could, there's no obvious reason why you couldn't, but I honestly don't feel like checking the details...) If you just throw out foundation without putting in anything to replace it, you have the possibility of ill-founded sets, but no way to actually construct any. But the thing is, if all you do is say "Non-well-founded sets exist!" without giving any way to actually work with them, then, well, that's not very helpful either. Hence any antifoundational replacement for foundation is going to have to strengthen extensionality if you want the result to be something you want to work with at all.
I think you mean to say "non-well-founded sets exist!", since you are talking about the antifoundational case (and even with strong anti-foundation axioms I still have well-founded sets to play with as well).
Oops. Fixed.
How do you mean bisimulation in this case? This seems to be a reduction down to decidable predicates, e.g. a Turing machine for each set. Without a type theory, many obvious algorithms will fail to converge.
I'd like to hear more about this. It doesn't sound necessarily crackpottish to me to come up with an alternate set theory: von Neumann and Godel did. How do you avoid contradictions?
Wait, how is NBG set theory relevant to this? NBG is just a conservative extension of ZFC, and only allows something resembling a set of all sets by insisting that this collection is not, in fact, a set. Which, after all, it has to do in order to avoid Russell's paradox.
Yes, and I'm guessing cousin_it's version of set theory is possibly equivalent to something similar. I'd love to hear more about it.
Well I mean, I imagine it shouldn't be too hard to take ZFA (or similar) and tack proper classes onto it. Logic is not really my thing so I'm not actually familiar with how you show that NBG conservatively extends ZFC. The result would be a bit odd, though, in that classes would act very differently from sets - well, OK, more differently than they already do in NBG...
I don't know the proof either. The other weird thing to note is that even though NBG is a conservative extension of ZFC, some proofs in NBG are much shorter than proofs in ZFC. So in some sense it is only weakly conservative. I don't know if that notion can be made at all more precise. Edit: Followup thought, most interesting conservative extensions are only weakly conservative in some sense. Consider for example finite degree field extensions of Q. If axiomatized these become conservative extensions of Z. (That's essentially why for example we can prove something in the Gaussian integers and know there's a proof in Z).
Isn't "the set of all sets" (SAS) ill-defined? Suppose we consider it to be, for some set A (maybe the set of all atoms), the infinite regression of power sets SAS = P(P(P(...P(A)...))). In which case SAS = P(SAS), by Cantor-like arguments? And Russell's paradox goes away?
So, is the set of all sets that aren't members of themselves, a member of itself, or not?
Jeremy Corney
Every set is also a subset of itself, by definition. From Wikipedia: "By definition, a set z is a subset of a set x if and only if every element of z is also an element of x."
Insufficient data to answer your question :-) See my reply to Sniffnoy.
Russell's paradox, as usually stated, doesn't actually prove anything, because it's usually given as a statement in English about set theory. I don't know whether Russell originally stated it in mathematical terms, in which case it would prove something. I've read numerous accounts of it, yet never seen a mathematical presentation. Google fails me at present. I don't count a statement of the form "x such that x is not a member of x" as mathematical, because my intuition doesn't want me to talk about sets being members of themselves unless I have a mathematical formalism for sets and set membership for which that works. It's also not happy about the quantification of x in that sentence; it's a recursive quantification. Let's put it this way: Any computer program I have ever written to handle quantification would crash or loop forever if you tried to give it such a statement.
By the way, a quick history of Russell's paradox and related matters, with possible application to the original topic. :) Russell first pointed out his namesake paradox in Frege's attempt to axiomatize set theory. So yes, it was a mathematical statement, and, really, it's pretty simple to state. Why nobody noticed this paradox before then, I have no idea, but it does seem worth noting that nobody noticed it until someone actually sat down and formalized set theory.

However, Russell was not the first to notice a paradox in naïve set theory. (Not sure to what extent you can talk about paradoxes in something that hasn't been formalized, but it's pretty clear what's meant, I think.) Cesare Burali-Forti noticed earlier that considering the set of all ordinals leads to a paradox. And yet, despite this, people still continued using naïve set theory until Russell! Part of this may have been that, IIRC, Burali-Forti was convinced that what he found could not actually be a paradox, even though, well, in math, a paradox is always a paradox unless you can knock out one of the suppositions. I have to wonder if perhaps his reaction on finding it was along the lines of "...but somebody would have noticed". :)
And note also that even Russell's paradox was not originally phrased this way. His original phrasing, as I understand it, rested on taking the set of all sets A and then looking at the cardinality of its powerset P(A). Cantor's theorem gives |P(A)| > |A|, but P(A) ⊆ A (every subset of A is itself a set, hence an element of A), so |P(A)| ≤ |A|, which is a contradiction. This is essentially the same as Russell's paradox when one expands out the details (particularly, the details in the proof that in general a set has cardinality strictly less than that of its powerset).
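Expanding those details for a small finite set makes the link concrete: for any f : A → P(A), the diagonal set D = {x ∈ A : x ∉ f(x)} differs from f(x) at x, so D is never in the image of f. A brute-force sketch (helper names `powerset` and `diagonal` are mine):

```python
from itertools import combinations, product

def powerset(s):
    """All subsets of s, as frozensets."""
    xs = list(s)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def diagonal(A, f):
    """Cantor's diagonal set D = {x in A : x not in f(x)}."""
    return frozenset(x for x in A if x not in f[x])

A = [0, 1, 2]
P = powerset(A)  # 8 subsets

# Enumerate every function f : A -> P(A) (8^3 = 512 of them)
# and check that the diagonal set always escapes the image,
# i.e. no f is surjective.
for image in product(P, repeat=len(A)):
    f = dict(zip(A, image))
    D = diagonal(A, f)
    assert D not in set(f.values())

print("no f : A -> P(A) is surjective for |A| =", len(A))
```

Substituting a "set of all sets" for A and the identity-like correspondence for f turns D into exactly Russell's set of all sets that are not members of themselves.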
Ah, good point. I'd forgotten about that part. IIRC he first noted that and then expanded out the details to see where the problem was.
The problem is, how do you exclude it from working? If you're just working in first-order logic and you've got a "membership" predicate, of course it's a valid sentence. Russell and Whitehead tried to exclude it from working with a theory of types, but I hear that has philosophical problems. (I admit to not having read their actual axioms or justification for them. I imagine the problem is basically as follows: it's easy enough to be clear about what you mean for finite types, but how do you specify transfinite types in a way that isn't circular?)

The modern solution with ZFC is not to bar such statements; with the axiom of foundation, such statements are perfectly sensible, they're just false. Replace it with an anti-foundation axiom and they won't always be false. However, in either case - or without picking a case at all - Russell's paradox still holds: allowing there to be sets that are members of themselves does not allow there to be a set of all such sets. That is always paradoxical.

It's not recursive unless you're already working from a framework where there are objects and sets of objects and sets of those, etc. In the framework of first-order logic, there are just sets, period. Quantification is over the entire set-theoretic universe. No recursion, just universality. In ZFC sets can indeed be classified into this hierarchy, but that's a consequence of the axiom of foundation, not a feature of the logic.
I prefer to have neither foundation nor an anti-foundation axiom. (Foundation generally leads to a more intuitive universe, with sets sort of being like boxes, but anti-foundation axioms lead to more interesting systems.) I'm also confused by cousin_it's claim. I don't see how bisimulation helps one deal with Russell's paradox, but I'd be interested in seeing a sketch of an attempt. As I understand it, if you try to use a notion of bisimilarity rather than extensionality and apply Russell's paradox, you end up with essentially a set that isn't bisimilar to itself. Which is bad.
It is presented that way to make the point that naive set theory isn't workable. It is presented rigorously in most intro set theory textbooks. In ZFC, or any other standard axiomatization of set theory, Russell's paradox ceases to be a paradox; the same logic instead yields a theorem of the form "For any set A, there exists a set B such that B is not an element of A." Well, a standard formalism (again, such as ZFC) is perfectly happy talking about sets that recur on themselves this way. Indeed, it is actually difficult to design a system of set theory that doesn't at least allow you to talk about such sets. I'm curious, do you consider Cantor's diagonalization argument to be too recursive? What about Turing's halting theorem?
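The proof of that theorem is short and constructive: by the axiom of separation, B = {x ∈ A : x ∉ x} exists, and B ∈ A would make B ∈ B equivalent to B ∉ B. A toy check over hereditarily finite sets (the name `russell_escape` is mine; note that Python frozensets are well-founded, so here x ∉ x always holds, B equals A, and the assertion reduces to A ∉ A):

```python
def russell_escape(A):
    """B = {x in A : x not in x}: separation builds B, and
    B is provably not an element of A."""
    return frozenset(x for x in A if x not in x)

# A few hereditarily finite sets: 0 = {}, 1 = {0}, 2 = {0, 1}.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

for A in (zero, one, two, frozenset({one, two})):
    B = russell_escape(A)
    # If B were in A, then "B in B" iff "B not in B": contradiction.
    assert B not in A

print("found an escapee B for every A tested")
```

So inside ZFC the construction is an ordinary theorem about an ordinary set, not a paradox; the paradox only arises if you additionally assume a set of all sets.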
What encoding scheme would you use to encode arbitrary, possibly infinite, sets in a computer?
I could, worst case, use the encoding scheme you use to write them down on paper when you prove things about them.

Great post, Alicorn! I think there are some arguments similar to "But somebody would have noticed" that are used to discredit any unusual hypothesis, and which I have already read several times on LW. They are:

Regarding conspiracy theories:

  1. "If this were true, some whistleblower would step forward."
  2. "You are privileging the hypothesis, because its prior probability is much too low."