
During a discussion in my previous post, when we touched the subject of human statistical majorities, I had a side thought. Taking the Less Wrong audience as an example, the statistics say that any given participant is strongly likely to be white, male, atheist, and, just going by general human statistics, probably heterosexual.

But in my actual interactions, I've made it a rule not to make any assumptions about the other person. Does that mean, I thought, that I reset my prior probabilities and consciously choose to discard information? Not relying on implicit assumptions seems the socially right thing to do; but is it rational?

When I discussed it on IRC, this quote by sh struck me as insightful:

I.e. making the guess incorrectly probably causes far more friction than deliberately not making a correct guess you could make.

I came up with the following payoff matrix:

|  | Bob has trait X (p = 0.95) | Bob doesn't have trait X (p = 0.05) |
| --- | --- | --- |
| Alice acts as if Bob has trait X | +1 | −100 |
| Alice acts without assumptions about Bob | 0 | 0 |

In this case, the second option has the higher expected payoff: guessing yields 0.95 × 1 + 0.05 × (−100) = −4.05, versus 0 for withholding the guess. In other words, I don't discard the information, but the repercussions to our social interaction in case of an incorrect guess outweigh the benefit from guessing correctly. It also matters whether either Alice or Bob is an Asker or a Guesser.

One consequence I can think of is that with a sufficiently high p, or if Bob wouldn't be particularly offended by Alice's incorrect guess, taking the guess becomes preferable. Now I wonder if we do that a lot in daily life with issues we don't consider controversial ("hmm, are you from my country/state too?"), and if all the "you're overreacting/too sensitive" complaints come from Alice underestimating the magnitude of the negative payoff in the guess-wrong cell.
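A quick sketch of the arithmetic, using the hypothetical payoffs and p from the matrix above:

```python
def expected_payoff_of_guessing(p, gain=1.0, penalty=100.0):
    """Alice's expected payoff for acting as if Bob has trait X."""
    return p * gain - (1 - p) * penalty

def break_even_p(gain=1.0, penalty=100.0):
    """Probability of the trait above which guessing beats abstaining (payoff 0)."""
    return penalty / (gain + penalty)

print(round(expected_payoff_of_guessing(0.95), 2))  # -4.05: abstaining wins
print(round(break_even_p(), 3))                     # 0.99: guess only if p > ~99%
```

With a −100 penalty, even a 95% majority isn't enough; the break-even point sits near 99%, which is why a small chance of giving offense can dominate the decision.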


Your analysis looks correct to me, but if you think in terms of "causing friction", you're using an assumption of personal responsibility that makes you exploitable in a game-theoretic world.

If person A would benefit from stereotyping person B, and person B doesn't mind being stereotyped, then person C can screw up the chance of that positive-sum interaction happening by precommitting to be very offended if A stereotypes C (or even B). Game-theoretically this is like threatening to burn other people's money if an undesirable event occurs. To determine who is "wrong" and who should change their behavior in this situation, you probably need to look for arguments outside the game. Maybe A is wrong by stereotyping; maybe B is wrong by condoning A's behavior and betraying the group; maybe C is wrong by screwing up others' positive-sum interactions. But if you just assume that any offense is always the "fault" of the one who offends, you're ignoring the reality of people who take offense to further their own goals, consciously or subconsciously.

One possible solution is to use aggregate utility. Would the world contain more utility if stereotyping didn't exist - compared to a world where stereotyping is widespread and no one takes offense? If yes, then you can make a case that stereotyping is analogous to defection in the PD. If no, then you can make a case that taking offense at stereotyping is like defection in the PD - it benefits your group, but hurts the world overall.

Speaking of precommitment to be offended. Would a perfectly rational B be offended at all by an incorrect guess? Granted, humans aren't perfectly rational, nor do they exist in a vacuum.

I'm rarely genuinely offended by stereotyping - I prefer to just politely point out the mistake. Sometimes, though, I prefer to act as if I were offended, if it's socially acceptable to be offended in the situation and I believe doing so furthers my goals.

Would a perfectly rational B be offended at all by an incorrect guess?

I eventually came out with a contingent "yes" to this question, but it took me a while to get there, and I don't entirely trust my reasoning.

As stated, I wasn't sure how to go about answering that question.

But when A guesses about B, this reveals facts about A's priors with respect to B. So this question seemed isomorphic to "Would B be offended by A believing certain things about B?" which seemed a little more accessible.

But I wasn't exactly sure what "offended" means, at this level of description. The best unpacking I could come up with was that I'm offended by an expressed belief when I subconsciously or instinctively choose to signal my strong rejection of that belief.

If that's true, then I can rephrase the question as "Would a perfectly rational B subconsciously or instinctively choose to signal strong rejection of certain beliefs about B?"

If B has saliently limited conscious processing ability (limited either by speed or capacity) then my answer is a contingent "yes."

For example, a perfectly rational B might reason as follows: "Consider the proposition P1: 'B is willing to cheat'. Within a community that lends weight to my signaling, there is value to my signaling a strong rejection of P1. Expressing offense at P1 signals that rejection. Expressing offense successfully depends on very rapid response; if I am seen as taking time to think about it first, my offense won't signal as effectively. So I do better to not think about it first, but instead instinctively express offense without thinking. In other words, I do better to be offended by the suggestion of P1. OK, let me go implement that."

In this example, B's conscious processing speed forms the salient limitation, but what's important here is the general condition that an unconscious result has value relative to a conscious one.

The specific value provided in this example is less important; there are lots of different equivalent examples.

Depends on B's preferences. If B is selfish or cares about their group disproportionately, then yeah, it may be perfectly rational to take offense.

I'm not an aggregate utilitarian - I believe it's okay to be selfish at the expense of everyone else (within limits, e.g. I wouldn't nuke New York to get an ice cream). But from the viewpoint of total utility, you may or may not be hurting the world overall by adding "friction". I'm kind of on the fence about this - please don't take my grandparent comment as claiming that taking offense is always negative-sum. I was just pointing out the possibility, but to get the right answer you have to imagine the two worlds and calculate.

Hmmm.

I wonder, then, if it's logically coherent to talk about a world where nobody gets offended by stereotyping without changing something fundamental about humans. Are we sure we'll be talking about a human world in that case?

Whaaaaa? Humans started getting offended by gender stereotypes at all only very recently, and not everywhere. If you're an American, you need to realize that your country is far and away the #1 in the world when it comes to taking offense at stereotyping, and most of the world's population considers that notion very weird even today.

I'm a Russian, actually.

Now I'm wondering if I got offended by the above post for a falling-back-to-defaults assumption... :)

Getting offended by conditionals is an idea I can't quite get my head around, but okay. I'm Russian too, by the way. Hiyah :-)

Using that conditional pretty much signalled that your prior on "lucidfox is American" was higher than your prior on any other nationality, which reflects the stereotype of a typical LessWronger. Speech does not magically become immune to offensiveness by including an "if". But if you are an evil idiot, you would not understand that :)

Hmm. If I don't even need to do stuff to offend someone - if just possessing accurate priors is sufficient - then I guess I'll go ahead and take the hit. If LW, of all places, begins to punish people for having accurate priors, then as a longtime contributor I feel a sense of duty to introduce a little friction. Throw some sand into the gears. And you should do the same, IMO.

The whole business about stereotypes is about possessing accurate priors, and behaving in a way that reveals them. Which you did.

Of course, I do not suggest that LW should punish people for having accurate priors, and I probably don't disagree with you about offense (or offence?) in general. I actively try not to take offense at stereotypes. But a lot of users here do their best to conceal their priors, e.g. about the gender distribution among scientists, by diligently balancing the use of male and female characters in their stories. I have no strong opinion about that. I only wanted to emphasise that people take offense at the revealing of some priors.

The whole business about stereotypes is about possessing accurate priors, and behaving in a way that reveals them. Which you did.

I had thought, perhaps idealistically, that you'd have to actually hurt someone. Like refuse to hire them because they have blue skin. If my behavior isn't hurting anyone, then I object to your calling it a derogatory name ("stereotyping"). This also extends to the case where people choose, consciously or subconsciously, to get offended at my non-hurtful behavior just to teach the world a lesson or something. That's about as well-founded as getting offended at gay people doing their gay thing.

I'm not sure what you think the difference between "people choose, consciously or subconsciously, to get offended" and "people get offended" is.

Regardless: some people get upset when they think I believe, based on their group membership G, that they have an attribute A. Sometimes this happens even when A is more common in G than in the general population.

Perhaps this is unreasonable when A is "is American" and G is "LessWrong".

Perhaps it's also unreasonable when A is "has a criminal record" and G is "American black man."

But the fact remains that people do get upset by this sort of thing.

If we want to establish the explicit social norm on LessWrong that these sorts of assumptions are acceptable, that's our choice, but let's at least try not to be surprised when outsiders are upset by it.

Edit: Actually, on thinking about it, I realize I'm being a doofus. You almost undoubtedly meant, not inferring A from G when A is more common in G than in the general population, but inferring A from G when A is more common than -A in G, which is a far more unreasonable thing to be upset about. My apologies.

Edit: Actually, on thinking about it, I realize I'm being a doofus. You almost undoubtedly meant, not inferring A from G when A is more common in G than in the general population, but inferring A from G when A is more common than -A in G, which is a far more unreasonable thing to be upset about. My apologies.

It's very interesting that you made this mistake (I didn't notice it until you pointed it out, and might well have made the same one).

It seems that the human mind doesn't distinguish sharply between the two claims: "blacks are more likely than non-blacks to have a criminal record" and "blacks are more likely than not to have a criminal record". Maybe by default the non-verbal part of the brain stores the simpler version (the second one), and uses that to constrain expectations and behavior.
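The two readings come apart easily. Here's a toy illustration; the attribute A, the group G, and every number are made up for the example, not real statistics:

```python
# Made-up conditional probabilities, purely for illustration.
p_A_given_G = 0.30     # P(A | member of G)
p_A_given_notG = 0.08  # P(A | not a member of G)

# Reading 1: "A is more common in G than outside it" -- true here.
print(p_A_given_G > p_A_given_notG)  # True

# Reading 2: "a member of G is more likely than not to have A" -- false here.
print(p_A_given_G > 0.5)             # False
```

A prior can make A several times more likely for members of G while still leaving "this person probably has A" far from true.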

I don't think it's a question of what gets stored so much as what gets activated.

That is, if I have three nodes that "represent" inferring A from G when A is more common in G than in the general population (N1), inferring A from G when A is more common than -A in G (N2), and the word "stereotyping" (N3), and my N1->N3 and N2->N3 links are stronger than N1 and N2's links to any other word, and the N3->N1 link is much stronger than the N3->N2 link, then lexical operations are going to make this sort of mistake... I might start out thinking about N2, decide to talk about it, therefore use the word "stereotyping," which in turn strongly activates N1, which displaces N2.

This is why having distinct words for minor variations in meaning can be awfully useful, sometimes. I'm willing to bet that if we agreed to use different words for N1 and N2, and we had enough conversations about stereotyping to reinforce that agreement, we'd find this error far less tempting, easier to notice, and easier to correct.

See the sequence on A Human's Guide to Words for more on this subject.

Cool! I'd read at least most of these, and the ideas aren't new, but I hadn't realized they were all linked in one place. Thanks for the pointer.

What strikes me here is the human tendency to mark one option as the default and the other as a special case.

However, it makes me wonder: if the person making the judgement belongs to the category commonly considered "a special case", will they mentally mark either category as the default? Judging by myself (yes, yes, generalizing from one example), among the intersection of social partitionings that define me, I tend to skip ones where I'm in the majority category (for example, white, or specifically on LW, atheist), and in cases where I'm a minority, treat neither option as the implicit default.

Efficiency of encoding, perhaps?

As I recall, for some categories this turns out, surprisingly, not to be the case. Women are as likely as men to consider a person of unspecified gender male, for example, and blacks are as likely as whites to consider a person of unspecified color white... at least, in some contexts, for some questions, etc. (I would very much expect this to change radically depending on, for example, where the study is being performed; also I would expect it to be more true of implicit association tests than explicit ones.)

I have no citations, though, and could easily be misremembering (or remembering inconclusive studies).

Edit: Actually, on thinking about it, I realize I'm being a doofus. You almost undoubtedly meant, not inferring A from G when A is more common in G than in the general population, but inferring A from G when A is more common than -A in G, which is a far more unreasonable thing to be upset about. My apologies.

Strictly speaking, you should adjust your probability estimate of the person having attribute A either way. How you then act depends on the consequences of making either error; e.g., the consequences of falsely assuming someone isn't a violent criminal can be more serious than the reverse.

Yes; it would have been more precise to say "inferring an inappropriately high probability of A from G", rather than "inferring A from G."

And you're right that what I do based on my derived probability of A is independent of how I derived that probability, as long as I'm deriving it correctly. (This is related to cousin_it's original complaint that holding unflattering beliefs about people, when I don't actually hurt them based on those beliefs, ought not be labeled "stereotyping", so in some sense we've closed a loop here.)

Thank you for pointing out the distinction between the two kinds of stereotyping - I didn't see it quite so clearly before.

What does "actually hurt" cover? Would it include not feeling comfortable around some people, and therefore being quietly non-friendly towards them?

This sounds very much like you are defining your behaviour as non-hurtful such that anyone objecting to it is then axiomatically in the wrong. If that's not what you meant, do please elaborate.

If there's no substance to their objections beyond "I am offended at this general pattern of behavior", then it sounds like they are in the wrong, no? When a commoner crosses a noble's path without proper kowtowing, the noble may feel very offended indeed, and even have the commoner whipped; but in our enlightened times we know better than to agree with the noble, because the commoner hasn't hurt the noble in any way. That's the moral standard I'm applying here.

Also consider the analogy with gays. What is it that tells you people shouldn't get offended by others' homosexuality? Would you be sympathetic to someone claiming gays should change their behavior in public because he's genuinely hurt by it, or would you consider that person "axiomatically in the wrong"? If the latter, didn't you just apply an instance of the general standard that actually non-hurtful behavior is okay even though some people may complain - and even be sincere in their complaints?

I didn't mean the word "stereotype" as derogatory, but you are possibly right that it bears negative connotations, so let's use another one. "Accurate priors", maybe?

I thought that people usually get offended by words, and less often by deeds. "Offended" associates in my mind with Muslims burning Danish flags, or men in a rage after being called cowards, rather than with unsuccessful job applicants. The applicant may be disappointed, angry, sad, maybe desperate, but I would be surprised if he said he was offended.

But let this not become a dispute about semantics. I suppose there is no real disagreement.

Well, the point of my including an emoticon there was that my feelings on this are confused at best. It more likely indicates that I'm upset with the underlying cause (ideally, I would prefer Less Wrong to be a truly international website where the prior for any "user X is nationality Y" is low, and that's something I see as worth striving for) than with the specific hypothesis that I'm American, which can easily be discarded before it's used to draw any conclusions.


A related problem: replacing the majority with the norm.

Most Americans are Christians. Given a random American, he/she is more likely to be Christian than anything else. It may be a safe bet to say Merry Christmas (especially since few people are offended by hearing Merry Christmas even if they're not Christian.) So far, that's just reacting rationally to the fact that Christians are a majority.

But it starts to get unsettling when the majority is regarded as the norm -- when people refer to the United States as "a Christian nation," for instance, with a normative rather than a statistical implication. There's a difference in thinking "Most Americans are Christian, but some are not," and thinking "Americans are Christian. (Except for a few aberrations.)" The latter has the connotation that non-Christians are less American.

You can apply this to all kinds of majority/minority things. "Most people are straight, but some are gay" as opposed to "People are straight. Except for some aberrations." "Most mathematicians are men, but some are women" as opposed to "Mathematicians are men. Except for some aberrations." "Many cultures share a similar standard of beauty, but there are some differences" as opposed to "There is one standard of beauty. Except for aberrations."

People are known to have a bias of rounding up high probabilities (treating 90% as practically certain) and rounding down low probabilities (treating 10% as practically impossible.) It's possible that this has an effect on the way we think about minority populations -- we mentally approximate a population that's 95% A and 5% B as "basically" 100% A, and we don't always distinguish in our intuitions between a 5% and a 0.05% population.

Moral of the story: it may be rational to assume that a given person in a group is a member of the majority, but remember to correct for your tendency to slip over the edge from "majority" to "normal" or "standard".

Scope insensitivity could also be a factor here.

In absolute terms, 5% of the United States' population is about 15 million. That's 1.5 times the population of Belgium, Portugal, etc., and only 65 out of 224 countries have a population higher than that.


I don't detect a difference between the two universes being described by

• "most people are straight and some are gay" and
• "people are straight, except for some aberrations"

except that, all things being equal, I would suspect that whoever uttered the second phrase was more likely to disapprove of homosexuality than whoever uttered the first. But is reality being described any less accurately by one of these two phrases? How would we go about discovering which phrase was more accurate?

But is reality being described any less accurately by one of these two phrases?

When comparing two predictions, the better prediction is the one that leads to less surprise.* That means you may often treat false positives as much less important than false negatives, or vice versa. To me, the difference between the two phrasings is which error they favor: the first is likely to overestimate the chance someone is gay, whereas the second is likely to underestimate it. Given that the damage done by a wrong guess is asymmetric, which error you favor should likewise be asymmetric.

*I say this instead of "is right more often" because when it's wrong in a spectacular way that should be counted multiple times. If you say "well, 5/6ths of people haven't been sexually assaulted, so I can make rape jokes and be ok 5/6ths of the time!" then you are cruelly underestimating the damage done by making a rape joke to a rape survivor. When you count it in terms of surprise, you get the better result of "always assume someone could be a rape survivor."
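The asymmetry can be made concrete with a toy expected-cost calculation; every number below is hypothetical:

```python
def expected_cost(p_trait, guess_trait, cost_fp=1.0, cost_fn=10.0):
    """Expected cost of committing to a fixed guess about one person.

    cost_fp: cost of wrongly assuming the trait is present (false positive).
    cost_fn: cost of wrongly assuming the trait is absent (false negative).
    """
    if guess_trait:
        return (1 - p_trait) * cost_fp  # wrong only when the trait is absent
    return p_trait * cost_fn            # wrong only when the trait is present

p = 0.05  # a sizable minority
print(round(expected_cost(p, guess_trait=False), 2))  # 0.5
print(round(expected_cost(p, guess_trait=True), 2))   # 0.95
```

Even with the false-negative cost ten times the false-positive cost, assuming the trait is absent still wins here; push cost_fn past 19 and the optimal guess flips, which is the "count spectacular errors multiple times" point in the footnote.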

Agreed with what SarahC says, but will add to it that your suspicions about the speaker ("that who uttered the second phrase was more likely to disapprove of homosexuality than who uttered the first phrase") are not irrelevant.

That is, if the speaker doesn't disapprove of homosexuality, then the second phrase is conveying misleading information about the speaker, who is real. In this case, yes, reality is being described less accurately by the second phrase.

By the same token, if the speaker does disapprove, then reality is being described less accurately (along this axis) by the first phrase.

Also, it's worth asking why you conclude what you do about the speaker. It seems likely to me that it's not an idiosyncrasy of yours, but rather that you are responding to connotations of the word "aberration" which are communicated by the second phrase... specifically connotations involving, not only the statistical likelihoods, but the perceived social value of people who are/aren't straight.

One could therefore determine which phrase was more accurate in a particular society by looking at how people's value to that society varies with their orientation.

Something similar might be true about perceived moral value, but talking about moral value as part of reality is more problematic.


Also, it's worth asking why you conclude what you do about the speaker. It seems likely to me that it's not an idiosyncrasy of yours, but rather that you are responding to connotations of the word "aberration" which are communicated by the second phrase... specifically connotations involving, not only the statistical likelihoods, but the perceived social value of people who are/aren't straight.

I think this is only due to the fact that we're both aware of political battles over homosexuality. I don't read any disapproval of six-fingered people into SarahC's comment below.

Are you suggesting that people with different political opinions should use different language to describe the same reality, or merely that they do?

Merely that they do.


3-8% of Americans are gay (more like 5% in the UK.) That's a true statement. Guessing that an arbitrary person is straight is perfectly kosher, from a Bayesian perspective.

Here's the thing. Most of us would say that being left-handed is a minority trait, while being six-fingered is an anomaly or aberration. About 15% of people are left-handed; about 0.2% of people are six-fingered. Take that as a benchmark. Then being gay is more like being left-handed than it is like being six-fingered.

And, at 20-30%, women scientists should definitely belong in the "left-handed" rather than "six-fingered" category.

If you start thinking of a sizable minority as though it's as rare and strange as a very small minority, then you're making a mistake.

3-8% of Americans are gay (more like 5% in the UK.)

Isn't 5% just 3-8% that forgot to state its error margins?


You're literally saying it's a fuzzy measure of magnitude, like the difference between "big" and "huge"? That makes the stakes seem pretty low. Why quibble over them?

How do you feel about six-fingered scientists?


Yeah, it's a fuzzy measure of magnitude. I was trying to quantify why a stereotype can be wrong (in addition to just bothering some people) and I think that what makes stereotypes actually incorrect is the human tendency to approximate "most" by "all."

In Kahneman and Tversky's prospect theory, there is evidence that people do not react to the differences between small probabilities. It's conceivable that sometimes people treat a small minority as though it were a tiny minority, virtually non-existent. (On the other hand, there's some evidence that people overestimate small probabilities, which makes this argument weaker. So I take it back.)

The other way stereotyping can be a mistake has to do with the conjunction fallacy. Perhaps most X's are A, most X's are B, and most X's are C. It does not follow that most X's are A and B and C. Something that is A and B and C is a "most representative" element, but most X's are not "representative."

This is the old platitude that "there is no typical student at our school." It would be truer to say that the most typical students are usually rare. But people will assume that a student from that school is like that rare "typical student." This is a form of stereotyping which is actually inaccurate. (As opposed to "offensive but accurate," which is what many people claim stereotypes to be.)
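A toy version of the conjunction arithmetic, assuming (purely for illustration) that the three traits are independent and the probabilities are made up:

```python
# Each trait alone holds for "most" X's...
p_A = p_B = p_C = 0.6

# ...but, assuming independence, the "representative" X is rare:
p_all = p_A * p_B * p_C
print(round(p_all, 3))  # 0.216
```

Correlations between the traits can raise this, but the conjunction can never exceed the smallest of the three marginal probabilities, so "most X's are A and B and C" needs far stronger evidence than the three "most" claims separately.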

A third type of inaccurate stereotyping is mistaking P(B|A) for P(A|B). Most criminals are men, but most men are not criminals.
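Bayes' rule makes the gap between the two conditionals explicit; the numbers below are invented for illustration, not real crime statistics:

```python
# Invented numbers, for illustration only.
p_criminal = 0.02            # P(has a criminal record)
p_man = 0.50                 # P(man)
p_man_given_criminal = 0.90  # P(man | criminal record): most criminals are men

# Bayes: P(criminal | man) = P(man | criminal) * P(criminal) / P(man)
p_criminal_given_man = p_man_given_criminal * p_criminal / p_man
print(round(p_criminal_given_man, 3))  # 0.036: most men are not criminals
```

A 90% conditional in one direction coexists with a sub-4% conditional in the other, because the base rate of the evidence class (men) dwarfs the base rate of the hypothesis (criminals).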


These are all good points. Given that 20-30% of scientists are women, it's misleading to say "scientists are normally men" without quantifying "normally." And though most users of this website are Americans, and men, and hetero, and college-educated, possibly it is not normal for them to be all of those at once (I could have picked better examples). But I don't like the idea of people scoring less-parochial-than-thou points off each other through trivial mistakes along these lines. Maybe that doesn't happen.

A third type of inaccurate stereotyping is mistaking P(B|A) for P(A|B). Most criminals are men, but most men are not criminals.

The usual emphasis on this website is the close relationship between these probabilities, and its important consequences.

This is a very common mistake: forgetting that probabilities never map to actions (including the action of revealing your beliefs) without a utility function in between. But strangely, I don't remember any posts that specifically address it.

A related thought about the act of revealing your beliefs: what expected utility would you assign to shouting "The Emperor has no clothes!" when the Emperor, indeed, has no clothes?

If you're selfish, you would probably refrain from it under fear of punishment.

But even if you care about the overall utility of the society rather than your personal utility, revealing the truth about the Emperor's clothes could cause rebellions and anarchy, or collapse of whole markets and fields of discourse centered about His Majesty's supposed sophisticated attire.

Does this mean that we can't factor politics completely out of rationality discussions, even when dealing with facts that are objectively and unambiguously true? In "Three Worlds Collide", for example, there is a backstory element where scientists chose to suppress a scientific discovery for fear of disastrous social effects if it were made public.

A related thought about the act of revealing your beliefs: what expected utility would you assign to shouting "The Emperor has no clothes!" when the Emperor, indeed, has no clothes?

Maybe I'm big on virtue ethics and consider always striving to speak the truth virtuous. What if I think that me, or society, "living a lie" is something bad? People have been willing to die painfully and foment social unrest over what they consider truth since... forever, so I'm not even sure it's that exotic a mindset.

Related trivia: in Australia, it is or used to be polite to assume an American-sounding accent meant a Canadian unless and until they said otherwise (or were wearing a great big flag or similar), despite the larger proportion of American tourists over Canadian.

I've heard that this is because Canadians tend to get angry when people assume they're American, but I have to wonder if Americans wouldn't be just as likely to get angry. People just had more available examples of Canadians getting angry because they would tend to assume American sounding accents meant American.

I haven't seen any get angry at being mistaken for Americans. Just sorrowful.

I think it's a polite presumption of ability, because Canadians are just like Australians from the anti-Earth on the opposite side of the sun, where it's cold instead of hot. They also understand humour, not just humor. Canadians are Commonwealth, Americans aren't.

New Zealanders are a closer equivalent to Canadians in my book: always mistaken for their more numerous neighbours. From now on in cases where I can't be very sure, I'm assuming that North American English accents are Canadian and Antipodean English accents are New Zealander. While I'm at it, Germanic accents are Swedish, Scandinavian accents are Norwegian, and so on around the world.

Where do you live? I've been amazed as an Australian in London to be taken as a South African. WHAT. My South African friends have been just as amazed my accent could be taken as that.

(insert paragraph about accents, inferential distance, cladistics applied to accent analysis, spotting an accent in a language not your native one, etc)

Cambridge, MA right now, but I'm a native speaker of Hiberno-English. Given more than a few sentences it is usually easy for me to distinguish between South African and Australian/New Zealander. I'm not sure if I regard the three as a cluster but there is definitely some kind of similarity, at least to Irish and UK ears.

I'm not exposed to enough New Zealander to be sure of the distinction between it and Australian, and I think there's significant overlap especially among careful/educated/urban/middle-class speakers.

For reference I can distinguish and name about 5 North American accents (if they're strong enough), about 10 UK ones, and about 12 Irish ones, maybe more.

New Zealanders are a closer equivalent to Canadians in my book: always mistaken for their more numerous neighbours.

And in both cases the confusion lasts only until they speak a keyword: "six" or "out", respectively!