Suppose there were a cheap way to make nuclear weapons out of common household materials. Knowing this information is not directly harmful to a median individual like yourself. On average, it might even benefit you to know that secret yourself: either because you could directly conquer places, or because you could sell the info to somebody who could. But society as a whole would rather that nobody knew it than that everybody knew it.

For a more realistic example, consider the DNA sequence for smallpox: I'd definitely rather that nobody knew it, than that people could with some effort find out, even though I don't expect being exposed to that quoted DNA information to harm my own thought processes.

'Infohazard', I think, should be used only to refer to the slightly stranger case of information such that the individual themselves would rather not hear it and would say, "Don't tell me that".

A spoiler for a book you're in the middle of reading is the classic example of information that is anomalously harmful to you personally. You might pay a penny not to hear it, if the ordinary course of events would expose you to it. People enclose info about the book's ending inside spoiler tags on Reddit or Discord, not because they mean to keep that secret to benefit themselves and harm you - as in the more usual case of secrecy in human interactions - but because they're trying to protect you from info that would harm you to know.

The term 'infohazard' has taken on some strange and mystical connotations by being used to refer to 'individually hazardous info'. I am worried about using the same word to also refer to information alleged to be dangerous in the much more mundane sense of 'collectively destructive info' - info that helps the average individual hearer, but has net negative externalities if lots of people know it.

This was originally posted by me to Facebook at greater length (linkpost goes to original essay).  Best suggestions there (imo) were 'malinfo' and 'sociohazard', and if I had to pick one of those I'd pick 'sociohazard'.

EDIT:  I like shminux's 'outfohazard' even better.

EDIT 2:  My mind seems to automatically edit this to 'exfohazard' so I guess that's what the word should be.

We already have a Schelling point for "infohazard": Bostrom's paper. Redefining "infohazard" now is needlessly confusing. (And most of the time I hear "infohazard" it's in the collectively-destructive smallpox-y sense, and as Buck notes this is more important and common.)

If Bostrom's paper is our Schelling point, 'infohazard' encompasses much more than just the collectively-destructive smallpox-y sense.

Here's the definition from the paper.

Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.

'Harm' here does not mean 'net harm'. There's a whole section on 'Adversarial Risks', cases where information can harm one party by benefitting another party:

In competitive situations, one person’s information can cause harm to another even if no intention to cause harm is present. Example:  The rival job applicant knew more and got the job.

ETA: localdeity's comment below points out that it's a pretty bad idea to have a term that colloquially means 'information we should all want suppressed' but technically also means 'information I want suppressed'. This isn't just pointless pedantry.

Yeah, that concept is literally just "harmful info," which takes no more syllables to say than "infohazard," and barely takes more letters to write. Please do not use the specialized term if your actual meaning is captured by the English term, the one which most people would understand immediately.

I kinda agree. I still think Bostrom's "infohazard" is analytically useful. But that's orthogonal. If you think other concepts are more useful, make up new words for them; Bostrom's paper is the Schelling point for "infohazard."

In practice, I'm ok with a broad definition because when I say "writing about that AI deployment is infohazardous" everyone knows what I mean (and in particular that I don't mean the 'adversarial risks' kind).

Buck:

I agree that it's inconvenient that these two concepts are often both referred to with the same word. My opinion is that we should settle on using "infohazard" to refer to the thing you're proposing calling "sociohazard", because it's a more important concept that comes up more often, and you should make up a new word for "knowledge that is personally damaging". I suggest "cognitohazard".

I think you'll have an easier time disambiguating this way than disambiguating in the way you proposed, among other reasons because you're more influential among the people who primarily think of "cognitohazard" when they hear "infohazard".

My personal exposure to the term "infohazard" comes primarily from fiction where it referred to knowledge that harms the knower.  (To give an example I recently encountered:  Worth the Candle.)

My model predicts that getting scholars to collectively switch terminology is hard, but still easier than getting fiction authors to collectively switch terminology.  I don't think there's any action that could plausibly be taken by the LessWrong community that would break the associations that "infohazard" currently has in fiction.

Even if you could magically get all the authors to switch to "cognitohazard", I don't think that would help very much, because "infohazard" is similar enough that someone who isn't previously aware of a formal distinction between them is likely to map them onto the same mental bucket.

If I had godlike powers to dictate what terms people use, I wouldn't use any term containing the word "hazard" to refer to information that is harmless to you but that someone else doesn't want you to know.  This flies in the face of my intuitive sense of how the term "hazard" is commonly used.  That's, like...imagine if some plutocrats were trying to keep most people poor so that they could control them better, and they started referring to money as "finance-hazard" or something; this term would strike me as being obviously an attempt at manipulation.  If the person calling something a "hazard" does not themselves want to be protected from it, then I call BS.

One way to change it might be to convince the writers/editors of the SCP Foundation wiki to clarify definitions in their fiction—that seems to be the source of most modern uses of the term, though it's likely already too late for that.

gjm:

To me "cognitohazard" seems like a good term for basilisks and their less exotic brethren -- things that can somehow mess up your thinking when you hear them -- but not for things more like spoilers. I'm not sure "infohazard" is great for that either but it seems less weird to me. (I don't think I would ever refer to a spoiler as either an "infohazard" or a "cognitohazard".)

Separately: Perhaps "infohazard" is, at present, unfixably ambiguous and we should use (say) "cognitohazard" for things that are individually harmful and "sociohazard" for things that are collectively harmful, and "infohazard" not at all.

I find the distinction (even within the personal level) interesting.

"Cognitohazard" to me sounds "this will mess up your thinking".

But quite often, the hazard is that it will mess up your emotional state, and I'd want a different word for that. I mean with a spoiler, this is just a reduction in excitement and engagement. But a lot of online spaces can be far more intense in what they do to your emotions.

I also find it interesting that the harm can be the information itself (no matter how it is presented), or the presentation, or maybe just repetition with minimal variations? E.g. I have difficulty pinpointing the particular information I gain from reading online spaces occupied by people who are violent or depressed that I would characterise as hazardous (there is no truth they know that I find inherently compelling; I could point to something like "many people are sad" or "many people hate people like me" or "there are many reasons to despair", but saying them now, I do not find them that depressing), yet I find it impossible to spend extensive time in such spaces without my mood tanking.

I've always thought brainwashing techniques sounded silly, but I wonder now whether the repetition itself eventually does convince your brain that there is something to it. There was that interesting finding with Facebook content moderators who basically spent their whole workday looking at flagged content: a lot of them... became conspiracy theorists, or paranoid, or started making ever more edgy jokes; their behaviour and thinking started changing. When they would explain the conspiracy theories, they did not have any particularly compelling information. They had just read them repeatedly from all sorts of people, referenced as known and obvious, until their brain began to shift.

It's not an in-fohazard, it's an out-fo hazard.

I think this is actually my favorite yet.

Yitz:

How about "Commuhazard," as in "community hazard"? For some reason I have trouble stating explicitly (which frustrates me, but my intuition has a good track record with this sort of thing), I place very low comparative probability of "outfohazard" becoming a widely used term.

"Communohazard" seems more likely.  Or "communoinfohazard"; I'd be inclined to object to the implied claim that, when we think of the category of things that are hazardous to a community, what comes to mind should be "information that (allegedly) should be suppressed".

For some reason that I have trouble stating explicitly (which frustrates me, but my intuition has a good track record with this sort of thing), I place a very low comparative probability on "outfohazard" becoming a widely used term.

I agree.  In the context where someone has just said "infohazard" and you say "I think you mean out-fohazard", the meaning is clear and it's an excellent retort; but if someone just said "outfohazard" by itself, I think I'd say "... what?".  Every time someone used the word alone, I'd be confused by what "outfo" is and have to remind myself (or, the first few times, be reminded by someone else or by Google) that it's a play on "info", and I'd resent this... "out-fo-hazard", "out-fo hazard", possibly "outfo hazard" might be better in that respect.

I think “communohazard” sounds a lot better than my suggestion, and is probably the best so far that I’ve seen.

I like how "communohazard" combined "community hazard" and "communicable hazard". (Whereas "commuhazard" sounds like it would occur with terms like "cultural marxism", and hence does not work).

But I agree, not having "info" in the word anymore makes it less intuitively understandable. "Communohazardous info, aka CHI"?

Though honestly, is the jargon actually necessary? Can't we just say "I think this information spreading is bad (for the individual / for the community)", and have it be instantly comprehensible?

This was originally posted by me to Facebook at greater length (linkpost goes to original essay). Best suggestions there (imo) were 'malinfo' and 'sociohazard', and if I had to pick one of those I'd pick 'sociohazard'.

I think "recipe for destruction" was a better option then those, because it's straightforwardly clear from the words what it means.

Paraphrasing some of the things Jessica says:

  • Bostrom's original definition of "infohazard" covers scenarios of the form "someone would like to suppress this information" in addition to "if you learn this information it will harm you".
  • Scenarios where "someone would like to suppress this information" are much more common than "self-infohazard" scenarios.
  • When someone wants to suppress information (such as a cult leader who's secretly abusive), it is in their interest to make others believe that it's a "self-infohazard" (including with arguments like "They're not ready to learn how I've been abusing them; it would shatter their worldview"), or possibly a "socio-infohazard" (where it's in people's collective interests to not know the leader's crimes—"It would fracture the community", "Having everyone listen to me brings order and unity and you don't want to ruin that", etc.).

I would add that it's probably best if our vocabulary choices don't make it easy for bad actors to make superficially-plausible claims that suppressing the information they want suppressed is good and virtuous.  I would say that the word "infohazard" itself sounds to me (and, I suspect, to the naive layperson) like the "directly hazardous to me if I learn it" sense.

Therefore, it's probably a bad thing if serious people believe that "infohazard" technically means Bostrom's maximally-broad original definition, because that makes it harder for serious people to protest when e.g. someone says "The fact that some people in our organization screwed up X is an infohazard, so we shouldn't publicly mention it".  Accepting that definition is like creating a motte-and-bailey term.  (Incidentally: Learning that someone has done something bad is usually at least slightly unpleasant—more so if it's a partner or leader—and therefore it's ~always possible to make at least a slight case that "knowledge of my bad behavior" is a self-infohazard.)

I would suggest the following rule as an antidote: Anytime someone says the unqualified term "infohazard", it means "information that I want suppressed"—i.e. we should be suspicious of their motives.  If they have good reasons to suppress information, they should have to state them upfront.  More specific terms might be "social infohazard", "existential infohazard", "cognitive infohazard", etc.  I'll also note Eliezer's suggestion of "secret":

We already have a word for information that agent A would rather have B not know, because B's knowledge of it benefits B but harms A; that word is 'secret'.

By the way, I'm kind of weirded out by the idea that we need short terms that mean "information that should be suppressed" (i.e. in terms of Huffman coding, "we should use a short word for it" = "it is very common"), and furthermore that it's rationalists who are trying to come up with such words.  I think it's ultimately for innocent reasons—that Bostrom picked "information hazards" for the title of his paper, and people made the obvious catchy portmanteau; still, I don't want to push the language in that direction.
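(For anyone who hasn't met Huffman coding: the point of the analogy is that an optimal prefix code assigns the shortest codewords to the most frequent symbols, so "let's give this concept a short word" implicitly predicts "this concept will come up a lot". Below is a minimal illustrative sketch in Python, not from the original comment; the sample sentence is made up.)

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: codeword length in bits} for a Huffman code over `symbols`."""
    freqs = Counter(symbols)
    # Each heap entry: (subtree frequency, tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, depths1 = heapq.heappop(heap)
        f2, _, depths2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every symbol beneath them.
        merged = {sym: d + 1 for sym, d in {**depths1, **depths2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

words = "the the the the cat sat on the mat".split()
for word, bits in sorted(huffman_code_lengths(words).items(), key=lambda kv: kv[1]):
    print(f"{word!r}: {bits}-bit codeword")
# The most frequent word ('the') gets the shortest codeword; rare words get longer ones.
```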

If you really want to create widespread awareness of the broad definition, the thing to do would be to use the term in all the ways you currently wouldn't.

E.g. "The murderer realized his phone's GPS history posed a significant infohazard, as it could be used to connect him to the crime."

By the way, I'm kind of weirded out by the idea that we need short terms that mean "information that should be suppressed" (i.e. in terms of Huffman coding, "we should use a short word for it" = "it is very common"), and furthermore that it's rationalists who are trying to come up with such words.  I think it's ultimately because Bostrom picked "information hazards" for the title of his paper, and people made the obvious catchy portmanteau; but I don't want to push the language in that direction.

I largely agree with you. Having a richer vocabulary would be helpful for thinking about problems of this theme with more nuance, if the participants used that rich vocabulary accurately and with goodwill. I also think that defining new words to label these nuanced distinctions can be helpful to motivate more sophisticated thinking. But when we try to reason about concrete problems using this terminology and conceptual scheme, we ought to taboo our words and show why a given piece of information is hazardous to some person or group.

I'm skeptical that the use of these short phrases implies that rationalists have overly normalized speech suppression (if that's what you mean by your Huffman coding argument). Copywriters pre-emptively shorten novel words and phrases to make them catchy, or to give them the appearance of colloquialism and popularity. Since the rationalist community is primarily blog-based, I see these shortenings as part of a general trend toward "readability," not as a symptom of rationalists being over-steeped in "infohazard" concepts.

I'm skeptical that the use of these short phrases implies that rationalists have overly normalized speech suppression (if that's what you mean by your Huffman coding argument).

Edited to hopefully clarify.  I do believe that the reasons for it are innocent, but it still feels uncomfortable, and, to the extent that it's under our control, I would like to reduce it.

That sounded like you said it's bad if serious people accept a definition of "infohazard" that allows it to be legitimately used by corrupt people trying to keep their corruption secret for purely selfish reasons, but then in the next paragraph you proposed a definition that allows it to be used that way?

The really bad thing is if corrupt people get a motte-and-bailey, where they get to use the term "infohazard" unchallenged in describing why their corruption should be kept secret, and casual observers assume it means "a thing everyone agrees should be kept secret".  I'm recommending spreading the meme that the bare word "infohazard" carries a strong connotation of "a thing I want kept secret for nefarious reasons and I want to trick you all into going along with it".  I think, if the meme is widely spread, it should fix the issue.

Your earlier comment sounded to me like you were framing the problem as "the word has these connotations for typical people, and the problem is that serious people have a different definition and aren't willing to call out bad actors who are relying on the connotations to carry their arguments."

That framing naturally suggests a solution of "serious people need to either use different definitions or have different standard for when to call people out."

Now it seems like you're framing the problem as "the word's going to be used by corrupt people, and the problem is that typical people assign connotations to the word that make the corrupt person's argument more persuasive."

I dislike the second framing for a couple reasons:

  1. The first framing suggests we need to change the explicit understanding of serious people; the second that we need to change the implicit understanding of typical people.  Of those two, changing the first thing seems massively more feasible to me.
  2. You are evoking scenarios where the bad guy says a word that is understood to mean "spreading this is bad for me."  I think this is an unrealistic scenario, and you should instead be evoking a scenario where the bad guy switches to a word that is still widely understood to mean "spreading this is bad for the collective", but where serious people no longer think that it technically could mean something else.  (The bad guy will switch to whatever word is currently most favorable for them, not stick to a single word while we change the connotations out from under them.)

I don't think the problem is motte-and-bailey per se; to me, that term implies a trick that relies upon a single audience member subconsciously applying different definitions to different parts of the argument.  But it sounds to me like you're describing a problem where one part of the audience is persuaded, because they apply the narrower definition and aren't knowledgeable enough to object, while another part of the audience is NOT persuaded, but fails to object, because they apply the broader definition.  (No single audience member is applying multiple definitions.)

If the second group actually did object, then hypothetically the speaker could turn this into a motte-and-bailey by defending their arguments under the broader definition.  But I don't think that's much of a practical risk, in this case.  To actually execute that motte-and-bailey, you'd need to at some point say something like "spreading this info is bad for me [and therefore it counts as an infohazard]", and I think that sound bite would lose you so many rhetorical points among people-you-could-potentially-trick-with-it that it wouldn't typically be worth it.

I do hope that "spreading the meme that "infohazard" probably means "info I selfishly want to suppress"" will cause serious people to more readily notice and raise objections when someone is using it to gloss over corruption.  I guess I didn't specify that, but I believe that would be the primary means by which it would help in the short term.  So I think we don't actually disagree here?  (In the longer term, I do suspect that either the meme would spread to ordinary people, or the term "infohazard" would fall into disuse.)

Regarding motte-and-bailey—I disagree about the definition.  Quoting one of Scott's essays on the subject:

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you retreat to an obvious, uncontroversial statement, and say that was what you meant all along, so you’re clearly right and they’re silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

It does have an element of conditionally retreating depending on whether you're challenged.  Also, I think the original context is people making bold claims on the internet, which probably means the audience is many people, and there's often a comments section where objections might be made or not made.  The case of persuading a single person to accept a claim via one definition, then telling them the claim implies something else via a different definition—I would use different words for that, perhaps "equivocating" (Wiki agrees, although the article body references motte-and-bailey as one use case) or "Trojan horse argument".

The ideal user of a motte-and-bailey hopes that, most of the time, they won't be challenged; when they are challenged, it does become less convincing.  The motte needs to be something they can at least "fight to a standstill" defending; if it is, then this discourages challengers.  I expect we agree on this.

I would agree that "It's an infohazard because revealing it hurts me" is generally not a good motte.  However, there's still a selection of other justifications to retreat to, some of which might be hard to disprove objectively, which suffices for the "argue to a standstill" purpose.  If necessary, for someone who cares primarily about their social capital and doesn't absolutely need to win the argument, it might even be a motte to say "I wasn't claiming that the downsides of revealing the truth definitely outweigh the values of truth and justice; I just meant that there are significant downsides".

Let's take the example of "It's an infohazard for churchgoers to learn that Priest Bob had molested some children 20 years ago."  The bailey would be "I'm claiming that many of our churchgoers would be heartbroken, would lose faith in God, etc., and since no one is challenging me on this you should update towards thinking the churchgoers are so fragile this is a worthwhile tradeoff."  One motte would be "Well, it would clearly cause emotional distress to lots of people, and we should think carefully before releasing it.  (Definitely we should hesitate long enough for objectors like you to leave the room, so I can repeat my original argument to a more naive audience.)"

(Incidentally, I would admit that people didn't need the word "infohazard" to make arguments like the above.  But having the word would probably make their job easier, give them a layer of plausible deniability.)

I disagree that the word "infohazard" makes it easier to use arguments like the ones in your final example.  If we had a word that was universally acknowledged to mean "information whose dissemination causes communal harm", they could make precisely the same argument using that word, and I don't see how the argument would be weakened.

And...I guess I'm confused about your strategy of spreading your proposed meme to serious people.  If the goal is to provide the serious people a basis upon which to object, this strikes me as a terrible basis; "your word choice implies you are probably corrupt" is an unpersuasive counter-argument.  If the goal is to make the serious people notice at all that the argument is objectionable, then that seems like a fragile and convoluted way of doing that--making people notice that an argument might be flawed, based on easily-changeable word choice, rather than an actual logical flaw.  Maybe I'm still not understanding the proposed mechanism of action?

It's been almost 6 months and I still mostly hear people using "infohazard" the original way. Not sure what's going on here.

For a more realistic example, consider the DNA sequence for smallpox: I'd definitely rather that nobody knew it, than that people could with some effort find out, even though I don't expect being exposed to that quoted DNA information to harm my own thought processes.

...is it weird that I saved it in a text file? I'm not entirely sure I've got the correct sequence, though. Just, when I learned that it was available, I couldn't not do it.

I hadn't seen this post at all until a couple weeks ago. I'd never heard "exfohazard" or similar used. 

Insisting on using a different word seems unnecessary. I see how it can be confusing. I also ran into people confused by this a few years ago, and proposed "cognitohazard" for the "thing that harms the knower" subgenre. That also has not caught on. XD The point is, I'm pro-disambiguating the terms, since they have different implications. But I still believe what I did then: the original, broader meaning of the word "infohazard" is occasionally used in the wild in e.g. biodefense, whereas the "thing that harms the knower" meaning is IME quite uncommon, so it seems fair to let Bostrom and the people using it in their work keep "infohazard". Maybe the usage in AI is different.

  1. exfohazard
  2. expohazard (based on "exposition")

Both are based on the Latin prefix ex-.

IMHO better than outfohazard.

You mention 'classified' in the Facebook post, and I agree with you that it has shortfalls. I believe 'confidential' is better, and is probably good enough to refer to the concept. Look at the Merriam-Webster definition #4[1] for 'confidential'. I also think simply saying 'dangerous information' works. I can't think of any other kind of dangerous information besides infohazards and the socially dangerous information you describe. Therefore, if information is dangerous but isn't an infohazard, it's dangerous information [other].


  1. https://www.merriam-webster.com/dictionary/confidential ↩︎

I don't have much to say; I just use the word "exfohazard" a lot. I would like a term for things that are not infohazardous in the way the basilisk is alleged to be, but are close enough to cause distress to unprepared readers.

In order to avoid hypocrisy, one would need to avoid learning about exfohazards oneself, as if they were infohazards. Failing to avoid that hypocrisy would run the risk that you had actually been rationalizing your own selfishness: deceiving yourself into believing that you were paternalistically monopolizing the information to keep others safe, when in reality the information was never exfohazardous to begin with.