I think actual infohazardous information is fairly rare. Far more common is a fork: you have some idea or statement, you don't know whether it's true or false (typically leaning false), and you know that either it's false or it's infohazardous. Examples include unvalidated insights about how to build dangerous technologies, and most acausal trade/acausal blackmail scenarios. Phrased slightly differently: "infohazardous if true".

If something is wrong/false, it's at least mildly bad to spread/talk about it. (With some exceptions: wrong ideas can sometimes inspire better ones; maybe you want fake nuclear weapon designs to trip up would-be designers; etc.) And if something is infohazardous, it's bad to spread/talk about it, for an entirely different reason. Taken together, these form a disjunctive argument for not spreading the information.

I think this trips people up when they see how others relate to things that are infohazardous-if-true. When something is infohazardous-if-true (but probably false), people bias towards treating it as actually-infohazardous; after all, if it's false, there's not much upside in spreading bullshit. Other people seeing this get confused, and think it's actually infohazardous, or think it isn't but that the first person thinks it is (and therefore thinks the first person is foolish).

I think this is pretty easily fixed with a slight terminology tweak: simply call things "infohazardous if true" rather than "infohazardous" (adjective form), and call them "fork hazards" rather than "infohazards" (noun form). This clarifies that you only believe the conditional, and not the underlying statement.


“fork hazard” is very easy to confuse with other types of hazards and would occupy a lot of potential-word space for its relative utility. May I suggest something like “conditional infohazard”, elliptical form “condinfohazard”?

I like conditional infohazard. I also think "Infohazard if true (but probably false)" is actually just not that long and it may often be best to just say the whole thing.

How about a “Schrodinger’s Infohazard?”

I agree that it's not that long phonetically, but it's longer in tokens, and I think anything with a word boundary that leads into a phrase may get cut at that boundary in practice and leave us back where we started—put from the other side, the point of having a new wording is to create and encourage the bundling effect. More specifically:

  1. It seems like the most salient reader-perspective difference between “infohazard” and the concept being proposed is “don't make the passive inference that we are in the world where this is an active hazard”, and blocking a passive inference wants to be early in word order (in English).
  2. Many statements can be reasonably discussed as their negations. Further, many infohazardous ideas center around a concept or construction moreso than a statement: an idea that could be applied to a process to make it more dangerous, say, or a previously hidden phenomenon that will destabilize harmfully if people take actions based on knowledge of it. “if true” wants a more precise referent as worded, whereas “conditional” or equivalent is robust to all this by both allowing and forcing reconstruction of the polarity, which I think is usually going to be unambiguous given the specific information. (Though I now notice many cases are separately fixable by replacing “true” with the slightly more general “correct”.)
  3. If you want to be more specific, you can add words to the beginning: “truth-conditional” or “falsity-conditional”. Those seem likely to erode to the nonspecific form before eroding back to “infohazard” bare.

This is independent of whether it's worth smithing the concept boundary more first. It's possible, for instance, that it's better to just treat “infohazard” as referring to the broader set and leave “true infohazard” or “verified infohazard” for the opposite subset, especially since when discussing infohazards specifically, having the easiest means of reference reveal less about the thing itself is good by default. However, that may not be feasible if people are indeed already inferring truth-value from hazard-description—which is a good question for another comment, come to think of it.

Sometimes something can be infohazardous even if it's not completely true. Even though the Northwest Passage didn't really exist, it inspired many European expeditions to find it. There's a lot of hype about AI right now, and I think a cool new capabilities idea (even if it turns out not to work well) can also do harm by inspiring people to try similar things.

But even the failed attempts at discovering the Northwest Passage did lead to better mapping of the area, and other benefits, so it's not clear whether it was net negative for society at all.

It certainly was infohazardous to the people who funded the expeditions and got poor return for their investment. 

I would consider the hazard to be to the agent, not to society, though I can certainly imagine information that hurts an individual but benefits somebody else.

How do you know what their evaluation of their investment was?

Thinking about it more, I suppose I don't know; perhaps they were perfectly happy.

However, in my experience, when you set out to find a thing and fail to find it, that often leads to dissatisfaction. My expectation / rule of thumb for this is "People don't often hunt for things they don't want for some reason".

Oftentimes in human affairs the stated reason for a decision is not the true reason. Especially for speculative investments, there are usually multiple motivating reasons.


It's maybe worth saying explicitly that the only thing a lot of people think they know about Less Wrong is an example of the "infohazardous if true" phenomenon. (Which I'm sure was in Jim's mind when he posted this.)

I think there are subtypes of infohazard, and this has been known for quite a long time.  Bostrom's paper (https://nickbostrom.com/information-hazards.pdf) is only 12 years old, I guess, but that seems like forever.

There are a LOT of infohazards that are not only hazardous if true.  There's a ton of harm in deliberate misinformation, and some pain caused by possibilities that are unpleasant to consider, even if it's acknowledged they may not occur.  Roko's Basilisk (https://www.lesswrong.com/tag/rokos-basilisk) is an example from our own group.

edit: I further think that un-anchored requests on LW for unstated targets to change their word choices are unlikely to have much impact.  It may be that you're putting this here so you can reference it when you call out uses that seem confusing, in which case I look forward to seeing the reaction.

I read this as an experimental proposal for improvement, not an actively confirmed request for change, FWIW.

"Potential recipe for destruction"?

Do I understand correctly from your third paragraph that this is based on existing concrete observations of people getting confused by making an inference from the description of something as an infohazard to a connected truth value not intended by the producer of the description? Would it be reasonable to ask in what contexts you've seen this, how common it seems to be, or what follow-on consequences were observed?

I've seen it happen with Roko's Basilisk (in both directions: falsely inferring that the basilisk works as-described, and falsely inferring that the person is dumb for thinking that it works as-described). I've seen it happen with AGI architecture ideas (falsely inferring that someone is too credulous about AGI architecture ideas, which nearly always turn out to not work).