Related to: Ugh Fields

Ugh Fields are internal negative reactions that occur before the conscious mind has an opportunity to process the information, often resulting in less than optimal decision making.

We have previously discussed Ugh Fields around performing tasks, but as far as I can tell we haven't had any posts on Ugh Fields about ideas.  Ugh Fields toward ideas can be experienced both while weighing the merits of an argument and after one's opinion has shifted.

On Less Wrong, many ideas are accepted as true that carry negative connotations elsewhere.  If someone has an Ugh Field toward an idea for this reason, it can be difficult to shift it to a neutral or positive reaction, which causes problems when trying to think about these ideas rationally.

For example, I grew up in a heavily liberal household, and because of this I had a negative view of libertarianism when I was young.  This caused problems: in my early teenage years I discounted someone's economic views simply because they identified as a libertarian.  But once I actually looked into libertarian policies and their results, my views shifted.  And although my reaction has improved over time, I still flinch when I hear the word "libertarian," despite now considering myself one!

And there are many other topics on Less Wrong that someone could have this reaction to, including AI, FAI, atheism, transhumanism, cryonics, immortality, alternative diets, optimizing utility for charities, and metaphysics.  An Ugh Field toward any of these ideas can hinder one's ability to update properly on new information about it.

Some techniques I have used that have helped include:

  • Mentally correcting myself whenever I notice that I'm flinching away from an Ugh Field.
  • Actively thinking about why my view should change when I'm far (which may be supplemented by reminders from an Anki deck).
  • Going through the arguments that convinced me that I should think differently in the first place.
  • Considering myself one of them, e.g. calling myself a libertarian rather than merely saying I support libertarian views.  Caution should be taken with this to prevent too much in-grouping.
  • Getting into a discussion with someone who holds the view I previously held (essentially a combination of the previous two).
  • Trying to imagine positive outcomes as a result of updating in the right direction, or the negative results of not updating.
  • Reading more about the position to normalize it in my brain.

Things I have not done, but might work:

  • Reciting the Litany of Tarski.
  • Writing down a list of ideas I have Ugh Fields around, and reminding myself that these are problems (could also use Anki).

Does anyone else have suggestions?

5 comments

I think the converse is also true: the LW community has an ugh field around ideas that are generally acceptable and reasoned about elsewhere. For example, the issues related to AI risks. Fear is a powerful ugh field generator.

The original article and usual use of "Ugh Field" (in the link at the top of the post) is summarized as:

Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have, we call it an "Ugh Field". The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.

I agree that LW has Ugh Fields, but I can't see how AI risk is one. There may be fear associated with AI risks here, but that is specifically because it is a major topic of discussion here. Fear may impede clear thinking, sure, but this particular case doesn't seem to fit the notion of an Ugh Field.

I think the confusion stems from the definition in the post being much too loose:

Ugh Fields are internal negative reactions that occur before the conscious mind has an opportunity to process the information, often resulting in less than optimal decision making.

If you want to take a look at possible LW Ugh Fields, I'd take a look at user:Will_Newsome's posts.

Ahh, okay. That's a rather narrow definition. I was thinking more along the lines of thoughts associated with fear. Scare people with a concept they don't understand very well, offer hope, and over time, as they think about one and get scared or think of the other and get comfortable, they develop conditioned associations of the form A = bad, B = good, which cannot be removed with logical arguments any more than you can argue a conditioned blink reflex out of someone.

I leave myself open to reading dissenting writings around such "ugh topics". I mostly motivate myself with a smug sense of superiority and the ability to mock their obviously poor reasoning. I don't seek it out, but I have intelligent friends with the occasional stupid philosophical attachment, so I get linked to fairly high-quality articles from a variety of perspectives.

The end result is that I often reinforce "this idea is clearly stupid, since the people defending it can't mount an intelligent defense". Occasionally, however, I go "oh, I never thought about it that way!" and realize that the majority of my objections just got wiped away, at which point the "ugh field" usually goes away and I can re-evaluate the idea.

So, basically, I let myself openly mock ideas I disagree with, because it tricks me into having an actual discourse, and I trust myself to recognize novel arguments and information well enough to actually integrate them. It helps that I have no objections to mocking an idea I don't understand, so this doesn't impact my social status in cases where conceding the debate would look bad. I've occasionally argued against a point long after accepting it, then just quietly reversed my opinion a few days later to save face. I don't know how helpful this is for other people, but it helps me strike a very nice balance between social and intellectual values.