Related to: Priming and Contamination

Psychologists define "priming" as the ability of a stimulus to activate the brain in such a way as to affect responses to later stimuli. If that doesn't sound sufficiently ominous, feel free to re-word it as "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

For example, let's say you walk into a room and notice a briefcase in the corner. Your brain is now the proud owner of the activated concept "briefcase". It is "primed" to think about briefcases, and by extension about offices, business, competition, and ambition. For the next few minutes, you will shift ever so slightly towards perceiving all social interactions as competitive, and towards behaving competitively yourself. These slight shifts will be large enough to be measured by, for example, how much money you offer during the Ultimatum Game. If that sounds too much like some sort of weird New Age sympathetic magic to believe, all I can say is Kay, Wheeler, Bargh, and Ross, 2004.1

We've been discussing the costs and benefits of Santa Claus recently. Well, here's one benefit: show Dutch children an image of St. Nicholas' hat, and they'll be more likely to share candy with others. Why? The researchers hypothesize that the hat activates the concept of St. Nicholas, and St. Nicholas activates an idealized concept of sharing and giving. The child is now primed to view sharing positively. Of course, the same effect can be used for evil. In the same study, kids shown the Toys 'R' Us logo refused to share their precious candy with anyone.

But this effect is limited to a few psych laboratories, right? It hasn't done anything like, you know, determine the outcome of a bunch of major elections?



I am aware of two good studies on the effect of priming in politics. In the first, subjects were subliminally2 primed with either alphanumeric combinations that recalled the 9/11 WTC attacks (i.e. "911" or "WTC") or with random alphanumeric combinations. Then they were asked to rate the Bush administration's policies. Those who saw the random strings rated Bush at an unenthusiastic 42% (2.1/5). Those who were primed to be thinking about the War on Terror gave him an astounding 75% (3.75/5). A change that dramatic, even though none of them could consciously recall seeing terrorism-related stimuli.

In the second study, scientists analyzed data from the 2000 election in Arizona, and found that polling location had a moderate effect on voting results. That is, people who voted in a school were more likely to support education-friendly policies, people who voted in a church were more likely to support socially conservative policies, et cetera. The effect seems to have shifted results by about three percentage points. Think about all the elections that were won or lost by less than three percent...

Objection: correlation is not causation! Religious people probably live closer to churches, and are more likely to know where their local church is, and so on. So the scientists performed an impressive battery of regression analyses and adjustments on their data. Same response.

Objection: maybe their adjustments weren't good enough! The same scientists then called voters into their laboratory, showed them pictures of buildings, and asked them to cast a mock vote on the education initiatives. Voters who saw pictures of schools were more likely to vote yes on the pro-education initiatives than voters who saw control buildings.

What techniques do these studies suggest for rationalists? I'm tempted to say the optimal technique is to never leave your room, but there are still a few less extreme things you can do. First, avoid exposure to any salient stimuli in the few minutes before making an important decision. Everyone knows about the 9-11 terrorist attacks, but the War on Terror only hijacked the decision-making process when the subjects were exposed to the related stimuli directly before performing the rating task3.

Second, try to make decisions in a neutral environment and then stick to them. The easiest way to avoid having your vote hijacked by the location of your polling place is to decide how to vote while you're at home, and then stick to that decision unless you have some amazing revelation on your way to the voting booth. Instead of never leaving your room, you can make decisions in your room and then carry them out later in the stimulus-laden world.

I can't help but think of the long tradition of master rationalists "blanking their mind" to make an important decision. Jeffreyssai's brain "carefully put in idle" as he descends to a bare white room to stage his crisis of faith. Anasûrimbor Kellhus withdrawing into himself and entering a probability trance before he finds the Shortest Path. Your grandmother telling you to "sleep on it" before you make an important life choice.

Whether or not you try anything as formal as that, waiting a few minutes in a stimulus-free environment before a big decision might be a good idea.

 

Footnotes

1: I bet that sympathetic magic does have strong placebo-type effects for exactly these reasons, though.

2: Priming is one of the phenomena behind all the hype about subliminal advertising and other subliminal effects. The bad news is that it's real: a picture of popcorn flashed subliminally on a movie screen can make you think of popcorn. The good news is that it's not particularly dangerous: your thoughts of popcorn aren't any stronger or any different than they'd be if you just saw a normal picture of popcorn.

3: The obvious objection is that if you're evaluating George Bush, it would be very strange if you didn't think of the 9-11 terror attacks yourself in the course of the evaluation. I haven't seen any research addressing this possibility, but maybe hearing an external reference to it outside the context of your own thought processes is a stronger activation than the one you would get by coming up with the idea yourself.


Whenever a decision seems very close, I flip a coin to decide, and then I check to see if I wish the coin had gone the other way. If so then I go against the coin.

I might sometimes be 'agreeing' with the coin because I'm primed by its outcome, but overall I find it useful for saving time. I rarely regret decisions made this way.
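In code, the heuristic might look something like this (a rough sketch; the `regrets` callback is a hypothetical stand-in for the gut reaction, which is the one part code can't supply):

```python
import random

def coin_decision(option_a, option_b, regrets):
    """Flip a coin, then let your reaction to the result decide.

    `regrets` is a stand-in for the gut check: a function that returns
    True if you find yourself wishing the coin had landed the other way.
    """
    if random.random() < 0.5:
        chosen, other = option_a, option_b
    else:
        chosen, other = option_b, option_a
    # The coin's answer is only provisional; the real signal is whether
    # you wish it had gone the other way.
    return other if regrets(chosen) else chosen

# Someone who secretly prefers "tea" ends up with it either way.
print(coin_decision("tea", "coffee", regrets=lambda choice: choice != "tea"))
```

Note that the coin contributes no information at all here; it only forces the hidden preference out into the open.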

"Whenever you're called on to make up your mind,
and you're hampered by not having any,
the best way to solve the dilemma, you'll find,
is simply by spinning a penny.
No -- not so that chance shall decide the affair
while you're passively standing there moping;
but the moment the penny is up in the air,
you suddenly know what you're hoping."

That's from Piet Hein, a poet (and mathematician) I recommend. But I'm afraid I don't see the link to priming.

That's awesome, thank you!

Priming: In trying to explain myself, I think I've found I'm wrong. If the coin comes up heads and tells me to go through the blue door, I'll say to myself "OK, I'm going through the blue door" in order to gauge my own reaction to that. I think sometimes I'll be biased to overlook problems with the blue door merely because that's the way I'm already mentally heading. On writing this out, I see how it's different from priming.

I've tried this in the past, but now I find I can tell how I'd feel without flipping the coin. If I'm unsure about a decision, flipping the coin no longer helps - it just brings to mind what I don't like about whichever option comes up.

I found that, for me, the same thing works well without a coin. If I am ambivalent about a decision, I just pick one, and if I instantly have a feeling that I should have gone the other way, I switch. The problem with the algorithm is that when the choices are actually close to equivalent, it takes a bit of strength of will not to repeat the process ad nauseam.

This sounds like using the observation of your spontaneous rationalization to figure out what you think is actually true, mastering rationalization to point the way to the right answer. Truly bizarre.

If I wish the coin had gone the other way, where's the rationalization?

If the coin agrees with your hidden opinion, you agree with the coin, because the coin was right. If the coin disagrees, you disagree with the coin, because it came up the wrong side. Far from a crystal-clear analogy, but that's how it feels to me.

Another try: you focus the uncertainty into an abstraction attached to a coin, trying to feel out your decision in the form of your concrete attitude toward this object. Where before you had a thousand currents of value and evidence, you now focus them on a single clear-cut abstraction of value, before you actually make a decision. The focus isn't constrained by additional ritual, like writing down your decision and an accompanying explanation; it's pure abstraction extracted directly from your mind.

Exactly. The point is to elicit the hidden opinion, which is presumed to be "good enough".

I expect the result of this experiment to depend on which side the coin actually came up.

It certainly will sometimes: I suspect that sometimes both options are acceptable to me, and neither will feel like a great loss if not followed; in this case I will likely end up following the coin. Or I 'hiddenly' dread both options, or would feel either as a loss, in which case I will recoil against the choice of the coin (and possibly recoil again against the other option as well, leaving me back where I started).

But I generally use this when I am otherwise indecisive, where further analysis is more trouble than it's worth. So even when the randomness leads me astray, it doesn't cost me much.

My ex used to privately assign two menu items to the numbers one and two, and then ask me aloud to pick one or two, then see how she felt about my response.

I ask for any integer and determine its value modulo 2, or modulo 3 if I have an extra option.
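For what it's worth, that trick is easy to sketch (a toy example; the function name and the menu options are made up):

```python
def pick_by_modulo(n, options):
    """Map an arbitrary integer onto one of the options by remainder.

    Two options -> modulo 2, three options -> modulo 3, and so on.
    """
    return options[n % len(options)]

print(pick_by_modulo(7, ["pasta", "curry"]))          # 7 % 2 == 1 -> "curry"
print(pick_by_modulo(7, ["pasta", "curry", "soup"]))  # 7 % 3 == 1 -> "curry"
```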

Sorry for intruding on a very old post, but checking people's 'random' integers modulo 2 is worse than flipping a coin - when asked for a random number, people tend to choose odd numbers more often than even numbers, and prime numbers more often than non-prime numbers.


The second is also implied by the first, if "primes more often than non-primes" means "out of proportion with how many primes there are" rather than "more than 50% of the time", and I think it would be equally interesting to look at whether [odd] primes are more likely to be chosen than odd non-primes.

I wonder if the real issue is "that the person can/can't recall (part of) the factorization offhand" - that would make sense if people avoid numbers that "feel round" - something with an obscure factorization like 51 [3*17] might be more likely to be chosen than even numbers, multiples of 11, or numbers that appear on a multiplication table (products of numbers 10 or less).

There is an excellent example of "priming" the mind here.

The idea is that specific prior knowledge drastically changes the way we process new information. You listen to a sine-wave modulated recording that is initially unintelligible. You then listen to the original recording. You are now primed. Listen again to the modulated recording and suddenly the previously unintelligible recording is clear as day.

I first listened to all of the samples on December 8th, when the link was posted on kottke.org. If I'm not mistaken that means it's been exactly 100 days since I last heard, or even thought about, these recordings. I listened to them again just a few minutes ago and understood every single one of them perfectly.

I can't decide if this is impressive or terrifying.

Thanks for the link, very interesting indeed.

In my case, though, I could hear a few words the first time I listened to the sine-wave modulated version. It became much clearer after listening to the primer.

Sadly, once you've heard the primer, you can't really go back to hearing it the way you heard it the first time, so you can't compare back to back. It's a bit like "hidden" messages in songs; once you hear them, it's very hard to revert back to hearing the original lyrics.

Hmm.

I found that with all of those I could make out at least some, and in some cases all, of the words in the sine-wave versions without the "priming" original recording. The very first example on that page was pretty much perfectly clear to me on the first listening; others were more work.

Then, for the ones where I hadn't been able to make out all the words in the sine-wave version, I listened to the "primer" and tried again with the sine waves. In each case, I found that I could then recognize the words I'd found unclear before, but I wouldn't say they were "clear as day" or anything like; more like "well, OK, I suppose it could be interpreted that way".

The effect of priming, for me, therefore appears to be very small.

I tried the first pair on my wife, and her response appears to be the canonical one. Obviously I'm just strange. Anyone else have the same experience as me?

Do you own the Aphex Twin album "The Richard D James Album"? Because they all made perfect sense to me at first listen, and immediately reminded me of the brief speech interludes on that album.

No, and I've never knowingly heard anything by Aphex Twin.

Of the five recordings on that page I was able to figure out three without listening to the clear speech.

Frightening and insightful.

The thing about trying to minimise "priming" stimuli is that it seems to let little stimuli have big impacts, and, if you're using a private or familiar collection of stimuli, it could cause unhealthy feedback loops. You're never going to erase all the stimuli, and whichever ones get through could have a big impact on your decision making.

Imagine, for instance, you're trying to make up your mind about Turkey joining the EU. Would you rather be in your room, where, let's say, you've got a big, imposing English flag and a map of Europe, reminding you of Turkey's geographical separation, or would you rather be at a café, where you can smell the delightful scents of the kebab shop across the street and taste your Turkish coffee just as well as see a woman in Niqaab, which may give you a bit of culture shock, or hear a brass band playing patriotic music?

So, controlling and reducing the number of stimuli might actually be worse than throwing yourself out into the world, where you get many, uncontrollable, conflicting stimuli. This suggests one way parochial attitudes develop: people make decisions based on local stimuli, which cause them to protect and reinforce the values and judgement associated with those stimuli, which causes them to introduce more, similar stimuli.

Since this (now ten years old) post was written, psychology underwent a replication crisis, and priming has become something of a poster child for "things that sounded cool but failed to replicate".

Semi-relatedly, we on the Less Wrong team have been playing with a recommendation engine which suggests old posts, and it recommended this to me. Since this post didn't age well, I'm setting the "exclude from recommendations" flag on it.

Why do you expect the decision made in a default state to be more accurate than if it's modified by an unknown factor? If you don't expect to be primed in any particular direction, you may as well be swayed towards the right choice. Priming occurs even when you think of something yourself, so the advice could as well be extended to suggesting not to think about anything, especially about things relevant to the decision at hand, since those things are likely to influence the decision most. When any shadow can influence the outcome, your decision is probably no good to start with, and adding some unexpected noise won't hurt that much.

Why do you expect the decision made in a default state to be more accurate than if it's modified by an unknown factor? If you don't expect to be primed in any particular direction, you may as well be swayed towards the right choice.

Besides Yvain's point about noise, I would point out that we live in a society that systematically seeks to distort our decisions toward non-optimal decisions. I refer to the >$150 billion advertising industry.

I certainly don't believe that a supermarket's many careful product placements and signs and illustrations are priming me to shop more rationally than I would without those stimuli.

I'm interpreting priming as like random noise, since we don't know which direction it will move my estimate. Assuming my rational decision process is better than picking a result at random, adding random noise to it will on average lower its effectiveness.

See also: footnote 3. Since the people not primed by being in a school came to a different result, and since it seems they must have thought about schools at least once when deciding about a school funding initiative, it seems likely that there's something about an external stimulus that's different from an internal one.

I'm interpreting priming as like random noise, since we don't know which direction it will move my estimate. Assuming my rational decision process is better than picking a result at random, adding random noise to it will on average lower its effectiveness.

If the decision's important enough to be worth some trouble, you could think about it in many different contexts, or perhaps with an intentionally varied set of primed stimuli, and seek a sort of sum over your impressions. This seems plausibly more reliable than either "avoid all outside influences" (and hope your uninfluenced decisions are best) or "expose yourself to particular random priming" (and hope that a randomly influenced decision is best or harmless). This is probably a stronger or more complex effect than "priming", but I know I often see usefully different things about an idea when I travel, or when I show it to someone else and suddenly imagine how it might look to them.

The "average your guesses" method, combined with this post on priming, suggests (as AnnaSolomon says) an "average your primed decisions" procedure.

Rather than "avoiding" your influences, seek out priming conditions, and write down your tentative decision in each case. Then average your decisions.

It seems like an absurd strategy to me, to deliberately add random noise to your decision making process, and then try to filter that noise with an average. Maybe if the random priming approximates gathering evidence closely enough to compensate for the incompletely filtered noise, it could work. But if priming just causes you to irrationally add more weight to evidence you already considered, it is not helpful.

On the other hand, if random priming leads you to have an insight that you could analyze, that could help. Even then, you should wonder why your other strategies to achieve insights are Worse Than Random.

This seems plausibly more reliable than either "avoid all outside influences" (and hope your uninfluenced decisions are best) or "expose yourself to particular random priming" (and hope that a randomly influenced decision is best or harmless).

Given that we don't seem to have much idea what input is likely to prime us to think about what (if I see the letters 'ABC', do I think about US broadcasters, childhood education or matrix multiplication?), it seems unlikely that 'Avoid all outside influences' is a plausible strategy. Is there really going to be nothing in my room which affects my judgment on matters of importance? Indeed, even if I lock myself in a dark room, maybe this primes me to think about ghost stories, or astronomy, or photography? I think wherever you are, you're probably subjecting yourself to some priming effects.

Given that it's pretty hard to avoid all stimuli, I think the 'average your guesses' technique might well be necessary - think about your decision in as many contexts as possible and average your conclusions - if not, you'll never know what influences are lurking in your supposedly neutral environment.


Particularly true when our decisions aren't part of our 'cached thoughts'


When any shadow can influence the outcome, your decision is probably no good to start with, and adding some unexpected noise won't hurt that much.

I doubt that.

Can we use this to our advantage? Maybe tattoo "Less Wrong" on one's palm and look at it when in distress?

That's a very interesting idea. Again, it reminds me of certain forms of magic - for example, of magic practitioners who used to surround themselves with imagery of the God of Music before a music recital on the theory it would help them play better. I wish there were studies as to the effectiveness of that, but it's kind of hard to get funding to study magic. I suspect that saying a prayer to God for help resisting temptation may work in the same way - you're activating your concept of God and with it the entire religious system of morality and self-control. In the same way, giving yourself a reminder of rationalism or even reciting one of those rationalist litanies might make the stuff you learn here more salient.

Tell you what - tattoo "Less Wrong" on your palm and post a picture on here, and I'll upvote ten of your comments. Ten karma points! Think about it! That would totally be worth it!

A friend of mine is interested in chaos magic which, from what he has told me, is essentially a collection of psychological tricks to perform on oneself. There even seems to be a tacit awareness that it's not actually magic, though I don't think the awareness ever becomes explicit.

The weirdest thing about it is that there is no single set of symbols or doctrine, chaos magicians create their own or steal bits from other magical or religious traditions.

There even seems to be a tacit awareness that it's not actually magic, though I don't think the awareness ever becomes explicit.

AFAICT most chaos magicians believe in the supernatural, but at least some have been thoroughgoing materialists who believe only in the psychological power of suspension of disbelief.

The few times I experimented with similar techniques (usually expressed as self-help rather than magic), I found it impossible to suspend disbelief, and ended up laughing at myself and giving up. I wonder if other rationalists would have the same experience, and if this would be different between self-selected rationalists and random people put through a course in a rationality dojo. I'd also like to see to whether this is mediated by the hypnotizability trait.

So many studies to do, so little status as a real psychologist who can do studies and stuff. I do want to run a survey of Less Wrong members and gather some demographic/other interesting data once I get working survey software, though.

I did a ritual with them once. She observed that I was not putting the work into running an event because I had not mentally committed to running it, that I was half convinced I would drop out. So she sat me down and said "now's the time to decide, are you doing this or not?" and presented me with... a red and a blue Smartie. I took the red Smartie.

...and ended up dropping out anyway a couple of months later, leaving her running things to her dismay, but coming back and playing a big role in running it later still in any case. So, a very memorable ritual, but not a great success.

Commitment rituals aren't very useful for telling whether you're actually committed. The real test is whether, for all the worst-case scenarios you imagine possible, you feel you can accept and handle that as the outcome.

That is, even if the worst happens, you believe you can feel like you made the right decision. (Not, "well, that shouldn't/probably won't happen", but "if it DID happen, could I deal with it?")

And that's not something that "taking the red pill" is going to make happen, since it most likely induced you to suppress your doubts, rather than face them.

Dude! Using surveys to collect scientific data! That would be totally awesome!

Post a poll asking people to do stuff, and then come back and use their results as the poll result. Like, "squirt water in your ear, and tell us the result", or "ask a set of questions to strangers in the street, 10 times dressed in a suit, 10 times dressed as a clown, 10 times dressed as a scruffy activist".

I found it reasonably easy to suspend my disbelief when I was messing with chaos magick, but I did also find it easy to tell that it didn't work, so...

once I get working survey software, though.

I work for a company that programs and hosts market research surveys - feel free to drop me a line and I'll see if we're interested.

I was thinking more along the lines of something very simple and very free, since I have minimal computer skills and financial resources. I have SPSS for analysis, so all I need is a form that collects responses and sticks them into a spreadsheet. One of my friends helped me do a survey before, and I was planning to pester him until he showed me how to set something up.

If you're a professional, though, I defer to you if you want to do the Unofficial Less Wrong Survey instead. Or if you want to cooperate on it, email me at yvain314@hotmail.com. If not, no worries; I'm sure I can set something up eventually.

The few times I experimented with similar techniques (usually expressed as self-help rather than magic), I found it impossible to suspend disbelief, and ended up laughing at myself and giving up.

The most interesting thing I've read on it says you ought to laugh and make fun of yourself, because chaos magic is ridiculous and can't possibly be real. Then you forget about it, and then it works.

I'm not sure about the last bit, though.

It would not be a stretch to say that the Yudkowsky story alluded to in the main article exhibits some of the same psychological tricks that 'chaos magicians' practice. The pure white rationality dojo, the symbolic plaques, the importance invested in each of the rooms, the sort of cleansing ritual and meditation techniques the main character uses. And most importantly to Chaos Magicians:

"Symbols could be made to stand for anything; a flexibility of visual power that even the Bardic Conspiracy would balk at admitting outright."

The power of symbols is super important.

Even the more supernatural aspects of Chaos Magic could be looked upon as a practice in belief annihilation, much like Eliezer's story. By examining the mechanism of belief through practice and meditation, one starts to exhibit more control over one's beliefs. More control, the more likely one will catch irrational beliefs. When one has complete control over beliefs, then rationality can control the monkey brain. When you laugh at something, like Yvain does below, you're exhibiting some control over what you're taking seriously. That's good, so long as what you're laughing at DESERVES to be laughed at.

btw, laughter is a chaos magic dispelling/banishing technique :D Thought that was interesting.

Suspension of disbelief merely means that you withhold your rejection of input as false. In order to understand something at all, you have to temporarily represent it in your mind as true... which is why there are studies that show you can convince people of things by telling them the idea and then distracting them before they have a chance to analyze or reject it. Disbelief is an active, conscious process; belief is the default.

(Which makes sense evolutionarily -- camouflage and deceptive behaviors are perceptual problems, not logical analysis problems. An ability to logically disbelieve probably had to evolve as a defensive weapon against human liars -- it doesn't make much sense to have it until you also have language and imagination.)

By my meaning of the phrase, they have, your nominally Orthodox friend certainly has, and everyone who watches a film does it all the time.

Tell you what - tattoo "Less Wrong" on your palm and post a picture on here

Is there a "Less Wrong" logo? The "map/territory" imagery in the banner is interesting, but seems too complex as a logo/tattoo.

I find comfort in bringing my smartphone everywhere and knowing that I have internet access, even when I'm almost certainly not going to use it. It can be used as a symbol for any specific website or community, any specific person I could contact, of the internet and humanity itself, and of how sci-fi the world is because such devices exist. Etc.

This is something I intentionally use, wearing it like The One Ring around my neck, hugging it to my chest if I'm feeling uncomfortable, sometimes petting it and/or whispering "my presssiusss", etc. (And yes, that's literal; pop-culture references are something I associate with the internet.)

Can we use this to our advantage? Maybe tattoo "Less Wrong" on one's palm and look at it when in distress?

btw, this is almost the sort of thing a chaos magician would do. They would create a symbol by smashing the letters in Less Wrong together and then tattoo the created symbol on their palm. Or at least keep copies of it around all the time. Perhaps the symbol has more priming power than mere words. Any evidence of this?

Not that I know of.

This is a bit shaky; but I think I recall from reading Peter Carroll and Austin Osman Spare that one's supposed to create a sigil, energize it, then put it somewhere you'll never see it again; not perpetually see it. I would expect a double-blind study to find more priming from explicitly visualized concepts than from obfuscated symbols, but it might be interesting to throw that in as another option alongside various vectors for subliminal messages.


The 'blanking your mind' strategy seems like a winning one, similar to the adage "count slowly to ten" you're supposed to do if you're angry at someone, or taking deep breaths to relax if you're feeling stressed.

What these seem to share is that they focus your attention on something inert, like numbers or breathing, breaking whatever unconscious cycles are reinforcing your anger or stress. It's not obvious to me whether the same effect will occur with a much more subtle process like priming, so I'd be interested if any studies have been performed in this area.

Anyone who is deciding who to vote for on the way to the polling place should not be voting.

Can priming also influence longer-held beliefs? I would suppose that since people usually dislike changing their minds, it becomes extremely important where they first learn about particular facts. For example, it would probably be beneficial if children were first informed about evolution during a school excursion to a hi-tech laboratory, as opposed to during a church sermon (even if the information were the same).

I am not a psychologist, but every time I've come across priming it has been presented as a short-term phenomenon caused by temporary brain activation. I don't think what you're describing could be called priming.

But I have heard of some results along those lines. For example, if you learn something in a classroom, you are more likely to remember it on a test given in a classroom than in (for example) a church, and vice versa (link to study). And this even generalizes to people who learn something in a red room recalling it better in other red places (link to study). That makes effects like the one you describe sound possible, though I don't know of any studies of it directly.

What I have been thinking about was, given that priming influences decisions made not long after the stimulus, whether it can also influence beliefs. When you first encounter a proposition, you usually sort it into the category of believed or disbelieved statements, and that can be viewed as a special type of decision. I admit that whether beliefs formed under the influence of priming wouldn't revert in the long term is a different question.

A related question is: Do people later regret their decisions more often if they made them under priming? I know that people are unaware of being primed, but could somehow realise that their decision was wrong. Is there some study about this aspect?

For example, if you learn something in for example a classroom, you are more likely to remember it on a test given in a classroom than in (for example) a church, and vice versa (link to study). And this even generalizes to people who learn something in a red room recalling it better in other red places (link to study)

Most mentions of that study I've heard in meatspace concluded with “Thus, I'm going to study while drunk, and to take the exam while drunk.”


There's a relatively new branch of psychology called Terror Management Theory that specializes the general phenomenon cited above to instances where one is reminded of one's mortality. I've only read a couple of journal articles in the field, and I'm not a psychologist, but one experimental design in particular struck a chord with me.

I'm no longer certain of the study, but they primed their subjects with a short story about either life insurance policies or (for the control group) the imports and exports of a certain country. Then they had the subjects try to complete a list of partially spelled words specifically chosen so that they had two interpretations -- one morbid, the other benign. The only pair I remember was skull and skill, both derivable from S**LL.

Then, to get the cross-cultural study, they found these word pairs in modern Hebrew! How cool is that! I should dig that study up again. I wonder if they used the truth/death similarity exploited by the story of the Golem.

Sheldon Solomon, one of the big names behind terror management theory, was the one who conducted the WTC study. He also did a related experiment in the same study where he made people think about their own deaths and found they were more likely to vote Bush afterwards. I think there's a description at the same link. Good catch.


That's unfortunate. I was hoping for more than one clique, but it looks like my half-remembered study "Evidence for terror management theory, part II" (annoyingly not free) is by roughly the same group of people as the WTC study you cited.


The message I'm taking away from this is we can solve Friendly AI by having the programmers wear Sinterklaas suits.