An Anchoring Experiment: Results

by prase · 1 min read · 3rd Apr 2011 · 21 comments


Priming · Anchoring
Personal Blog

This post summarises the results of the experiment which tested how anchoring works on the LW audience. Here is the original post, which describes the experiment in more detail. The experiment was designed to decide between two hypotheses about how anchoring works. The first hypothesis is that the subject always starts from the anchor and adjusts in the direction of his/her unbiased estimate, but doesn't go far enough. The alternative hypothesis is that anchoring shifts the centre of the subject's probability distribution towards the anchor, and the whole distribution moves with it.

To illustrate the difference, consider the first experimental question, which asked for the population of the Central African Republic. The correct value (i.e. the 2009 estimate listed on Wikipedia) is 4,422,000. The anchor which I offered here was 20 million. Now, if the first hypothesis is true, people who in their unbiased state of mind would guess less than 20 million slide down from the 20 million value and stop prematurely; their guesses are attracted towards the anchor, but never cross it. The distribution of the biased guesses would be narrower and overall closer to 20 million than the unbiased distribution, but the probability of answering any number lower than 20 million would not be changed by anchoring. On the other hand, if the second hypothesis holds, the biased group should guess more than 20 million more often than the control group.
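The two predictions can be sketched with a toy simulation. Everything below is invented for illustration: the log-normal belief distribution, the 40% adjustment shortfall, and the 40% distributional pull are assumptions, not fitted to any data. Under insufficient adjustment a guess is pulled towards the anchor but never across it, so the fraction of guesses above 20 million stays exactly the same; under a distribution shift, the whole distribution slides towards the anchor and some guesses cross it:

```python
import math
import random

random.seed(0)
ANCHOR = 20_000_000
MU, SIGMA = 15.4, 1.0  # hypothetical unbiased beliefs: log-normal, median ~4.9 million

def unbiased_guess():
    return random.lognormvariate(MU, SIGMA)

def insufficient_adjustment(guess, stop_short=0.4):
    # Hypothesis 1: start at the anchor, adjust toward the unbiased guess,
    # but stop 40% of the way short.  The guess moves toward the anchor
    # but can never end up on the other side of it.
    return ANCHOR + (1 - stop_short) * (guess - ANCHOR)

# Hypothesis 2: the whole belief distribution slides toward the anchor,
# here by moving its median 40% of the (log-scale) way to the anchor.
SHIFT = (ANCHOR / math.exp(MU)) ** 0.4

def distribution_shift(guess):
    return guess * SHIFT

n = 100_000
unbiased = [unbiased_guess() for _ in range(n)]
above = lambda xs: sum(x > ANCHOR for x in xs) / len(xs)

print(f"unbiased:                {above(unbiased):.3f} above anchor")
print(f"insufficient adjustment: {above([insufficient_adjustment(g) for g in unbiased]):.3f}")
print(f"distribution shift:      {above([distribution_shift(g) for g in unbiased]):.3f}")
```

The particular numbers don't matter; the qualitative difference does: the unbiased and insufficient-adjustment fractions come out identical, while the distribution-shift fraction is noticeably higher.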

The actual results are as follows:

Group I (biased; 36 answers collected)

  • more than 20 million: 15 (41.7%)
  • less than 20 million: 21 (58.3%)

Group II (control; 16 answers collected)

  • more than 20 million: 3 (18.7%)
  • less than 20 million: 13 (81.3%)
  • 20 million: 1 (6.3%)

The second question asked for the altitude of the highest point in Sweden (2,140 m / 6,903 ft). The anchor was 3,500 m or 11,500 ft (there is about a 5 m / 18 ft difference between the two values, but I wanted both the metric and the imperial anchor to be round numbers). Here the results are:

Group I (biased; 24 answers collected)

  • more than 3500 m: 9 (37.5%)
  • less than 3500 m: 15 (62.5%)

Group II (control; 30 answers collected)

  • more than 3500 m: 5 (16.7%)
  • less than 3500 m: 13 (83.3%)

The results seem to favour the second hypothesis.
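As a rough sanity check (the choice of test is mine, not part of the original write-up, and the single "exactly 20 million" answer is left out), a one-sided Fisher's exact test on the first question's counts can be computed with only the standard library:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    P(top-left count >= a) under the hypergeometric null."""
    row1, col1, total = a + b, a + c, a + b + c + d
    denom = comb(total, row1)
    k_max = min(row1, col1)
    return sum(comb(col1, k) * comb(total - col1, row1 - k)
               for k in range(a, k_max + 1)) / denom

# Question 1: anchored group 15 above / 21 below; control 3 above / 13 below.
p = fisher_one_sided(15, 21, 3, 13)
print(f"one-sided p = {p:.3f}")
```

With these counts the one-sided p comes out just under 0.1: suggestive of the distribution-shift pattern, though not decisive at this sample size.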

Some more remarks: the participants were expected to switch groups between the two parts, and the numbers should reflect that; however, six and eight answers are missing from the group II summaries. A few people (about 16% and 33%, actually) thus refused to guess a concrete number although they voted in the "greater/lower than the anchor" questions. This may skew the results, although I don't see in which direction.

There were a few weird answers, too. The altitude of Sweden's highest summit was reported to be both 100 m and 5,000 km. Those can be interpreted simply as statistical deviations from common sense* (or typos); however, I started to doubt whether all participants were serious. (Which leads to a moral: if you intend to post a survey and be certain about its accuracy, don't do it on the 1st of April.)

Finally, I would like to thank all the commenters who pointed out several technical problems with the test (such as the answers appearing in the "recent comments" bar).

*) The 100 m guess may even be reasonable: the summit of Yding Skovhøj, the highest point of extraordinarily flat Denmark, lies only 175 metres above sea level.

Comments

While this is interesting, it seems to have enough confounding variables that I'd really like to see it duplicated in a more controlled setting.

Definitely. I support doing this again.

> Definitely. I support doing this again.

I'd prefer doing it with a random sample of people who haven't been told about anchoring before. LW is in general too tainted by people's awareness of their cognitive biases to do any useful tests of what the standard biases are. And who happens to reply to a comments thread is a terrible randomisation procedure.

Hey, maybe at some point it actually would be useful to study the effects of biases on people who already know about biases, compared to the general population.

(Maybe psychologists do this already; I wouldn't know.)

I think the point was to see if LW readers could avoid a bias they knew they were being tested on.

Agree - interesting to see such a strong effect on people who should have been trying their hardest to ignore the anchor (assuming some unrelated aspect of the experimental design isn't skewing the results).

Would also be interesting to have a personal quiz with many questions so that you could test your own anchoring bias. (And see if you could train yourself out of it... at least in circumstances where you know it's an issue)

Disclaimer: I didn't take part in the experiment myself (ugh factor / fear of failing at general knowledge quizzes).

I think people weren't trying their hardest (nor were they instructed to). To try hardest in such a case means to use reasoning similar to this comment, which probably only a minority did.

If you can do it in a more controlled setting, I would be interested in the results. As others have pointed out, the way I did it has many problems: lack of controlled randomisation, a small number of participants, the possibility of seeing the anchor by accident while still answering the group II question, etc. Unfortunately, I have no idea how to organise a controlled experiment and don't particularly want to invest much money in one (I suppose people don't like to spend an hour filling in questionnaires for free).

You could talk to someone at a nearby university and see if they'd be interested in collaborating with you further. That would also probably help with the funding issue.

I don't know of any research that has found a wrong direction effect from anchoring, so this may very well be original research on Less Wrong. Nice job.

I think that it could be consistent with existing models of anchoring. There are two theories for why anchoring effects occur, insufficient adjustment and selective accessibility.

Insufficient adjustment: People get their estimate by anchoring and adjustment, as you described. They start at the anchor, realize it's too high (for example), and adjust downward until they reach a plausible value. But they tend to stop too soon, since there are a range of plausible values and they don't go all the way to the most plausible value.

Selective accessibility: Answering the higher/lower question makes you bring to mind information that is relevant to that question. This is a biased set of information, since you bring to mind (for example) information related to the population being over 20 million when considering that possibility. Then when you go to make an estimate in response to the second question, that biased set of information remains accessible in your mind and contaminates your judgment.

The general consensus among researchers is that both theories are correct - both processes occur. But there is ongoing disagreement among researchers about precisely when each of the two processes takes place. A few years ago the thought was that there are some cases where anchoring & adjustment takes place and others where selective accessibility happens - with questions like the ones you asked, anchoring effects would be due to selective accessibility rather than anchoring & adjustment (as discussed here and here on LW). Since then, a 2010 paper by Simmons, LeBoeuf & Nelson has argued that the typical case is for both selective accessibility and anchoring & adjustment to be operating, so either or both could be involved.

I think that your effect here is consistent with the selective accessibility account. If you're considering whether the population is more or less than 20 million, you might bring to mind reasons why the population could be more than 20 million and reasons why the population could be less than 20 million. That's a biased subset of all the information that you have, even for the initial question, and it will contain more information consistent with the wrong direction than you'd get in a representative sample of your information. That explains why you get closer to a 50/50 split in people's answers compared with what you'd get if you just looked at people's estimates without the anchoring question. Part of the inspiration for the selective accessibility model was previous research on confirmation bias showing that considering a hypothesis makes it seem more likely, and this wrong direction effect seems consistent with that research.

With anchoring & adjustment, I think you could say that this is a case where the anchor value is already within the range of plausible values. It's not clear what the model says about those cases, but perhaps it could account for the wrong direction effect. If you see the anchor value and think that it's a plausible answer, but you're forced to say "higher" or "lower", then maybe the choice seems kind of arbitrary and ends up closer to a coin flip.

Simmons, J.P., LeBoeuf, R.A., & Nelson, L.D. (2010). The effect of accuracy motivation on anchoring and adjustment: Do people adjust from provided anchors? Journal of Personality and Social Psychology, 99 (6), 917-932. pdf

Thanks for the citation. I am not aware of the ongoing research in the field and have only a very rudimentary knowledge of the relevant theories, so every qualified comment is welcome.

For me, my answers would have been incredibly varied simply because I had no idea, whatsoever.

Testing for anchoring bias regarding a quantity you were fairly certain about wouldn't yield significant results. Of course, you do have some idea--if I asked you whether the population of the Central African Republic were more or less than 7 billion people, you could confidently answer "lower."

So, like me, you had a probability distribution regarding the answer, but it was diffuse enough that the "20 million" anchor moved it upward. At least, that seems to be the case.

Congratulations on the original research, Prase!

> So, like me, you had a probability distribution regarding the answer, but it was diffuse enough that the "20 million" anchor moved it upward. At least, that seems to be the case.

Probably. (BTW, I'd kinda prefer if this type of article give people a chance to answer the question for themselves, to see how they did)

The numbers of respondents in the two control groups don't add up. It looks like you had 17 answers for the first question's control group, and 18 for the second.

As someone who participated, I must have misread the directions because I voted higher or lower than anchor on both questions.

Also, a link to the original experiment post might be nice.

> Also, a link to the original experiment post might be nice.

The first word of the second sentence.

Sorry if I'm just being stupid, but why does this go against the hypothesis of subjects "start[ing] from the anchor and continu[ing] in the direction of his/her unbiased estimate, but [not going] far enough"? While the percentages are vastly different, the favoured option is the same in the biased and unbiased groups in both cases.
Also, and sorry if this seems like nitpicking, but in the example we talked about in the Mental Metadata I imagine the subject would have a much stronger feeling about their expected values than in these cases. That aside, interesting experiment.

If you just started from the anchor and moved toward your unbiased estimate, it would be impossible to end up on a different side of the anchor from your unbiased estimate. Thus, the fact that the proportion of people on each side changes between the anchored and non-anchored conditions is evidence that something else is going on.

Yeah, I noticed that soon after posting but figured it would be kind of silly to edit out almost my entire post. And deleting it just seemed like a sort of suspicious behavior (considering that I was wrong).
