Many of you will be familiar with the "Wisdom of the Crowd" - a phenomenon where the average of a large number of people's estimates tends to be very accurate, even when most individual estimates are poor. I've written a short poll to test a small variant of this setup.

Please fill out this short poll.

Specifically, I want to see how the weighted average of the results performs when the question is posed as "is the value in question closer to A or B?" This change is inspired by the typical two-party election, in which people choose between two extreme options even though many voters hold opinions somewhere between the two.

Thank you for helping!

 

I'm a little worried about anchoring in this survey. Suggestions for how to improve it would be appreciated.

18 comments
[anonymous] · 10y · 3

I interpreted “closer to A or to B” literally, i.e. “earlier or later than (A + B)/2”. Under this assumption the question means exactly the same thing if you use, say, (A + 10) and (B - 10) instead, so if other people thought the same way I did, the weighted average is meaningless: it depends on the particular choice of A and B rather than just on their average (unless exactly half choose A and half choose B).
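A small sketch of the point above, with hypothetical numbers: if respondents answer purely by comparing their estimate to the midpoint (A + B)/2, then two endpoint pairs with the same midpoint elicit identical votes, yet the vote-weighted average of the endpoints differs.

```javascript
// Vote-weighted average of the two endpoints.
function weightedAverage(a, b, votesA, votesB) {
  return (votesA * a + votesB * b) / (votesA + votesB);
}

// Hypothetical example: 30 respondents pick A and 10 pick B.
// A = 1900, B = 2000 (midpoint 1950):
const w1 = weightedAverage(1900, 2000, 30, 10); // 1925
// Same midpoint and same votes, but narrower endpoints:
const w2 = weightedAverage(1910, 1990, 30, 10); // 1930
```

Same votes, same midpoint, different weighted averages - which is the commenter's objection.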

This was the intended interpretation, and I considered wording the questions that way. However, the wording I used is closer to the framing I am trying to investigate.

Given that (A+B)/2 is the decision point, maybe tgb shouldn't be worried about anchoring - it's just the nature of the beast. An anchoring effect should be expected.

[anonymous] · 10y · 0

Indeed. The way the question is asked appears to suggest that the correct answer is somewhere near either A or B, so if A and B were 1828 and 2012, or 1917 and 1918, the anchoring effects would be different.

Thanks for pointing this out - this was not the intended reading.

(I took the poll.)

It might also be interesting to try a variant of the "closer to A or B?" question in which you vary A and B. (It could even be done adaptively. Start with A and B far apart. Once you get some consensus, replace the less popular answer with the weighted average. Repeat. Or something like that.)

Good suggestion. Your particular process would have difficulties, though. For example, if the over-estimate answer is very popular (as it is with the current results), the weighted average can end up above the actual value, so replacing the lower answer with the weighted average would leave both answers too high.
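A minimal simulation of the adaptive scheme and its failure mode, with entirely hypothetical numbers and respondent behaviour: each simulated respondent compares a noisy estimate of the true value to the endpoints and votes for the closer one, then the less popular endpoint is replaced by the weighted average.

```javascript
// Each respondent holds a noisy estimate and votes for the closer endpoint.
// Returns the number of votes for endpoint a.
function simulateRound(trueValue, a, b, n, noise, rng) {
  let votesA = 0;
  for (let i = 0; i < n; i++) {
    const estimate = trueValue + (rng() * 2 - 1) * noise;
    if (Math.abs(estimate - a) < Math.abs(estimate - b)) votesA++;
  }
  return votesA;
}

// One adaptive step: replace the less popular endpoint with the
// vote-weighted average of the two endpoints.
function adaptiveStep(a, b, votesA, n) {
  const votesB = n - votesA;
  const weighted = (votesA * a + votesB * b) / n;
  return votesA >= votesB ? [a, weighted] : [weighted, b];
}

// Failure mode: with a = 1900, b = 2000 and 75 of 100 votes for the
// higher endpoint, the weighted average is 1975. If the true value is
// below 1975, replacing the lower endpoint leaves both endpoints too high.
const next = adaptiveStep(1900, 2000, 25, 100); // [1975, 2000]
```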

After reading the comments, I realized that I had misunderstood the even/odd time question. I thought we were supposed to guess it, and that there was some way to check which answer was right. I guess it's still random enough. I tend not to read instructions that thoroughly.

[anonymous] · 10y · 0

That's how I interpreted it too at first, but I decided that was too implausible so I re-read it before answering.

Could this effect be used to find approximate answers to unsolved questions, or does there need to be a known answer?

There are some websites that operate on the assumption that this works on unsolved problems: you can ask questions about unknowns (e.g. "how many copies will my book sell?") and maybe get some useful information. I can't recall their names, though.

I'm a little worried about anchoring in this survey. Suggestions for how to improve it would be appreciated.

Using a randomized A/B redirect script is an option.
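A sketch of what such a script could look like, assuming two copies of the survey exist (the function name and URLs are placeholders, not anything the poster specified):

```javascript
// Hypothetical client-side randomizer: pick one of two survey copies
// uniformly at random. rng is a function returning a number in [0, 1),
// e.g. Math.random.
function pickSurvey(urls, rng) {
  return urls[rng() < 0.5 ? 0 : 1];
}

// On a landing page this could run as:
// window.location.href = pickSurvey(
//   ["https://example.com/survey-a", "https://example.com/survey-b"],
//   Math.random
// );
```

This still needs somewhere to host the landing page, which is the difficulty raised in the reply below.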

Google Docs doesn't support this (or has hidden the feature beyond the powers of my googling). Writing my own solution would be doable, but without a server it would take far too long. Do you know of any free sites that allow such options?

Edit: I'm also uncertain what exactly you are referring to. Is this simply to replace the first "even/odd" question? Or are you suggesting randomizing the order of the questions or something more?

Did the odd version. Interestingly, I was off in the opposite direction from my expectation on the second question.

Just to toot my own horn, I did a similar experiment in meatspace which confirmed the effect.

I also completed it.

I assume the even/odd question is meant to direct people to one of two different surveys. However, it may cause some anchoring by having people look at a number right before answering numerical questions. I don't know how you could easily fix that, though.

Yes, the even/odd question splits people up so that half answer the first question in an open-ended format and the second in the "closer to A or B" format, and vice versa for the other half. I thought about anchoring effects here as well, but I couldn't come up with any other easy way to split people up. Having Google Docs do the random split itself would be ideal, but it doesn't seem to have that capability.

If anyone knows another host for such polls that does allow randomization, please let me know.

I've completed it. Best wishes.