Didn't have the time to read the article itself, but based on the abstract, this certainly sounds relevant for LW:

Recent advances in information technology make it possible for decision makers to track information in real-time and obtain frequent feedback on their decisions. From a normative sense, an increase in the frequency of feedback and the ability to make changes should lead to enhanced performance as decision makers are able to respond more quickly to changes in the environment and see the consequences of their actions. At the same time, there is reason to believe that more frequent feedback can sometimes lead to declines in performance. Across four inventory management experiments, we find that in environments characterized by random noise more frequent feedback on previous decisions leads to declines in performance. Receiving more frequent feedback leads to excessive focus on and more systematic processing of more recent data as well as a failure to adequately compare information across multiple time periods.

Hat tip to the BPS Research Digest.

ETA: Some other relevant studies from the same site; I don't remember which ones have been covered here already:

Threat of terrorism boosts people's self-esteem

The "too much choice" problem isn't as straightforward as you'd think

Forget everything you thought you knew about Phineas Gage, Kitty Genovese, Little Albert, and other classic psychological tales

I read the Phineas Gage article, but it wasn't that useful to me; it doesn't much affect any of the philosophical points you might make about him. It's mostly about whether he recovered well enough to drive a coach in Chile.

As I saw it, the main point wasn't any philosophical teaching about him as such, but rather a "beware of trusting unverified sources too blindly".

(Which, admittedly, sits a bit awkwardly with me posting a link to an article I haven't read...)

I'm voting this down for starting out with "Didn't have the time to read the article itself, but". If you can't take the time to read the article, then forward it to someone who might be interested so they can decide whether there's anything new in it. I don't think we want LW to be a place for exchanging pointers to articles with interesting abstracts.

I disagree. I think there is currently too much pressure to post original or thoroughly commented material. I don't expect something like this to be voted highly or promoted, but I think an abstract and the broad recommendation of a blog like BPS Research is worthwhile.

Fair enough, but the abstract itself already contained information that was new to me. In fact, the abstract alone was useful, as it hadn't occurred to me that feedback might actually be harmful in such an environment. In retrospect it's obvious, but I hadn't happened to think about it.

I often scan through abstracts of scientific articles, absorbing only the information contained in them. Yes, it would be better to read the full articles, but there are too many interesting ones to read them all, and if I ever actually need to check the details, I can return to the article in question.

I agree. Don't make a post consisting solely of a link to something you haven't read.

Even if you've read them all, I don't think the "bag of links" approach fosters discussion. It would be better on Open Thread.

Would it have been better to post them all individually? I considered that, but I wouldn't really have had very much insightful to comment that wouldn't already have been obvious from the articles themselves.

Would it have been better to post them all individually?

I think the correct answer is for you to post them as you did, and then be somewhat downvoted for it. "Vote down" (for a post) doesn't mean "You're a bad person - I punish you!"; it means "I think this is amongst the worst articles on this site". If we have excellent articles, you shouldn't feel too bad about a bunch of links having that distinction.

Let's err on the side of posting too many links to interesting looking peer reviewed research for now; we can start demanding stricter standards if it turns out to be a problem.

I'm undecided on post links/don't post links. I'm firmly decided that you shouldn't post links to things you haven't read.

I haven't read the article, but the more noise there is, the more samples you should collect before giving the feedback much significance.
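To make that concrete, here is a minimal simulation sketch (my own illustration, not anything from the article; the effect size, noise levels, and "reliable" criterion are arbitrary choices): the noisier the draws, the more samples it takes before the running average reliably points the right way.

```python
# Minimal illustration (not from the article): counting how many noisy draws
# it takes before the running average reliably has the same sign as the true
# underlying effect. More noise -> more draws needed before feedback means much.
import random

def draws_until_reliable(true_effect, noise_sd, trials=1000, streak=20):
    """Average number of draws until the running mean has stayed on the
    correct side of zero for `streak` consecutive draws."""
    total = 0
    for _ in range(trials):
        s, n, correct = 0.0, 0, 0
        while correct < streak:
            n += 1
            s += random.gauss(true_effect, noise_sd)
            correct = correct + 1 if s / n > 0 else 0
        total += n
    return total / trials

for sd in (1, 2, 4):
    print(f"noise sd {sd}: ~{draws_until_reliable(0.5, sd):.0f} draws")
```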

A recent Science article showed this graphically. They used a two-deck game: you give subjects two decks of cards, each containing a mix of winning and losing cards, and tell them that one deck has more winning cards than the other. The subjects then get a certain number of card draws and can take each card from either deck.

What's amazing is how poorly people do on this test. You would win by drawing 10 cards from each deck, choosing the better deck, and sticking with it. People don't. (The received wisdom is that they want to draw from each deck in proportion to its probability of yielding a winning card, which is not a good strategy.)

In this paper, they posited that people estimate the probability of a deck being the better deck by exponentially discounting older evidence. (The most recently drawn card influences them most.) They plotted, over time, the probability that this model would give you of one deck being the winning deck. Amazingly, the graph showed that probability hovering around .5 over 90 card draws, even though the difference between the decks was dramatic (something like .6 win vs. .4 win).
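Here is a rough simulation of that idea (my own sketch, not the paper's model or code; the .6 vs. .4 win rates follow the numbers above, while the 0.7 discount factor is a guess): weighting each past draw's evidence by discount**age keeps the estimated probability that deck A is the better one hovering near 0.5, whereas weighting all draws equally converges on the right answer.

```python
# Rough sketch of exponentially discounted evidence in the two-deck game
# (my own simulation, not the paper's code). Deck A really is the better deck.
import math
import random

P_GOOD, P_BAD = 0.6, 0.4  # assumed win rates for the better/worse deck
DISCOUNT = 0.7            # assumed per-draw discount on older evidence

def belief_a_is_better(outcomes, discount=1.0):
    """P(deck A is the better deck), weighting each past draw's
    log-likelihood by discount**age (discount=1.0 means no forgetting)."""
    llr = 0.0  # discounted log-likelihood ratio for "A is the good deck"
    for age, (deck, win) in enumerate(reversed(outcomes)):
        p_if_a_good = P_GOOD if deck == 'A' else P_BAD
        p_if_b_good = P_BAD if deck == 'A' else P_GOOD
        p1 = p_if_a_good if win else 1 - p_if_a_good
        p2 = p_if_b_good if win else 1 - p_if_b_good
        llr += (discount ** age) * math.log(p1 / p2)
    return 1 / (1 + math.exp(-llr))

random.seed(0)
outcomes = []
for t in range(90):
    deck = random.choice('AB')                 # sample both decks
    p_win = P_GOOD if deck == 'A' else P_BAD   # A is in fact the better deck
    outcomes.append((deck, random.random() < p_win))
    if (t + 1) % 15 == 0:
        print(t + 1,
              round(belief_a_is_better(outcomes, DISCOUNT), 2),  # forgetful
              round(belief_a_is_better(outcomes, 1.0), 2))       # full memory
```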

I think the really telling finding is that people who had been recently taught how to solve the problem did not reach for that knowledge when faced with the actual challenge.

In Experiment 1 and later studies, we found no significant differences between participants who had been exposed to the newsvendor problem by taking the core operations class and those who had not.