
Over the course of nearly two decades in the workplace, I’ve seen the inside of dozens of organizations and teams as an employee, consultant, or friendly collaborator. With rare exceptions, it seems, important decisions get made one of two ways. If an organization is particularly hierarchical or founder-driven, the leader makes the decision and then communicates it to whoever needs to know about it. Otherwise, the decision gets made, usually by group consensus, in a meeting.

All too often, those meetings are decision-making disasters.

Or at any rate, that’s what the research says. It might not be apparent right away — one of the sneaky aspects of group deliberation is that it reliably increases confidence in the final decision, whether that decision was a good one or not! And yet most group deliberations don’t do much to combat the many cognitive biases we carry into those groups as individuals. In fact, meetings tend to make many of those biases even worse.

Studies show that the group decision-making process exacerbates several biases that are especially relevant in group settings:

  • planning fallacy (thinking things will get done sooner or cost less than turns out to be the case)
  • framing effects (seeing a situation differently depending on how it’s presented)
  • egocentric bias (assuming that other people are more like you than they are)
  • the representativeness heuristic (relying too much on stereotypes)
  • unrealistic optimism
  • overconfidence
  • sunk cost fallacy (doubling down on a losing strategy)

Groups do perform slightly better than individuals when it comes to the availability heuristic, anchoring, and hindsight bias, but the errors are still there.

All of this is chronicled in an essential (if not exactly riveting) book called Wiser: Moving Beyond Groupthink to Make Groups Smarter by Cass Sunstein and Reid Hastie. The duo set out to have Wiser do for group decision-making what Daniel Kahneman’s Thinking, Fast and Slow did for individual decision-making — lay out, in comprehensive fashion, what cognitive science says about group decision-making biases and how those biases can be overcome. Though Wiser is a small fraction of the length of Kahneman’s magnum opus, it makes a convincing case that decision-making in groups is a distinct enough phenomenon to merit distinct analysis. For the purposes of improving decision-making and leadership in professional settings, moreover, it covers far more of the relevant literature than better-known earlier books.

How Groups Fall Short

According to Sunstein and Hastie, groups fail for four interrelated reasons:

  1. Group dynamics amplify the errors of their members
  2. Group members follow the lead of those who spoke or acted first (“cascade effects”)
  3. Group discussion leads to polarization of viewpoints
  4. Groups privilege public/shared information over private information

As a social species, our brains are hardwired to take other people’s behavior into account when determining our own. We don’t even need to be physically around other people for that to be the case! Sunstein and Hastie, for example, cite studies in which artificially boosting a comment on a website or revealing how many downloads a song got had enormous influence on participants’ subsequent ratings of quality. And when we are near each other, of course, the pull to seek safety in the herd can be even stronger.

The authors describe three types of “cascades” that can cement groupthink in place during meetings: informational, reputational, and availability. Informational cascades happen when successive speakers in a meeting agree with the first speaker, suppressing their doubts on the assumption that the previous speakers know what they are talking about; later speakers then assume that opinion is much more unified than it really is. Reputational cascades happen when speakers who disagree stay quiet or soften their views to avoid being seen as argumentative, difficult, or not team players. Availability cascades take place when some issue or comparable situation has high salience for the group, crowding out alternative scenarios or interpretations.
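The informational-cascade mechanism can be made concrete with a toy model (entirely my own construction, not from Wiser): each speaker has a private signal about whether a proposal is good, but weighs the public statements of earlier speakers as heavily as their own signal.

```python
def information_cascade(signals):
    """Toy model of an informational cascade (illustrative only).

    Each person gets a private signal (True = 'good idea') and speaks in
    turn, counting earlier public statements equally with their own signal.
    """
    public_statements = []
    for own_signal in signals:
        votes_for = sum(public_statements) + own_signal
        votes_against = len(public_statements) + 1 - votes_for
        # A clear majority of evidence wins; a tie goes to the private signal.
        if votes_for > votes_against:
            statement = True
        elif votes_for < votes_against:
            statement = False
        else:
            statement = own_signal
        public_statements.append(statement)
    return public_statements

# Two early 'yes' speakers lock in agreement even though most private
# signals say 'no':
print(information_cascade([True, True, False, False, False]))
# → [True, True, True, True, True]
```

Once the first two speakers agree, every later speaker's single private signal is outweighed by the public record, so the dissenting majority never surfaces — which is exactly the dynamic the authors describe.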

As meetings unfold, the judgments of individual group members shift based on who has spoken and what opinions have been expressed. And by and large, those opinions will move in the direction of what the majority of the group members thought in the first place. People who were originally on the opposite side of the issue will start to have doubts or become persuaded, and people who were on the majority side to begin with will become even more confident that they’re correct. This phenomenon is known as group polarization, and it’s a natural consequence of informational and reputational cascades combined with the fact that groups almost always have some initial leaning in one direction or another. More than 100 studies across twelve different countries have found evidence of this effect in group deliberations, and it applies not only to judgments of facts but also values and risk preferences. Polarization is especially potent in groups of like-minded people, and the increase in confidence that comes from polarization serves to accelerate it. It’s not hard to understand why: it’s a stressful experience to go against majority opinion. People will doubt their own judgments and fear being marginalized, and will agree to just about anything if they would otherwise be the sole dissenter in a group.

Making things even worse, groups are really inefficient at sharing information with each other. In any group, some facts and perspectives will be shared by most or all participants, while others may be known only to a few. Not surprisingly, the widely-shared information is likely to have more influence on the judgments of the group — even if the information that only a few people know is no less accurate or relevant to the situation. Not only is shared knowledge more influential in group deliberations, the research shows, but people who have access to more private information participate less and are taken less seriously when they do. That’s especially the case for group members who are seen as lower-status in those settings, including less educated people and more junior employees.

To summarize, then, groups come into decision-making meetings with preconceptions about what the right answer is. The people who speak first and most often are those whose views are most likely to match the preconceptions of the majority of the members. The people who speak subsequently face pressure to quash their doubts or present them in a more moderate light in order to avoid appearing to be too much of an outlier. After a while, meeting participants sitting on concrete information that might promote an alternative viewpoint are unlikely to speak up at all.

It is possible for the group to move in the direction of better-quality judgments as a result of these factors — it depends on whether the group’s initial leanings were accurate or not. In practice, though, research suggests that this is more the exception than the norm. Broadly speaking, group deliberation performs worse than simply aggregating the individual judgments of group members separately.
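As a rough illustration of that last point (a simulation of my own, with made-up numbers, not data from the book): when each member’s independent estimate is unbiased but noisy, a simple statistical aggregate like the group mean tends to land closer to the truth than a typical individual does, with no deliberation at all.

```python
import random
import statistics

def trial(rng, true_value=100.0, n_members=9, noise_sd=20.0):
    """One hypothetical 'meeting': each member estimates independently."""
    estimates = [rng.gauss(true_value, noise_sd) for _ in range(n_members)]
    aggregate_error = abs(statistics.mean(estimates) - true_value)
    typical_error = statistics.mean(abs(e - true_value) for e in estimates)
    return aggregate_error, typical_error

rng = random.Random(0)
results = [trial(rng) for _ in range(500)]
avg_aggregate = statistics.mean(r[0] for r in results)
avg_typical = statistics.mean(r[1] for r in results)
print(f"mean error of the group average:    {avg_aggregate:.1f}")
print(f"mean error of a typical individual: {avg_typical:.1f}")
```

Because independent errors partly cancel, the averaged estimate is reliably closer to the true value — the baseline that deliberating groups, with their cascades and polarization, often fail to beat.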

Are There Better Ways to Decide Together?

To be honest, these findings represent a damning repudiation of nearly all the most common meeting facilitation practices I’ve seen in organizations. It is way too easy for a group to feel great about a poor decision it’s about to make. And because there’s often no immediate feedback to indicate that the decision was ill-advised, it may be a long time, if ever, before the group members realize their mistake.

So what can we do about it? In general, the research on combating bias is a ways behind the research identifying such biases in the first place. But Sunstein and Hastie run through a number of strategies with varying degrees of evidence behind them to mitigate or avoid the problems endemic to deliberating groups. The strategies are rooted in the premise that it’s essential to preserve cognitive diversity — or different thinking styles and perspectives — in group settings in order to counter the tendency toward conformity. The authors also recommend finding ways to separate the divergent (figuring out what the options are) and convergent (choosing the best among them) components of decision-making into two separate phases, since they involve very different thinking processes and may even benefit from having different people involved.

Several techniques seem especially worthy of further exploration or wider adoption in real-world settings:

  • Having leaders suppress their own opinions at the start of deliberations and explicitly express a desire to hear new information
  • Setting cultural norms that encourage critical thinking and productive disagreement
  • Red-teaming, or negatively focused scenario planning, to counter excessive optimism and support robust risk management
  • Adopting the Delphi method or variants of it, in which group members make estimates or express their opinions individually prior to group discussion, and then return to individual expressions at the end
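As a minimal sketch of that last technique, here is a Delphi-style estimate–discuss–re-estimate loop (the numbers and function names are hypothetical, not from the book):

```python
import statistics

def delphi_round(estimates):
    """Aggregate one round of anonymous individual estimates.

    The median resists being dragged around by a single confident outlier.
    """
    return statistics.median(estimates)

# Round 1: members estimate independently, before any discussion,
# so no cascade or anchoring has occurred yet.
first_round = [4, 6, 5, 20, 7]          # e.g. weeks to ship a project
pre_discussion_view = delphi_round(first_round)   # median = 6

# (Group discussion happens here, with the round-1 spread as its input.)

# Round 2: members revise individually; the final answer is the new
# median, not whatever the loudest voice in the room proposed.
second_round = [5, 7, 6, 12, 8]
final_estimate = delphi_round(second_round)       # median = 7
print(pre_discussion_view, final_estimate)
```

The key design choice is that both aggregation steps are individual and anonymous; discussion informs the revision but never directly produces the group’s answer.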

As suboptimal as the typical group meeting may be, a big problem leaders face is that making decisions any other way feels really uncomfortable for a lot of people. It’s not just about aversion to conflict, although that’s certainly a factor. It’s also that most interventions to improve the decision-making process have the side effect of slowing that process down. That’s not always such a terrible thing in the abstract, but in many organizations there is a culture of urgency to resolve uncertainties about the path forward as soon as possible once the decision rises to the top of the agenda. To improve results by improving the process, managers may first have to instill a discipline of strategic patience among their teams when it comes to identifying and acting on the most important decisions to be made.

Comments

I'm gonna pull a Hanson here. What makes you think group meetings are about decision making?


I think the primary goal of many group meetings is not to find a solution to a difficult problem, but to force everybody in the meeting to publicly commit to the action that is decided on. This both cuts off the opportunity for future complaining and disobedience ('You should have brought that up in the meeting!') and spreads the blame if the idea doesn't work ('Well, we voted on it/discussed it'). Getting to the most effective solutions to your problems is secondary to achieving cooperation within the office.

Most group meetings are power games. Their main purpose is to, forcibly or not, create long-term cooperation by the people in the meeting. This is why they are often 'dull', or 'long', or 'ineffective' - the very cost you incur by attending is a signal of your loyalty and commitment. Trying to change this would make meetings less effective, not more effective.

How about another angle.

Most meetings are not just power games. They are pure status games. Only in such group meetings can you show off. Power plays are one way to show off.

You will speak quickly and confidently, while avoiding making any commitment to action. If you attend someone else's meeting, you quickly interrupt and share your arguments in order to look confident and competent.

The low-status meeting participants are mainly there to watch. They will try to quickly align with the highest-status viewpoints to avoid losing more status, thereby causing cascades. As a high-status person you can deflect actions and delegate them to a low-status participant, thereby further boosting your status.

Being seen as the one who made the decision is nice. Deliberately delaying a decision by arguing for more data is also fine. Visibly polarizing an audience to your viewpoint is an amazing status spectacle!

Most meetings are status games. They are boring for the low status participants who have little chance to gain status. But these meetings are what keeps the high status participants going. And it's an opportunity for careerists to grow in status. All decision making and cooperation is irrelevant or a side-effect.

I certainly expect status games, above and beyond power games. Actually saying 'power games' was the wrong choice of words in my comment. Thank you for pointing this out!

That being said, I don't think the situation you describe is fully accurate. You describe group meetings as an arena for status (in the office), whereas I think instead they are primarily a tool for forcing cooperation. The social aspect still dominates the decision making aspect*, but the meeting is positive sum in that it can unify a group into acting towards a certain solution, even if that is not the best solution available.


*I think this is the main reason so many people are confused by the alleged inefficiency of meetings. If you have a difficult problem and no good candidate solutions it is in my experience basically never optimal to ask a group of people at once and hope they collectively solve it. Recognizing that this is at best a side-effect of group meetings cleared up a lot of confusion for me.

As another commenter noted, there exists an alternative strategy: organize a lot of one-on-one meetings to build consensus, and then use a single group meeting to demonstrate that consensus and polarize the remaining minority. This may be a more efficient way to enforce cooperation.

Anyway, I wonder if there is a good method to find out the dominant forces at play here.

I don't dispute that the phenomenon you're describing is real, but purely as a data point I'd offer that in the majority of my recent experiences working with organizations as a consultant, managers have not explicitly sought to use meetings this way, and in a few cases they have proactively pushed for input from others. It's certainly possible that the sample of organizations I'm working with is biased both because a) they are mostly nonprofits and foundations, and b) if they are working with me it's a signal that they're unusually attentive to their decision-making process. But I don't want people reading this thread to be under the impression that all managers are this cynical.

But I don't want people reading this thread to be under the impression that all managers are this cynical.

I think it's a mistake to see this as simply being about cynicism. A CEO might justly believe that infighting within his company is a bigger problem than decision quality and focus on using meetings as a way to get people to cooperate better with each other.

The polarization effect isn't all bad. What an organization needs from a decision-making process isn't just to find out what the right decision is, because typically the point of making the decision is that the organization then needs to do something, and it will do it more effectively if everyone involved agrees with the decision. So a decision-making procedure that makes wrong decisions more often, but gets everyone involved onside, might actually be more effective in advancing the organization's goals. Sometimes. Maybe. With a lot of luck.

When I was a manager needing to build consensus -- especially with other managers outside my department -- I found it much more useful to get one-on-one meetings to feel out people's needs and negotiate buy-in well before any larger meetings. Trying to get consensus in a big meeting was a big waste of time, except maybe sometimes within my own department. The big meeting is really just an opportunity to show the higher-ups that all the other departments are already on-board with my plan. ;-)

This is an approach I recognize. It works well, except if many one-on-ones are happening in parallel on the same topic. Then you are either in a consensus building race with adversaries and/or constantly re-aligning with allies.

Yeah, this is actually one of the key takeaways of the ARPA/PARC paper: leadership's role isn't so much to control or to make very many decisions; their job is to keep everyone lined up with a shared vision so that their actions and decisions fit together. Alignment is the thing that makes organizations run well, and it's very important.

That's a great point, and I don't think the takeaway here is that meetings have no purpose. Instead it's that there are better ways to make decisions in a meeting or meeting-like context than most organizations use. People could adopt some of the techniques mentioned by the book authors to change the meeting structure, and still get the benefit of buy-in from having a meeting at all.

I had the recollection that Mercier & Sperber would have cited evidence for humans being good at making decisions in group, and looked it up. Apparently their interpretation is that group meetings are bad in those cases where everyone feels like they need to express an opinion, even when there's no need for one. Another difference is that they discuss decision-making in situations in which an objective correct answer exists (so that everyone can verify it once it has been proposed), which is probably not the case for most business decisions.

If people are skilled at both producing and evaluating arguments, and if these skills are displayed most easily in argumentative settings, then debates should be especially conducive to good reasoning performance. Many types of tasks have been studied in group settings, with very mixed results (for recent reviews, see Kerr & Tindale 2004; Kerr et al. 1996). The most relevant findings here are those pertaining to logical or, more generally, intellective tasks “for which there exists a demonstrably correct answer within a verbal or mathematical conceptual system” (Laughlin & Ellis 1986, p. 177). In experiments involving this kind of task, participants in the experimental condition typically begin by solving problems individually (pretest), then solve the same problems in groups of four or five members (test), and then solve them individually again (posttest), to ensure that any improvement does not come simply from following other group members. Their performance is compared with those of a control group of participants who take the same tests but always individually. Intellective tasks allow for a direct comparison with results from the individual reasoning literature, and the results are unambiguous. The dominant scheme (Davis 1973) is truth wins, meaning that, as soon as one participant has understood the problem, she will be able to convince the whole group that her solution is correct (Bonner et al. 2002; Laughlin & Ellis 1986; Stasson et al. 1991). This can lead to big improvements in performance. Some experiments using the Wason selection task dramatically illustrate this phenomenon (Moshman & Geil 1998; see also Augustinova 2008; Maciejovsky & Budescu 2007). The Wason selection task is the most widely used task in reasoning, and the performance of participants is generally very poor, hovering around 10% of correct answers (Evans 1989; Evans et al. 1993; Johnson-Laird & Wason 1970). 
However, when participants had to solve the task in groups, they reached the level of 80% of correct answers.

Several challenges can be leveled against this interpretation of the data. It could be suggested that the person who has the correct solution simply points it out to the others, who immediately accept it without argument, perhaps because they have recognized this person as the “smartest” (Oaksford et al. 1999). The transcripts of the experiments show that this is not the case: Most participants are willing to change their mind only once they have been thoroughly convinced that their initial answer was wrong (e.g., see Moshman & Geil 1998; Trognon 1993). More generally, many experiments have shown that debates are essential to any improvement of performance in group settings (for a review and some new data, see Schulz-Hardt et al. 2006; for similar evidence in the development and education literature, see Mercier, in press b). Moreover, in these contexts, participants decide that someone is smart based on the strength and relevance of her arguments and not the other way around (Littlepage & Mueller 1997). Indeed, it would be very hard to tell who is “smart” in such groups – even if general intelligence were easily perceptible, it correlates only .33 with success in the Wason selection task (Stanovich & West 1998). Finally, in many cases, no single participant had the correct answer to begin with. Several participants may be partly wrong and partly right, but the group will collectively be able to retain only the correct parts and thus converge on the right answer. This leads to the assembly bonus effect, in which the performance of the group is better than that of its best member (Blinder & Morgan 2000; Laughlin et al. 2002; 2003; 2006; Lombardelli et al. 2005; Michaelsen et al. 1989; Sniezek & Henry 1989; Stasson et al. 1991; Tindale & Sheffey 2002). 
Once again there is a striking convergence here, with the developmental literature showing how groups – even when no member had the correct answer initially – can facilitate learning and comprehension of a wide variety of problems (Mercier, in press b).

According to another counterargument, people are simply more motivated, generally, when they are in groups (Oaksford et al. 1999). This is not so. On the contrary, “the ubiquitous finding across many decades of research (e.g., see Hill 1982; Steiner 1972) is that groups usually fall short of reasonable potential productivity baselines” (Kerr & Tindale 2004, p. 625). Moreover, other types of motivation have no such beneficial effect on reasoning. By and large, monetary incentives, even substantial ones, fail to improve performance in reasoning and decision-making tasks (Ariely et al., 2009; Bonner & Sprinkle 2002; Bonner et al. 2000; Camerer & Hogarth 1999; and, in the specific case of the Wason selection task, see Johnson-Laird & Byrne 2002; Jones & Sugden, 2001). Thus, not any incentive will do: Group settings have a motivational power to which reasoning responds specifically.

The argumentative theory also helps predict what will happen in nonoptimal group settings. If all group members share an opinion, a debate should not arise spontaneously. However, in many experimental and institutional settings (juries, committees), people are forced to discuss, even if they already agree. When all group members agree on a certain view, each of them can find arguments in its favor. These arguments will not be critically examined, let alone refuted, thus providing other group members with additional reasons to hold that view. The result should be a strengthening of the opinions held by the group (for a review, see Sunstein 2002; for a recent illustration, see Hinsz et al. 2008). Contra Sunstein’s law of group polarization, it is important to bear in mind that this result is specific to artificial contexts in which people debate even though they tend to agree in the first place. When group members disagree, discussions often lead to depolarization (Kogan & Wallach 1966; Vinokur & Burnstein 1978). In both cases, the behavior of the group can be predicted on the basis of the direction and strength of the arguments accessible to group members, as demonstrated by research carried out in the framework of the Persuasive Argument Theory (Vinokur 1971), which ties up with the prediction of the present framework (Ebbesen & Bowers 1974; Isenberg 1986; Kaplan & Miller 1977; Madsen 1978).

Hah, the polarization effect explains why I always go into important meetings with sufficient number of allies. But unfortunately that's a way to manipulate the decision making, not to actually make better decisions.

I am 100% in agreement with TheMajor's and Mathisco's comments on power, status, and enforcing cooperation.

I just wanted to comment on these lines:

  • Having leaders suppress their own opinions at the start of deliberations and explicitly express a desire to hear new information
  • Setting cultural norms that encourage critical thinking and productive disagreement

Implementing this requires a greater degree of trust in the honesty and intentions of management, fellow employees, and future outcomes of things like promotion and hiring/firing decisions than I have ever experienced anywhere I've worked, or anywhere most people I know have worked. Even if your boss really does believe this would be a better way to make decisions, doing something different opens up everyone involved to being first in line for scapegoating when something goes wrong, and last in line for promotion if they're seen as a threat to the jobs of the higher-ups. In most meetings I've been in, everyone with any savvy at all knows this, and will only express their own opinions if they're already very secure in their status within a meeting (or if they're relative outsiders or newcomers able to frame opinion or criticism as a question).