Economics has a rich language for talking about market failures. Externalities, asymmetric information, public goods, principal-agent problems: all of these can leave room for improvement over what the market delivers on its own. In practice these situations haven't turned out to be fully tractable, but they're broadly recognized for what they are. We can at least talk theoretically about what's wrong and how to fix it in a market-failure framework, and sometimes we can use the question "how does this deviate from the idealized mechanism (which we believe brings about the best outcomes)?" to guide effective interventions. But not all failures are market failures—not even all failures concerning the allocation of goods and services, broadly construed.

Within organizations, we may not expect or want allocation to be governed by price-market mechanisms. For example, firms should do things in-house when external transaction costs are high relative to the costs of running an internal hierarchy. "Organizational failure" can then be a framework for thinking about when that goes wrong because of poor assumptions about the "ideality" of the organization.

And when a system necessarily involves non-hierarchical social (non-market) relationships, especially if there's a continuous search for information and a high need for trust, you want to think about what networks should be doing but aren't.

From The Anatomy of Network Failure (Schrank & Whitford 2011):

When we refer to network failures, we mean failure in a sense that self-consciously parallels what is meant in the literatures on market and organizational failure. In order to define the term, therefore, we begin with, but then tweak, a definition and an approach to definition that has its roots in a seminal 1958 paper (“The Anatomy of Market Failure”) by Francis Bator. “What is it we mean by ‘market failure’?” Bator (1958:351) asked. He answered: “Typically, at least in allocation theory, we mean the failure of a more or less idealized set of price-market institutions to sustain ‘desirable’ activities or to estop ‘undesirable’ activities.” His goal was to highlight that a market fails relative to some purpose—often but not necessarily the maximization of efficiency or welfare—and, simultaneously, to direct attention toward variation in the mechanisms driving those outcomes.

The authors continue:

By analogy, what do we mean by network failure? Whereas prices constitute the principal “means of communication” in market relationships, social relations serve a similar function in networks (Powell 1990). By network failure, we therefore mean the failure of a more or less idealized set of relational-network institutions to sustain “desirable” activities or to impede “undesirable” activities. We define the term in this way in order to mimic Bator by highlighting the normative considerations that make the issue of network failure important while simultaneously directing inquiry toward the distinct mechanisms by which network governance is—or is not—sustained.

When do you want to think in terms of market failures? When the conditions for market governance are closer to ideal:

The relative desirability of particular governance modes in particular organizational fields is obviously subject to debate. But the governance literature treats efficiency and innovation as primary criteria of success. We therefore sidestep that debate and accept the transactional conditions that make particular governance mechanisms potentially efficient or effective as the scope conditions for theories of governance failure. For example, Williamson suggests that market governance is particularly functional for the production and distribution of goods that are highly standardized, in which case the number of potential transactors is high, or confront stable demand patterns, where uncertainty is low; an absence of market governance in the presence of those transactional conditions therefore represents a failure, since market governance would have been most efficient.

In terms of organizations?

Similarly, firms are advised to abandon markets for in-house production when their demand for an input is high, the number of available suppliers is low, and the alternative is exposure to “hold up” by opportunistic suppliers who hope to take advantage of their positional power to renegotiate the terms of exchange ex post facto (for reasons we delineate in greater detail below). In these circumstances, a failure to pursue in-house production would in all likelihood constitute an organizational failure (Williamson 1975).

And networks:

By way of contrast, network governance is held functional in organizational fields characterized by a combination of unstable demand and either rapidly changing knowledge or complex interdependencies between component technologies. These characteristics are common to craft-based industries like clothing and construction that serve unstable and highly differentiated demand segments, and that therefore place a premium on flexibility and the rapid reconfiguration of resources; knowledge-intensive industries like biotechnology that confront rapid and unexpected shifts in competencies as well as market conditions; and autos and aerospace, whose final products are complicated and highly integral (Brusoni and Prencipe 2001; Smith-Doerr and Powell 2005). In short, we neither need nor want networked organizations to make our office paper or telephones. We may, however, want networked actors to develop our environmentally friendly pulping mills (Kivimaa and Mickwitz 2004) and our next-generation smartphones (Sabel and Saxenian 2008). Only when network governance is simultaneously desirable in light of transactional conditions and absent (or underperforming) should we think of a network as “failing.”
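To make those scope conditions a bit more concrete, here's a toy sketch of my own (not anything from the paper): a function that maps the transactional conditions above onto the governance mode the literature deems functional, and flags a mismatch with the mode actually in use. The condition names and the whole decision rule are illustrative simplifications.

```python
# Toy sketch (mine, not Schrank & Whitford's): map transactional conditions onto
# the governance mode the literature deems functional, and call a mismatch with
# the mode actually in use a failure of the ideal mode.

def functional_mode(standardized, demand_stable, knowledge_shifting,
                    interdependent, holdup_risk):
    """Return the governance mode the scope conditions point toward."""
    # Network: unstable demand plus fast-moving knowledge or complex interdependencies.
    if not demand_stable and (knowledge_shifting or interdependent):
        return "network"
    # Hierarchy: hold-up risk from few, opportunistic suppliers favors in-house production.
    if holdup_risk:
        return "hierarchy"
    # Market: standardized goods and stable demand.
    if standardized and demand_stable:
        return "market"
    return "indeterminate"

def diagnose(actual_mode, **conditions):
    ideal = functional_mode(**conditions)
    if ideal in (actual_mode, "indeterminate"):
        return "no clear failure"
    return f"{ideal} failure: {ideal} governance fits these conditions, but {actual_mode} is in use"

# Office paper: standardized, stable demand, no hold-up risk -> markets are fine.
print(diagnose("market", standardized=True, demand_stable=True,
               knowledge_shifting=False, interdependent=False, holdup_risk=False))

# A biotech-like field run entirely by rigid hierarchies: the mismatch is a network failure.
print(diagnose("hierarchy", standardized=False, demand_stable=False,
               knowledge_shifting=True, interdependent=True, holdup_risk=False))
```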

Let's try an example: imagine we're researchers in different branches of a fast-moving field. (For the sake of simplicity, assume we have the same common good as our goal.) Some possible problems:

  • You have information that would be valuable to me, but we've never heard of each other.
  • We do know each other, but you don't know that what you have is valuable to me and I don't know that it exists.
  • You've published information that is valuable to me if true, but I don't know that you're reliable.
  • You've published something that's basically impossible to replicate without key elements being transmitted in person by someone with hands-on experience.
  • We'd make good collaborators, but there's no way to tell—we don’t have a way to directly qualify ourselves to one another.

All the information and means and will is out there—someone with a god's-eye view or a mind-reading librarian could make things better for everyone by making the right introductions or linking the right papers. An ideal network in this case is, roughly, the one that solves all these problems as efficiently as possible. Everyone sees all the information they should, to the depth that's worth it; they take it just as seriously as they should; they know the people they should know; they trust the people they should trust; all at as small a cost as possible to the people sharing and qualifying their claims. You can't distribute information or manage trust any better without the added overall burden outweighing the gain. But the obstacles are very fiddly.
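As a toy illustration of that notion of an ideal network (my sketch, not anything from the paper), suppose each piece of information has some value to particular recipients and each transfer carries an overhead cost. The idealized network realizes every transfer whose value exceeds its cost, and an actual network can be scored by the gap. All names and numbers below are made up, and this only captures the missing-introduction failures, not the trust or tacit-knowledge ones.

```python
# Toy model (illustrative only): score an actual researcher network against an
# idealized one in which every transfer of information worth its cost happens.

# Who holds what, and what each item would be worth to whom (hypothetical numbers).
holdings = {"alice": ["protocol"], "bob": ["dataset"], "carol": []}
value_to = {("protocol", "bob"): 5.0, ("dataset", "carol"): 3.0, ("protocol", "carol"): 0.5}
transfer_cost = 1.0  # overhead of sharing and qualifying a claim

# The actual network: who knows (and trusts) whom. Carol is isolated.
edges = {("alice", "bob")}

def realized_value(edges):
    """Net value of the worthwhile transfers the actual network supports."""
    total = 0.0
    for (item, recipient), value in value_to.items():
        for holder, items in holdings.items():
            connected = (holder, recipient) in edges or (recipient, holder) in edges
            if item in items and connected and value > transfer_cost:
                total += value - transfer_cost
    return total

def ideal_value():
    """Net value if every transfer worth more than its cost happened."""
    return sum(v - transfer_cost for v in value_to.values() if v > transfer_cost)

print("network failure gap:", ideal_value() - realized_value(edges))
# -> 2.0 here: the dataset never reaches carol, because nobody makes the introduction.
```

A fuller version would weight transfers by trust and attention, which is where most of the fiddliness in the bullet points above actually lives.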

Existing social institutions get us part of the way: academic journals, peer review, and a broad system of "prestige" let us share certain kinds of information with a certain amount of confidence that it's correct and an uncertain signal of attention-worthiness. But since our field is moving so fast, we have a lot of tacit knowledge and unknown unknowns floating around. (Fast-moving isn't really necessary for this.) Very little has been accessibly codified, so our work isn't much good to outsiders unless they have inroads to our network. And it's hard to establish trust, especially about less formalized stuff, in such a mess.

As with a market failure, a network failure can be a good target for centralized intervention. Especially when it's too expensive to meddle with the market directly, you might consider making the network more efficient instead. Some science funders have increasingly come to see themselves as doing exactly this as they find themselves without the money to solve market failures by just buying more research. (I'm getting this in large part from accounts of the microelectronics industry and of nanotechnology research, including among others The Long Arm of Moore's Law by Cyrus Mody; these accounts mesh well with my experience on the research edge of those fields.) Attempting to address network failures isn't always effective, but it tends to align better with a sociologically realistic model of how researchers work than the market-intervention perspective does.

Some interventions act directly on the incentives that keep individuals from communicating ideally, but sometimes it's easier to act on network structure and function itself. So, for example, new conferences and professional organizations, spurred by those with a broader perspective, have been effective in getting the right people talking to one another. "Para-scientific media" (short of journals but beyond pop science, more like "trade magazines", e.g. Physics Today) let people know broadly what others are doing even if they wouldn't normally read one another's papers. Gordon Research Conferences are "off the record" to encourage more open discussion, including sharing of unpublished work. Individual program officers can also have the perspective and connections to be influential here. Flow of people between academia and industry is an important lever. Various open-science and alt-metrics initiatives can also be viewed in this light, though they perhaps act more directly on incentives and have more of a market flavor.

(In a sense, the framework conflates problems related to information distribution with problems related to social relationships by treating information as social. This is intentional, though more applicable in some places than in others. When we share information, it's tagged, implicitly or otherwise, with things like how reliable [the sharer thinks] it is, how much attention it should be given, and what one is meant to do with it. These are social qualities, and a network that fails to measure these things out appropriately is failing as a network—at least as badly as one that just doesn't spread enough information around at all.)
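A minimal sketch of what "treating information as social" could look like as a data structure (entirely my own illustration, nothing the paper proposes): the shared claim travels with its social tags, and a failing network is one that garbles those tags even when the content itself gets around.

```python
from dataclasses import dataclass

@dataclass
class SharedClaim:
    """A piece of shared information together with its (often implicit) social tags."""
    content: str
    source: str
    reliability: float           # how reliable the sharer is taken to be, 0..1
    attention_worthiness: float  # how much attention the recipient should give it, 0..1
    intended_use: str            # what one is meant to do with it: "replicate", "cite", ...

claim = SharedClaim(
    content="the new pulping catalyst works at room temperature",
    source="alice",
    reliability=0.7,
    attention_worthiness=0.9,
    intended_use="replicate",
)
```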

Is this just a framework for analyzing failures after the fact, or can it be used to generate new ideas or interventions? I guess it depends on what you're trying to apply it to. The more heavily your system relies on distributing information and establishing trust in a way that can't be gotten from prices/markets or hierarchy/authority, the more fruitful this perspective should be. There's already some network, formal or otherwise, governing your system, but it's not ideal; what deviations from [or hidden assumptions about] the ideality of that network are bottlenecking its efficiency?

If there's interest, I have a couple more concrete analyses in mind, but my motivation to write this has stalled, and it might be better to get some feedback now anyway. (Or to hear examples of your own, or examples where this is all useless.)

Content note: This is a collection/expansion of stuff I've previously posted about elsewhere. I've gathered it here because it's semi-related to Eliezer's recent posts. It's not meant to be a response to the "inadequacy" toolbox or a claim to ownership of any particular idea, but only one more perspective people may find useful as they're thinking about these things.