Megaproject management is a new-ish subfield of project management. Originally considered the special case of project management where the budgets were enormous (billions of dollars), it is developing into a separate specialization because of the high complexity and tradition of failure among such projects. The driving force behind treating it as a separate field appears to be Bent Flyvbjerg, known around here as the first person to develop an applied procedure for Reference Class Forecasting. That procedure was motivated by megaprojects.
I will summarize the paper "What you should know about megaprojects, and why: an overview" from 2014. For casual reading, there is a New Yorker article about it here.
Megaprojects got their name from the association of mega with big, so think mega-city rather than megajoule. It did match the unit prefix in the beginning, however: in the early 20th century such projects, mostly dams, bridges, or very large buildings, cost in the millions.
The next shift upward took place with the Manhattan Project and then the Apollo program, which are also frequently drawn on as positive examples. The term 'megaproject' picked up steam in the 1970s, at the same time project costs crossed over into the billions.
Currently project costs of $50-100 billion are common, with even larger projects being less common but not rare. If you were to view certain things which need dedicated management as a project, like the stimulus packages from 2008 or US defense procurement, then we have crossed over into the trillions and are entering a 'tera era' of megaprojects.
Ignoring these special cases, but counting infrastructure and industries where billion dollar projects are common, megaprojects account for ~8% of global GDP.
There are four reasons which drive the popularity of megaprojects, which Flyvbjerg calls the "four sublimes." Each is roughly a group bias for one type of stakeholder:
1. The technological sublime: the excitement engineers and technologists get from building large and innovative projects.
2. The political sublime: the visibility politicians get from building monuments to themselves and their causes.
3. The economic sublime: the money made by business people and trade unions from large budgets.
4. The aesthetic sublime: the pleasure designers and design lovers get from building something very large that is also iconic.
Predictably, as with any set of biases, there are side effects.
The following characteristics of megaprojects are typically overlooked or glossed over when the four sublimes are at play and the megaproject format is chosen for delivery of large-scale ventures:
1. Megaprojects are inherently risky due to long planning horizons and complex interfaces (Flyvbjerg, 2006).
2. Often projects are led by planners and managers without deep domain experience who keep changing throughout the long project cycles that apply to megaprojects, leaving leadership weak.
3. Decision-making, planning, and management are typically multi-actor processes involving multiple stakeholders, public and private, with conflicting interests (Aaltonen and Kujala, 2010).
4. Technology and designs are often non-standard, leading to "uniqueness bias" amongst planners and managers, who tend to see their projects as singular, which impedes learning from other projects.
5. Frequently there is overcommitment to a certain project concept at an early stage, resulting in “lock-in” or “capture,” leaving alternatives analysis weak or absent, and leading to escalated commitment in later stages. "Fail fast" does not apply; "fail slow" does (Cantarelli et al., 2010; Ross and Staw, 1993; Drummond, 1998).
6. Due to the large sums of money involved, principal-agent problems and rent-seeking behavior are common, as is optimism bias (Eisenhardt, 1989; Stiglitz, 1989; Flyvbjerg et al., 2009).
7. The project scope or ambition level will typically change significantly over time.
8. Delivery is a high-risk, stochastic activity, with overexposure to so-called "black swans," i.e., extreme events with massively negative outcomes (Taleb, 2010). Managers tend to ignore this, treating projects as if they exist largely in a deterministic Newtonian world of cause, effect, and control.
9. Statistical evidence shows that such complexity and unplanned events are often unaccounted for, leaving budget and time contingencies inadequate.
10. As a consequence, misinformation about costs, schedules, benefits, and risks is the norm throughout project development and decision-making. The result is cost overruns, delays, and benefit shortfalls that undermine project viability during project implementation and operations.
The Iron Law of Megaprojects
Over budget, over time, over and over again.
These aren't little, either: cost overruns of 1.5x are common, in bad cases they can run more than 10x, and 90% of projects have them; it is common for projects to have 0.5x or less utilization once complete. This holds for the public and private sectors, and also across countries, so things like excessive regulation or corruption aren't good explanations.
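To make the magnitude concrete, here is a back-of-the-envelope sketch in Python. The specific numbers are illustrative only, taken from the typical figures above (a 1.5x cost overrun and 0.5x utilization), applied to a hypothetical project with a healthy-looking planned benefit-cost ratio:

```python
# Illustrative numbers only: a project planned at cost 1.0 with
# planned benefit 1.4, i.e. a planned benefit-cost ratio of 1.4.
planned_cost = 1.0
planned_benefit = 1.4

# Typical outcome per the figures above: costs run 1.5x over,
# and utilization (hence realized benefit) comes in at 0.5x of plan.
actual_cost = planned_cost * 1.5
actual_benefit = planned_benefit * 0.5

planned_ratio = planned_benefit / planned_cost   # 1.4
actual_ratio = actual_benefit / actual_cost      # ~0.47

print(round(planned_ratio, 2), round(actual_ratio, 2))  # 1.4 0.47
```

A project that looked comfortably worthwhile on paper ends up returning less than half of what it costs.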
They start off badly, but they do still manage to get completed, which is due to...
The Break-Fix Model
Since the managers of megaprojects don't know what they are doing or don't have the incentives to care, inevitably something breaks. Then additional time and money are spent to fix what broke, or the conditions of the project are renegotiated, and it limps along to the next break. This process repeats until the project is finished.
If it is so terrible and we know it is terrible, why do we do it this way?
Hirschman's Hiding Hand
Because a lot of important stakeholders don't know how terrible it is. From Willie Brown, former mayor of San Francisco:
"News that the Transbay Terminal is something like $300 million over budget should not come as a shock to anyone. We always knew the initial estimate was way under the real cost. Just like we never had a real cost for the [San Francisco] Central Subway or the [San Francisco-Oakland] Bay Bridge or any other massive construction project. So get off it. In the world of civic projects, the first budget is really just a down payment. If people knew the real cost from the start, nothing would ever be approved. The idea is to get going. Start digging a hole and make it so big, there's no alternative to coming up with the money to fill it in."
Nor is this attitude without justification, for arguments have been made in its support. The first argument is exactly as Willie made it: if we knew how difficult large projects were, we would never build them.
For the second, note the title of the section is hiding, not hidden. This argument was made by Albert O. Hirschman on the basis of earlier work by J.E. Sawyer, and it says that there are errors in both the estimation of costs and the estimation of benefits, and that these errors should roughly cancel out. The problem is that Sawyer's work only pointed out that this was possible, based on a hand-picked sample of five or so projects. Hirschman nevertheless generalized it into a "Law of the Hiding Hand" and thereby legitimated lying to ourselves.
Alas, it is bunk. Aside from being falsified by the actual data, Flyvbjerg points out the non-monetary opportunity costs through the example of the Sydney Opera House. Its architect, the Dane Jørn Utzon, won the Pritzker Prize (the Nobel of architecture) in 2003 for the Sydney Opera House. It is his only major work: the catastrophic delays and cost overruns destroyed his career. Contrast this with Frank Gehry, another inspired architect, and it looks like management's bungling of the Opera House probably cost us half a dozen gorgeous landmarks.
Survival of the Unfittest
The prevailing attitude that it is perfectly acceptable to lie about and then badly manage megaprojects leads to a weird scenario where worse projects are more likely to be chosen. Consider two competing projects, one with honest and competent management, and one with dishonest and incompetent management. The costs look lower for the latter project, and the benefits look higher, and the choosers between them probably expect them to both be over budget and behind schedule by about the same amount. Therefore we systematically make worse decisions about what projects to build.
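The selection dynamic above can be sketched with a toy simulation. All numbers here are hypothetical, chosen only to illustrate the mechanism: two projects with identical true costs compete for approval on estimated cost, one estimated honestly and one lowballed.

```python
import random

random.seed(0)

def approved_project_overrun(trials=10_000):
    """Toy model: two projects compete, and the one with the lower
    estimate wins approval. The honest estimate is unbiased; the
    dishonest one is lowballed by 40%. True cost is 100 for both."""
    overruns = []
    for _ in range(trials):
        true_cost = 100.0
        honest_estimate = true_cost * random.uniform(0.9, 1.1)
        dishonest_estimate = true_cost * 0.6 * random.uniform(0.9, 1.1)
        # The chooser picks whichever project looks cheaper...
        winner_estimate = min(honest_estimate, dishonest_estimate)
        # ...and the realized overrun is measured against that estimate.
        overruns.append(true_cost / winner_estimate)
    return sum(overruns) / len(overruns)

print(round(approved_project_overrun(), 2))  # ~1.67
```

In this toy setup the lowballed project always wins approval, and the average realized cost overrun on approved projects is roughly 1.67x, despite the honest estimates being accurate.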
Light at the End of the Tunnel
Fortunately there are bright spots. During the Obama administration these failings were identified as an important policy area for the US government. It is now much more common for a megaproject failure to result in consequences for leadership, like the CEOs of BP after Deepwater Horizon, and of Airbus after the A380 superjumbo. There are megaprojects that go well and serve as examples of how to do it right, like the Guggenheim Museum in Bilbao. Lastly, there is scattered adoption of varying levels of good practices, like reference class forecasting and independent forecasting.
For those interested in learning more, Oxford has Major Programme Management at the masters level. For books there are the Oxford Handbook of Megaproject Management and Megaproject Planning and Management: Essential Readings, both edited by Flyvbjerg. I have read neither, and both are collections of papers, so I may just hunt the papers down independently, for budgetary reasons.
This is my post. It is fundamentally a summary of an overview paper, which I wrote to introduce the concept to the community, and I think it works for that purpose. In terms of improvements there are a few I would make; I would perhaps include the details about why people choose the megaproject as a format, for completeness' sake. It might also have helped to provide more examples in the post to motivate engagement: projects like power plants, chip fabs, oil rigs, and airplanes, or in other words the fundamental building blocks of modern civilization.
I continue to think it is an important problem and the subject urgently needs advancement. It seems trivially, overwhelmingly true to me that our systematic badness at big things is a huge issue and the benefits of solving that would be enormous. It is also now topical: the paper references other work which argues in favor of treating megaprojects as a distinct organizational form, and suggests it would be a good method of managing things like stimulus packages or defense procurement. If true, it would trivially be a good method for handling a problem like COVID-19 vaccine procurement and distribution.
The real opportunities here are writing further on the problem. There are a lot of different directions this could (should) go:
- Epistemic spot checks: a lot of the information comes from the business and management literature, which is not held in high regard for rigor. Validating some of the key claims, especially about the scale of megaproject spending overall, would make a good post.
- Book reviews: reviews/summaries of the available books specific to the subject should be doable; there are only a couple of them and it would make it easier for the community to get more information if they needed to.
- Case studies: these would also be book reviews, but this time of individual high profile successes (Apollo) or failures (Big Dig). This would provide a lot more color and context than high-level data like GDP percentages and budget timelines.
- Comparison with established knowledge: For example, comparing the suggested remedies in the literature with techniques taught at CFAR or something like murphyjitsu and planning fallacy, or even basic Bayesian calculation (which according to my reading is basically what they advocate). I have a notion for using the metaphor of a stag hunt to illustrate what some of the problems are.
- Current events: looking at the current state of vaccine procurement and distribution and seeing which problems might have been prevented in a megaproject management format might still provide useful information.
- Investigate actionability: in the paper, we aren't left with any information about what a person could do about this, beyond something maddeningly vague like "be chosen by a government or large corporation to manage a megaproject; don't suck." More details about how the selection process works for jobs like that, and what kind of incentives those people are under, would be important information. This is especially true in terms of evaluating it on an EA basis, where it would fall under Improving Institutional Decision Making.
Turning to the question of whether it is important to LessWrong, it seems to me the answer is clearly no. The post received very little engagement even though it was upvoted well, and the thread wasn't picked up by anyone else.
Curated. This was the first post in a while that caused me to expand my thinking meaningfully. I had a vague sense that large projects had a bunch of dysfunction. It hadn't occurred to me that really large projects might have different and/or worse systematic dysfunction, and that this might be an important lens through which to view global inadequacy.
I wonder about the suitability of this field as a target for EA careers. An unacceptably high percentage of that ~8% of GDP is wasted, and the picture gets worse when we entertain opportunity costs. Insofar as economic growth in general is good for alleviating suffering, the ability to prevent hundreds of millions of dollars in waste per project seems like a good deal.
The same mechanism occurs in developing countries, which are the traditional place to look for high impact interventions. It seems to me that in places without a lot of other infrastructure built already, and not a lot of capital to invest, the utilization and opportunity cost factors are bigger than they would be otherwise.
The newness of the field strongly suggests it is neglected, although I don't have any sense of how people are chosen to manage this size of project so even if the expertise is neglected it still might be very difficult to apply it because of network effects or the like.
I suspect there would be a high replacement effect, i.e. if we managed to spend less on these big projects, we'd probably just spend the excess on more big projects or be more ambitious with the ones we have. Many megaprojects are not obviously contributing to increasing welfare on the margin, although perhaps if megaprojects were cheaper we'd be more willing to invest in ones that are more about increasing welfare than status. (I suspect status is a major driver of megaprojects, since many of the examples that come to mind are unnecessarily ambitious, where simpler, cheaper solutions would have worked but would have been less prestigious.)
My intuition is that there wouldn't be much of a replacement effect, unless you count as a replacement effect different groups becoming more likely to attempt megaprojects because megaprojects succeed more often.
I expect this for a few reasons. First, megaprojects are usually organized around a specific need, and I would be surprised if a given stakeholder (like a city or a corporation) had a meaningful backlog. Second, the current amount of spending is an accident; I think this is a different case from one where they deliberately spent much less than they originally planned. Lastly, most of this is debt spending, and I feel like organizations don't go looking for ways to absorb all of their available credit.
It does occur to me that the debt point probably weighs against EA value, because that effectively means the savings are amortized over the length of the financing, and because the same amount won’t necessarily be spent elsewhere it isn’t a direct benefit to anyone.
Ever since this post came out I have had this lingering feeling that megaprojects and their common pathologies is really important for understanding how modern society works. I still haven't figured out why exactly I believe this, but maybe the review process can help me figure it out.
Interesting. One thing I would like to see more of here -- but perhaps will have to dig for myself -- is the structure of the project management. It seems one clear characteristic is the complexity of the whole. Cost, and overall "size", are clearly well correlated with that complexity, but I don't think cost is the critical feature. I would perhaps pose it as a separability issue: can the overall whole be chunked out into bite-sized bits without too much coordination-type work, or not?
As more of a side thought, I wonder if anyone has done much work on spill-over effects of these megaprojects, and whether any categorization or characteristics are identifiable. We know there have been spillovers from both the space program and military programs. I'm not sure about more commercial or government megaprojects. But you would think all infrastructure-type projects should benefit from some positive network externality effects.
I'm not trying to make an "if you build it, they will come" argument here. If anything, it would be a "people are pretty good at figuring out how to make lemonade from a lemon" type of argument. This also goes along with the too-complex-to-manage-well cases. We often will not know what the end benefits will be for many things: if the USA hadn't done the electrification project to get power to rural communities, would we have the same type of communications networks we currently have? Worse? Better?
Of course this is not really about how to better manage such projects, and it's likely better management would allow such aspects greater potential and lessen any such effects.
I would perhaps pose it as a separability issue. Can the overall whole be chunked out into bite-sized bits without too much coordination type work or not?
My understanding is no, it cannot. What you describe is the basic approach to project management, and the failure of that approach motivates the field. I can think of two specific reasons why:
The first is scale, and I think an intuition similar to Dissolving the Fermi Paradox applies: the question is not the likelihood of each part failing, but rather the likelihood of at least one bottleneck part failing. As the project grows large enough, we should expect to be perpetually choking on one bottleneck or other.
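A minimal illustration of that intuition, with hypothetical numbers: even if each component of a project is individually reliable, the chance that at least one of n independent bottleneck components fails grows quickly with n.

```python
def p_any_failure(p_each: float, n: int) -> float:
    """Probability that at least one of n independent components
    fails, given each fails with probability p_each."""
    return 1 - (1 - p_each) ** n

# With each component 99% reliable:
for n in (10, 100, 1000):
    print(n, round(p_any_failure(0.01, n), 3))
# 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.0
```

At the scale of a thousand interdependent parts, some bottleneck failing somewhere is a near certainty, which is why we should expect the project to be perpetually choking on one.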
The second is magnitude, which is really the focus of the above paper. Once projects reach a large enough absolute size, more and different stakeholders enter the picture. Each new stakeholder is a stupendous increase in the political complexity of the project, so much so that, even at the smaller level of projects where we know the right answers about how to execute them, applying those answers is often impossible because of the different interests at play. This is why so much effort goes into keeping the number of stakeholders in decision-making as small as possible.
But you would think all infrastructure type projects should benefit from some positive network externality effects.
This is a component of the economic sublime, as I understand it. One example of the kind of stakeholder who enters the picture would be a restaurant owner a block away from the construction site, who expects to benefit from the redirected foot traffic due to construction, or the business of the construction workers, or the increased foot traffic after the project is completed, or all of the above.
As the project grows large enough, we should expect to be perpetually choking on one bottleneck or other.
I did understand that, but was suggesting that the criterion for being a megaproject is really not about cost -- though I fully expect high costs to be associated with such efforts. As you say, they cannot easily be separated into more manageable sub-projects. Perhaps I can rephrase my thought: is the position that any and every project that costs $X or more necessarily has this type of complexity and non-separability?
If not, then the ability to classify high-cost projects should be useful -- and should point to alternative management requirements if all projects greater than $X still suffer from many of the same inefficiencies.
Each new stakeholder is a stupendous increase in the political complexity of the project, so much so that even at the smaller level of projects where we know the right answers about how to do them applying the right answers is often impossible because of the different interests at play.
Sure, and you run into the whole problem of what exactly the right answer is, as the different stakeholders are maximizing slightly different (and likely equally legitimate) criteria. That alone is not a bad or wrong thing. But the approach of limiting participation seems, in a way, exactly the same as chunking the project into manageable bites. And it's not clear that can be done much better than disassembling the project into smaller, simpler, and more manageable sub-projects.
If so, then limiting the stakeholders means the assessment of the project will always be one of partial failure. That would also drive various types of cost overrun and time delay, when the excluded stakeholders seek to influence the project from outside the management process.
It's not clear to me that would be the optimal solution to all mega projects.
I think this:
Is the position that any and every project that costs $X or more necessarily has the type of complexity and non-separability?
is a reasonable approximation of Flyvbjerg's position. As you say, it is not really about costs per se; the cost is a heuristic for things that drive complexity and non-separability, while also being the primary metric for success.