Suppose you are Bob Steele, structural engineer extraordinaire, and you’ve recently completed your doctoral thesis in advanced bridge aerodynamics. You see how a new generation of bridge technology could significantly improve human welfare. Bridges are not as direct as bed nets or cash transfers, but improved transport infrastructure in developing regions boosts economic productivity, with gains flowing through to healthcare, education, and other life-improving services. There’s no time to waste. You found Bridgr.io, put the hard in hardware startup, and get to work bringing your revolutionary technologies to the world.
Common advice is that startups should have a few core metrics which capture their goals, help them track their progress, and ensure they stay focused. For Bridgr.io those might reasonably be revenue, clients, and number of bridges built. There is a danger in this, however.
Although Bridgr.io’s ultimate goal is to have built bridges in the right places, the most pressing tasks are not construction tasks. They’re research tasks: refining the designs and the construction process. Until Bridgr.io hits on a design which works and can be scaled, there is no point sourcing steel and construction workers for a thousand bridges. The first step should be building a sufficient number of test and prototype bridges (or simulations), not with the goal that these bridges will transport anyone, just with the goal of learning.
Phase 1: Figure out what to do and how to do it.
Phase 2: Do it.
It’s true that if Bridgr.io tries to build as many bridges as possible as quickly as possible, they will learn along the way what works and what doesn’t; R&D will happen automatically. But I claim that the kind of learning that happens as a by-product of trying to do the thing (prematurely) is often inefficient, ineffective, and possibly lethal to your venture.
Superficially, building bridges to have bridges and building bridges to figure out which bridges to build both involve building bridges. Yet in the details they diverge. If you’re trying to do the thing, you often spend your time trying to mobilize enough resources for the all-out effort. You throw everything you’ve got at it, because that’s what it would take to build a thousand bridges all over the globe. Acting to learn is different. Rather than scale, it’s about taking carefully selected, targeted actions to reduce uncertainty. You don’t seek a contract for fifty bridges; instead, you focus on building three wildly different designs to help you test your assumptions.
Among other things, the value of information can decline rapidly with scale. If you can build five bridges, as far as the fundamentals go, you can build fifty. And scaling your current process doesn’t necessarily test the uncertainty that matters. Perhaps building fifty bridges in the United States doesn’t test the viability of building them in Central Africa. If you were building to learn, you’d build a couple here and a couple there.
The mistake I see, and the motivation for this post, is many people skipping over the learning phase, or trying to smush it into the actual doing. They seek to maximize their metrics now rather than first investing in figuring out what it is they really should be doing, and what will work at all. The mistake is always operating with a doing-intention when really a learning-intention is needed first.
Doing-intention: You’re building a bridge because you want a bridge. You want a physical outcome in the world. You’re doing the actual thing.

Learning-intention: You’re building a bridge because you’re trying to understand bridges better. It’s true that ultimately you want an actual physical bridge, but this bridge isn’t for that. This bridge is just about gaining information about what doesn’t fall down.
In the context of Effective Altruism
I have some concern that this error is common among those doing directly altruistic work. If, like Bob Steele, you believe that your intervention could be helping people right now, then it’s tempting to want to ramp up production and just do the good thing. Every delay might result in the loss of lives. When the work is very real, it’s hard to step back and treat it like an abstract information problem. (Possibly the pressures are no weaker in the startup world, but that realm might benefit from stronger cultural wisdom exhorting people not to scale before they have “product-market fit”.)
Possible causes of this error-mode
Why do people make this class of mistake? A few guesses:
- The pressure to present results now. Donors, funders, and employees all want to see something for the time and money invested.
- The dislike of uncertainty. It’s more comfortable to commit fully to a plausibly good Plan A, whose likelihood of success you can talk up, than to stay in limbo while you test Plans A, B, and C.
- The underestimation of how much uncertainty remains even after early evidence suggests a plan or direction might be a good idea. As an example, a company I once worked for spent over a year pursuing a misguided strategy because, early on, it landed one large deal with what turned out to be an atypical client.
- Although people have the notion of an experimental mindset and the value of information, there’s a failure to adopt an experimental/research mindset once certainty rises above a certain level. People think of conducting experiments when they don’t know whether something will work at all, but not when the overall picture looks promising and what remains is implementation details. For instance, if I have a program to distribute bed nets, I might have 75% credence that it will do a lot of good, even if I’m uncertain about just how much good, what my opportunity costs are, and the true best way to implement it. At 75% confidence (or much less), I might stop thinking of my program as experimental and fall into a maximizing, doing-intention. Show everyone them big results.
- This is especially lethal if your goals are extremely long-term with minimal feedback, e.g. for long-termist effective altruists. There will be many plausibly good things to do, but if you scale up prematurely by turning your experiments into all-out interventions, you might either miss far greater opportunities or fail to implement your intervention in a way that works at all on the long-term scale.
- Community feedback can also push in the wrong direction. People looking at an EA project from the outside will approve of efforts to do good backed by a decent plausibility story for effectiveness. Beyond that, scale and certainty are probably perceived as more impressive than an array of small-scale experiments and a list of uncertainties.
Final caveat: the perils of learning-intentions
As much as I’m advocating for them here, there are of course a great many perils associated with learning-intentions too. Learning can easily become divorced from real-world goals, and picking the right actions to learn the information you actually need is no small challenge. Faced with a choice between a degenerate doing-intention and a degenerate learning-intention, I think I would pick the former, since it is more likely to have empiricism on its side.