TBD is a quarterly-ish newsletter about deploying knowledge for impact, learning at scale, and making more thoughtful choices for ourselves and our organizations. This is the third issue, which was originally published in September 2019. Enjoy!  --Ian

What I've Been Reading

The Crisis of Evidence Use
In the last issue of To Be Decided, I highlighted a study showing that most foundations have trouble commissioning evaluations that yield meaningful insights for the field, grantees, or even their own colleagues. A remarkable finding, but you could be forgiven for wondering how much we can conclude from just one study. Sadly (and ironically), there is plenty of evidence to reinforce the point that people with influence over social policy simply don't read or use the vast majority of the knowledge we produce, no matter how relevant it is. What's really astounding is how much time, money, and attention we spend on evidence-building activities for apparently so little concrete gain. We are either vastly overvaluing or vastly undervaluing evidence. We need to get it right, because those are real resources that could be spent elsewhere, and the world is falling apart around us while we get lost in our spreadsheets and community engagement methodologies.
(Twitter thread | Blog post)

Getting Smarter About Learning from Other People's Research

While there's no silver bullet for improving evidence use, one thing we can do is get smarter about the ways we construct knowledge from other people's research. Research synthesis is a relatively new and fast-growing scientific methodology. Unlike a traditional literature review, a research synthesis is hypothesis-driven and treats a body of evidence as a data set, while still leaving room for methodological diversity and qualitative approaches. My understanding of research synthesis is informed by my time working with Harris Cooper, who served on the advisory board of my former think tank, Createquity. Dr. Cooper's handbook on research synthesis is an excellent place to start if you want to understand its key principles and learn how to do one yourself.

(Twitter thread)

Decision-Making for Impact: A Guide

How has your life been shaped by the decisions of government officials, donors, and nonprofit executives? If you're anything like me, probably in too many ways to count. I think that’s why we all got into this work in the first place: to make a difference in people’s lives. But it took me a while to realize that the difference we make and the decisions we make are one and the same. Better decisions foster a legacy we can be proud of.

So how can we get better? Lots of ways! Decision-making is an exciting frontier in social impact precisely because there is so much untapped potential to improve how we do things. For decades now, scholars and practitioners have been pioneering methodologies for analyzing and making decisions that have seen barely any adoption in philanthropy, government, or impact investing. Drawing from that body of work, I’ve developed my own process to help myself and my collaborators make critical decisions with intention and focus. Here, I’m sharing the key elements of that process so that you can benefit from them in your own practice.

(Keep reading)

Stuff You Should Know About

  • You probably heard about the Business Roundtable's announcement last month that, from the perspective of nearly 200 CEOs of major American and multinational corporations, the purpose of a corporation is no longer solely to maximize profits for investors but also to serve the interests of other stakeholders, including the environment. Naturally, the impact investing and CSR communities were all over this, and you can read roundups of responses and commentary from ValueEdge Advisors and ImpactAlpha. My own hot take: it's a symbolic move, but symbols matter.
  • I write a lot in this newsletter about decision science, but how does decision science dovetail with data science? That question has been a topic of increasing interest in the decision analysis community, and this year's Decision Analysis Affinity Group (DAAG) conference made an explicit attempt to bring these two practitioner communities together. You can read an informative write-up of how it went from Tracy Allison Altman, and the Strategic Decisions Group recently hosted a webinar entitled "Decision Science for Data Scientists," which is available to watch for free.
  • With private foundations and nonprofits increasingly naming equity and racial justice as core to their missions, many are turning to the Equitable Evaluation Initiative to help them live those values in their evaluation practice. As explained in its 2017 framing paper, Equitable Evaluation is grounded in three principles: 1) evaluation and evaluative work should be in the service of equity; 2) evaluation should answer critical questions about historical and cultural context, the effect of strategies on different populations, and systemic drivers of inequity; and 3) evaluative work should be designed and implemented with equity values in mind. The paper identifies a number of "working orthodoxies" to put to rest, including "Grantees and strategies are the evaluand, but not the foundation" and "Credible evidence comes [only] from quantitative data and experimental research."

That's all for now!

If you enjoyed this edition of TBD, please consider forwarding it to a friend. It's easy to sign up here. See you next time!

