Cross-post from EA Forum, follow-up to EA needs consultancies.
Below is a list of features that make a report on some research question more helpful to me, along with a list of examples.
I wrote this post for the benefit of individuals and organizations from whom I might commission reports on specific research questions, but others might find it useful as well. Much of what's below is probably true for Open Philanthropy in general, but I've written it in my own voice so that I don't need to try to represent Open Philanthropy as a whole.
For many projects, some of the features below are not applicable, or not feasible, or (most often) not worth the cost, especially time-cost. But if present, these features make a report more helpful and action-informing to me:
The strongest forms of evidence available on the question were generated/collected. This is central but often highly constrained, e.g. we generally can't run randomized trials in geopolitics, and major companies won't share much proprietary data. But before commissioning a report, I'd typically want to know what the strongest evidence that could in theory be collected would be, and how much gathering or producing it might cost.
Thoughtful cost-benefit analysis, where relevant.
Strong reasoning transparency throughout, of this particular type. In most cases this might be the most important feature I'm looking for, especially since many research questions don't lend themselves to more than 1-3 types of evidence anyway, all of them weak. In many cases, especially when I don't have much prior context and trust built up with the producers of a report, I would like to pay for a report to be pretty "extreme" about reasoning transparency, e.g. possibly:
Authors and other major contributors who have undergone special training in calibration and forecasting,[2] e.g. from Hubbard and Good Judgment. This should help a report's contributors "speak our language" of calibrated probabilities and general Bayesianism, and perhaps improve the accuracy/calibration of the claims in the report itself. I'm typically happy to pay for this training for people working on a project I've commissioned.
External reviews of the ~final report, including possibly from experts with different relevant specializations and differing/opposed object-level views. This should be fairly straightforward with sufficient honoraria for reviewers, and sufficient time spent identifying appropriate experts.
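(An aside on what "calibration" means operationally, since the post doesn't define it: one standard way to score probabilistic forecasts is the Brier score, the mean squared error between stated probabilities and binary outcomes. The sketch below is my own illustration, not something from the training programs mentioned above.)

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecasts (probabilities in [0, 1])
    and binary outcomes (0 or 1). Lower is better; 0.25 is what you
    get from always saying 50%."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A forecaster who says "80%" about events that happen ~80% of the
# time is well calibrated; confident misses are penalized heavily.
forecasts = [0.9, 0.8, 0.7, 0.3]
outcomes = [1, 1, 0, 0]
print(brier_score(forecasts, outcomes))
```

Calibration training, roughly, is practice at making one's stated probabilities track observed frequencies, with feedback from scores like this one.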
Some of the strongest examples of ideal reports of this type that I've seen are:
GiveWell's intervention/program reports[3] and top charity reviews.[4]
David Roodman's evidence reviews, e.g. on microfinance, alcohol taxes, and the effects of incarceration on crime (most of these were written for Open Philanthropy).
Other examples include: