From the top-notch 80,000 Hours podcast, and their recent interview with Holden Karnofsky (Executive Director of the Open Philanthropy Project).
What follows is a short analysis of what academia does and doesn't do, followed by a few discussion points from me at the end. I really like this frame, and I'll likely use it in conversation in the future.
Robert Wiblin: What things do you think you’ve learned, over the last 11 years of doing this kind of research, about in what situations you can trust expert consensus and in what cases you should think there’s a substantial chance that it’s quite mistaken?
Holden Karnofsky: Sure. I mean I think it’s hard to generalize about this. Sometimes I wish I would write down my model more explicitly. I thought it was cool that Eliezer Yudkowsky did that in his book, Inadequate Equilibria. I think one thing that I especially look for, in terms of when we’re doing philanthropy, is I’m especially interested in the role of academia and what academia is able to do. You could look at corporations, you can understand their incentives. You can look at Governments, you can sort of understand their incentives. You can look at think-tanks, and a lot of them are just like … They’re aimed directly at Governments, in a sense. You can sort of understand what’s going on there.
Academia is the default home for people who really spend all their time thinking about things that are intellectual, that could be important to the world, but that there’s no client who is like, “I need this now for this reason. I’m making you do it.” A lot of the times, when someone says, “Someone should, let’s say, work on AI alignment or work on AI strategy or, for example, evaluate the evidence base for bed nets and deworming, which is what GiveWell does … ” A lot of the time, my first question, when it’s not obvious where else it fits, is would this fit into academia?
This is something where my opinions and views have evolved a lot. I used to have this very simplified picture: “Academia. That’s this giant set of universities. There’s a whole ton of very smart intellectuals who, between them, can do everything. There’s a zillion fields. There’s a literature on everything, as has been written on Marginal Revolution, all that sort of thing.” I really never knew when to expect that something was going to be neglected and when it wasn’t, and it took a giant literature review to figure out which was which.
I would say I’ve definitely evolved on that. Today, when I think about what academia does, I think it is really set up to push the frontier of knowledge, especially in the harder sciences. I would say the vast majority of what is going on in academia is people trying to do something novel, interesting, clever, creative, different, new, provocative, that really pushes the boundaries of knowledge forward in a new way. I think that’s obviously a really important and great thing, and I’m really, incredibly glad we have institutions to do it.
I think there are a whole bunch of other activities that are intellectual, that are challenging, that take a lot of intellectual work and that are incredibly important and that are not that. They have nowhere else to live. No one else can do them. I’m especially interested, and my eyes especially light up, when I see an opportunity to … There’s an intellectual topic, it’s really important to the world but it’s not advancing the frontier of knowledge. It’s more figuring out something in a pragmatic way that is going to inform what decision makers should do, and also there’s no one decision maker asking for it as would be the case with Government or corporations.
To give examples of this: I think GiveWell is the first place where I might have initially expected that development economics was going to tell us what the best charities are. Or, at least, tell us what the best interventions are. Tell us whether bed nets, deworming, cash transfers, agricultural extension programs, or education improvement programs are the ones helping the most people for the least money. There’s really very little work on this in academia.
A lot of times, there will be one study that tries to estimate the impact of deworming, but very few or no attempts to really replicate it. It’s much more valuable to academics to have a new insight, to show something new about the world, than to try to nail something down. It really got brought home to me recently when we were doing our Criminal Justice Reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.
David Roodman, who is basically the person I consider the gold standard of a critical evidence reviewer, someone who can really dig into a complicated literature and come up with the answers, did what I think was a really wonderful and really fascinating paper, which is up on our website. He looked for all the studies on the relationship between incarceration and crime: what happens if you cut incarceration? Do you expect crime to rise, to fall, to stay the same? He really picked them apart. What happened is he found fatal flaws in about half of the best, most prestigious studies when he just tried to replicate them or redo their conclusions.
When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, and came out with a conclusion that was different from what you naively would have thought: his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now [laughter]. I mean starting with the most prestigious ones and then going to the less prestigious.
Robert Wiblin: Why is that?
Holden Karnofsky: Because his paper, I think, is incredibly well done and incredibly important, but in some sense, in some kind of academic taste sense, there’s nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker’s or philanthropist’s perspective, all very interesting stuff, but did we really find a new method for establishing causality? Did we really find a new insight about how the mind of a perpetrator works? No. We didn’t advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that’s a common theme: our academic institutions were set up a while ago, at a time when it seemed like the most valuable thing to do was just to search for the next big insight.
These days, they’ve been around for a while. We’ve got a lot of insights. We’ve got a lot of insights sitting around. We’ve got a lot of studies. I think a lot of the time what we need to do is take the information that’s already available, take the studies that already exist, and synthesize them critically and say, “What does this mean for what we should do? Where should we give money? What should policy be?”
I don’t think there’s any home in academia to do that. I think that creates a lot of the gaps. This also applies to AI timelines where it’s like there’s nothing particularly innovative, groundbreaking, knowledge frontier advancing, creative, clever about just… It’s a question that matters. When can we expect transformative AI and with what probability? It matters, but it’s not a work of frontier advancing intellectual creativity to try to answer it.
A very common theme in a lot of the work we advance is instead of pushing the frontiers of knowledge, take knowledge that’s already out there. Pull it together, critique it, synthesize it and decide what that means for what we should do. Especially, I think, there’s also very little in the way of institutions that are trying to anticipate big intellectual breakthroughs down the road, such as AI, such as other technologies that could change the world. Think about how they could make the world better or worse, and what we can do to prepare for them.
I think historically, when academia was set up, we were in a world where it was really hard to predict what the next scientific breakthrough was going to be. It was really hard to predict how it would affect the world, but it usually turned out pretty well. I think, for various reasons, the scientific landscape may be changing now. In some ways, there are arguments it’s getting easier to see where things are headed. We know more about science. We know more about the ground rules. We know more about what cannot be done. We know more about what probably, eventually can be done.
I think it’s somewhat of a happy coincidence so far that most breakthroughs have been good. To say, I see a breakthrough on the horizon. Is that good or bad? How can we prepare for it? That’s another thing academia is really not set up to do. Academia is set up to get the breakthrough. That is a question I ask myself a lot is here’s an intellectual activity. Why can’t it be done in academia? These days, my answer is if it’s really primarily of interest to a very cosmopolitan philanthropist trying to help the whole future, and there’s no one client and it’s not frontier advancing, then I think that does make it pretty plausible to me that there’s no one doing it. We would love to change that, at least somewhat, by funding what we think is the most important work.
Robert Wiblin: Something that doesn’t quite fit with that is that you do see a lot of practical psychology and nutrition papers that are trying to answer questions that the public have. Usually done very poorly, and you can’t really trust the answers. But it’s things like, you know, “Does chocolate prevent cancer?” Or some nonsense small-sample paper like that. That seems like it’s not pushing forward methodology, it’s just doing an application. How does that fit into this model?
Holden Karnofsky: Well, I mean, first up, it’s a generalization. So, I’m not gonna say it’s everything. But, I will also say, that stuff is very low prestige.
And, first off: that work is not the hot thing to work on, and for that reason, I think, correlated with that, you see a lot of work that isn’t very well funded, isn’t very well executed, isn’t very well done, and doesn’t tell you very much. The vast majority of nutrition studies out there are just … you know, you can look at a sample report on carbs and obesity that Luke Muehlhauser did for us. These studies are just … if someone had gone after them a little harder, with the energy and the funding that we put into some of the fundamental stuff, they could have been a lot more informative.
And then the other thing, which I think you will see even less of, is good critical evidence reviews. So, you’re right, you’ll see a study that’s, you know, “Does chocolate cause disease?” or whatever, and sometimes that study will use established methods, and it’s just another data point. But the part about taking what’s out there and synthesizing it all, and saying, “There’s a thousand studies, here are the ones that are worth looking at. Here are their strengths, here are their weaknesses” — you see much less of that.
There are literature reviews, but I don’t think they’re a very prestigious thing to do, and I don’t think they’re done super well. And so, for example, with some of the stuff GiveWell does, they have to reinvent a lot of this, and they have to do a lot of the critical evidence reviews themselves, ’cause they’re not already out there.
The most interesting parts of this to me were:
- Since reading Inadequate Equilibria, I've mostly thought of science through the lens of coordination failures. This new framing is markedly more positive: instead of talking about failures, it talks about successes (Old: "Academia is the thing that fails to do X" vs New: "Academia is the thing that is good at Y, but only Y"). As well as helping me model academia more fruitfully, I suspect this framing will be more palatable to the people I present it to.
- To state it in my own words: this model of science says the institution is good - not at all kinds of intellectual work, but specifically the subset that is 'discovering new ideas'. This is to be contrasted with synthesis of old ideas into policy recommendations, or replication of published work (for any practical purpose).
- For example, within science it is useful to have more data about which assumptions in a given model are actually true, yet in this frame no individual researcher is incentivised to do anything but publish the next new idea, and so nobody does the replications either. (I know, predicting a replication crisis is very novel of me.)
- This equilibrium model suggests to me that we're living in a world where the individual who can capture the most value is not the person coming up with new ideas, but the person who can best turn current knowledge into policy recommendations.
- That is, the 80th percentile person at discovering new ideas will not create as much value as the 50th percentile person at synthesising and understanding a broad swathe of present ideas.
- My favourite example of such a work is Scott's *Marijuana: Much More Than You Wanted to Know*, which finds that the term that should capture most of the variance in your model (of the effects of legalisation) is how much marijuana affects driving ability.
- Also, in this model of science we should distinguish 'value' from 'competitiveness within academia'; the latter is in fact the very thing you would be trading away in order to do this kind of work.
Some questions for the comments:
- What is the main thing this model doesn't account for, or over-counts? That is, what is the big thing this model forgets that science can't do; alternatively, what is the big thing this model says science can do that it actually can't?
- Is the framing about the main place an intellectual can have outsized impact correct? That is, is the marginal researcher who does synthesis of existing knowledge in fact the most valuable, or is it some other kind of researcher?