LessWrong note: I wrote this in a way slightly more optimized for the EA Forum than for LessWrong, because the post seemed slightly more appropriate there.

Summary

I think it makes sense for Effective Altruists to pursue prioritization research to figure out how best to improve the wisdom and intelligence[1] of humanity. I describe endeavors that would optimize for longtermism, though similar research efforts could make sense for other worldviews.

The Basic Argument

For those interested in increasing humanity’s long-term wisdom and intelligence[1], several wildly different types of interventions are on the table. For example, we could improve how we teach rationality, or we could make progress on online education. We could build forecasting systems and data platforms. We might even consider something more radical, like brain-computer interfaces or highly advanced pre-AGI AI systems.

These interventions share many of the same benefits. If we figure out ways to remove people’s cognitive biases, causing them to make better political decisions, that would be similar to the impact of forecasting systems on their political decisions. It seems natural to try to figure out how to compare them. We wouldn’t want to invest a lot of resources into one field, only to realize 10 years later that we could have spent them better in another. This prioritization is pressing because Effective Altruists are currently scaling up work in several relevant areas (rationality, forecasting, institutional decision making) but mostly ignoring others (brain-computer interfaces, fundamental internet improvements).

[Diagram: a set of candidate interventions on the left, each feeding into a central “wisdom and intelligence” node, which in turn leads to downstream benefits.]

The point of this diagram is that all of the various interventions on the left could contribute to helping humanity gain wisdom and intelligence. Different interventions produce other specific benefits as well, but these are more idiosyncratic in comparison. The benefits that come via the intermediate node of wisdom and intelligence can be directly compared between interventions.

 

In addition to caring about prioritization between cause areas, we should also care about estimating the importance of wisdom and intelligence work as a whole. This estimate matters for multiple interventions at once, so it doesn’t make much sense to ask each intervention’s research base to tackle the question independently. I’ve previously done a lot of thinking about this while estimating the value of my own forecasting work. It felt a bit silly to have to answer this much bigger question about wisdom and intelligence, which sits far outside forecasting research proper.

I think we should consider doing serious prioritization research around wisdom and intelligence for longtermist reasons.[2] This work could both inform us of the cost-effectiveness of all of the available options as a whole, and help us compare directly between different options.
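To make the “compare directly” part concrete, here is a minimal sketch of the kind of back-of-the-envelope model such research might start from: a Monte Carlo comparison of two hypothetical interventions. Every number in it is invented purely for illustration; real prioritization research would need to ground these estimates.

```python
import math
import random

def lognormal(median, factor):
    # Lognormal parameterized by a median and a multiplicative uncertainty
    # factor; ~68% of samples fall within [median / factor, median * factor].
    return random.lognormvariate(math.log(median), math.log(factor))

N = 100_000
wins = 0
for _ in range(N):
    # Hypothetical "units of wisdom and intelligence gained per $1M" for
    # two interventions. These numbers are made up for illustration only.
    forecasting = lognormal(10, 3)  # modest median, moderate uncertainty
    bci = lognormal(3, 10)          # lower median, far wider uncertainty
    if bci > forecasting:
        wins += 1

print(f"P(brain-computer interfaces beat forecasting platforms) ≈ {wins / N:.2f}")
```

The point of a model like this is that even under very wide uncertainty, it can output something decision-relevant, such as the probability that one option beats another per dollar spent.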

Strong prioritization research between different wisdom and intelligence interventions might at first seem daunting. There are clearly many uncertainties and required judgment calls, and we don’t even have good ways of measuring wisdom and intelligence at this point.

However, I think the Effective Altruist and Rationalist communities would prove up to the challenge. GiveWell’s early work drew skepticism for similar reasons. It took a long time for Quality-Adjusted Life Years to be accepted and adopted, but there’s since been a lot of innovative and instructive progress. Our communities now have the experience of hundreds of person-years of prioritization research, and at least a dozen domain-specific prioritization projects[3]. Maybe prioritization work in wisdom and intelligence isn’t far off.
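And while we lack any overall measure of wisdom and intelligence, narrow slices of it are already measurable today. For instance, one component of the cluster sketched in the footnote, forecasting calibration, can be scored with standard proper scoring rules. Here is a minimal sketch; the forecasts and outcomes are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    # Mean squared error between probability forecasts and binary outcomes:
    # 0.0 is perfect; always guessing 50% scores 0.25.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: five yes/no questions, one forecaster.
forecasts = [0.9, 0.7, 0.2, 0.95, 0.4]  # stated probabilities of "yes"
outcomes  = [1,   1,   0,   1,    0]    # what actually happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```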

List of Potential Interventions

I brainstormed an early list of potential interventions with examples of existing work. I think all of these could be viable candidates for substantial investment.

  • Human/organizational
    • Rationality-related research, marketing, and community building (CFAR, Astral Codex Ten, LessWrong, Julia Galef, Clearer Thinking)
    • Institutional decision making
    • Academic work in philosophy and cognitive science (GPI, FHI)
    • Cognitive bias research (Kahneman and Tversky)
    • Research management and research environments (for example, understanding what made Bell Labs work)
  • Cultural/political
    • Freedom of speech, protections for journalists
    • Liberalism (John Locke, Voltaire, many other intellectuals)
    • Epistemic Security (CSER)
    • Epistemic Institutions
  • Software/quantitative
    • Positive uses of AI for research, pre-AGI (Ought)
    • “Tools for thought” (note-taking, scientific software, collaboration)
    • Forecasting platforms (Metaculus, select Rethink Priorities research)
    • Data infrastructure & analysis (Faunalytics, IDInsight)
    • Fundamental improvements in the internet / cryptocurrency
    • Education innovations (MOOCs, YouTube, e-books)
  • Hardware/medical
    • Lifehacking/biomedical (nootropics, antidepressants, air quality improvements, light therapy, quantified self)
    • Genetic modifications (cloning, embryo selection)
    • Brain-computer interfaces (Kernel, Neuralink)
    • Digital people (FHI, Age of Em)

Key Claims

To summarize and clarify, here are a few claims that I believe. I’d appreciate insightful pushback from those who are skeptical of any of them.

  1. “Wisdom and intelligence” (or something very similar) is a meaningful and helpful category.
  2. Prioritization research can meaningfully compare different wisdom and intelligence interventions.
  3. Wisdom and intelligence prioritization research is likely tractable, though challenging. It’s not dramatically more difficult than global health or existential risk prioritization.
  4. Little of this prioritization work has been done so far, especially publicly.
  5. Wisdom and intelligence interventions are promising enough to justify significant work in prioritization.

Open Questions

This post is short and, of course, leaves open a bunch of questions. For example:

  1. Does “wisdom and intelligence” really represent a tractable idea to organize prioritization research around? What other options might be superior?
  2. Would wisdom and intelligence prioritization efforts face any unusual challenges or opportunities? (This would help us craft these efforts accordingly.)
  3. What specific research directions might wisdom and intelligence prioritization work investigate? For example, it could be vital to understand how to quantify group wisdom and intelligence.
  4. How might Effective Altruists prioritize this sort of research? Or, how would it rank on the ITN (importance, tractability, neglectedness) framework? (A toy calculation follows this list.)
  5. How promising should we expect the best identifiable interventions in wisdom and intelligence to be? (This relates to the previous question.)
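On question 4, here is a toy sketch of the ITN arithmetic. Every input is an invented placeholder on a subjective 0-10 scale; only the shape of the calculation is meant to carry over.

```python
# Toy ITN (importance, tractability, neglectedness) scoring. All inputs
# are invented placeholders, not actual assessments of this cause area.
importance = 8     # how much value is at stake if the area went well
tractability = 5   # how much marginal work seems to accomplish
neglectedness = 7  # how few resources already flow into the area

# A common convention is to combine the factors multiplicatively, so that
# a zero on any one axis zeroes out the whole score.
score = importance * tractability * neglectedness
print(f"Toy ITN score: {score} out of a possible 1000")
```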

I intend to write about some of these later. But, for now, I’d like to allow others to think about them without anchoring.

There’s some existing work advocating for broad interventions in wisdom and intelligence, and there’s existing work on the effectiveness of particular interventions. However, I’m not familiar with existing research that prioritizes between these interventions (please message me if you know of such work).

Select discussion includes, or can be found by searching for:

 

Thanks to Edo Arad, Miranda Dixon-Luinenburg, Nuño Sempere, Stefan Schubert, and Brendon Wong for comments and suggestions.


[1]: What do I mean by “wisdom and intelligence”? I expect this to be roughly intuitive to some readers, especially with the attached diagram and list of example interventions. The important cluster I’m going for is something like “the overlapping benefits that would come from the listed interventions.” I expect this to look like some combination of calibration, accuracy on key beliefs, the ability to efficiently and effectively do intellectual work, and knowledge about important things. It’s a cluster that’s arguably a subset of “optimization power” or “productivity.” I might spend more time addressing this definition in future posts, but thought such a discussion would be too dry and technical for this one. All that said, I’m really not sure about this, and hope that further research will reveal better terminology.

[2]: Longtermists would likely have a lower discount rate than others. This would allow for more investigation of long-term wisdom and intelligence interventions. I think non-longtermist prioritization in these areas could be valuable but would be highly constrained by the discount rates involved. I don’t particularly care about the question of “should we have one prioritization project that tries to separately optimize for longtermist and non-longtermist theories, or should we have separate prioritization projects?”

[3]: GiveWell, Open Philanthropy (in particular, subgroups focused on specific cause areas), Animal Charity Evaluators, Giving Green, Organization for the Prevention of Intense Suffering (OPIS), Wild Animal Initiative, and more.

Comments (8)

In general I think this is a promising area of research, not just for prioritization, but also for recognition that it is indeed an EA cause area. In fact, because in most respects a lot of this research is quite nascent, it's not clear to me that cause prioritization in the classic sense makes a ton of sense over simply running small experiments in these different areas and seeing what we learn. I expect that the value of information is high enough for most of the things you suggested that running, say, 15 grant experiments each costing $5,000-$15,000 is a more cost-effective intervention in terms of giving us data than a traditional cost-effectiveness analysis (although, likely, the actually most effective thing to do is to combine the two in a feedback loop).

That's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in a very few of the intervention areas. 

I like the idea, but I'm not convinced of the benefit of this path forward compared to other approaches. We've already had a lot of experiments in this area, many of which cost far more than $15,000; exciting marginal ones aren't obvious to me.

But I'd be up for more research to decide if things like that are the best way forward :)

> But I'd be up for more research to decide if things like that are the best way forward :)

 

And I'd be up for more experiments to see if this is a better way forward.

When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups. So at the very least I would use a different word for that, though I'm not sure which one. I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.

When I think about some of humanity's greatest advances in this area, I think of things like probability theory, causal inference, and expected values - things that I associate with academic departments of mathematics and economics (and not philosophy). This makes me wonder how nascent this really is?

> When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.

I tried to make it clear that I was referring to groups with the phrase "of humanity" (as in, humanity "as a whole"), but I can see how that could be confusing.

> the wisdom and intelligence[1] of humanity

 

> For those interested in increasing humanity’s long-term wisdom and intelligence[1]


> I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.

I imagine there's a lot of overlap. I'd also be fine with multiple prioritization research projects, but I think it's too early to decide that.

> This makes me wonder how nascent this really is?

I'm not arguing that there haven't been successes in this field (I think there's been a ton of progress over the last few hundred years, and that's terrific). I would argue, though, that there's been very little formal prioritization of such progress. Similar to how EA has helped formalize prioritization in global health and longtermism, we have yet to see similar efforts for "humanity's wisdom and intelligence".

I think that there are likely still strong marginal gains in at least some of the intervention areas.

[anonymous]

FYI, the link at the top of the post isn't working for me.

Fixed it. Looks like it was going to the edit-form version of the post on the EA Forum, which of course nobody but Ozzie has permission to see.

Ah, thanks!