We at Intentional Insights, the nonprofit devoted to promoting rationality and effective altruism to a broad audience, are finalizing our Theory of Change (a ToC conveys our goals, assumptions, methods, and metrics). Since there has recently been extensive discussion on LessWrong of our approach to promoting rationality and effective altruism to a broad audience, a discussion that was quite helpful for updating our thinking, I'd like to share our Theory of Change with you and ask for your feedback.


Here's the Executive Summary:

  • The goal of Intentional Insights is to create a world where everyone relies on research-based strategies to make wise decisions that lead to mutual flourishing.
  • To achieve this goal, we believe people need to be motivated to learn about such research-based strategies, have broadly accessible information about them, and integrate them into their daily lives through regular practice.
  • We assume that:
    • some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
    • problematic decision making undermines mutual flourishing in a number of life areas.
    • these flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
    • we can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.
  • Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.
  • Our outputs (what we do) come in the form of online content such as blog entries, videos, etc., on our own channels and in external publications, as well as collaborations with other organizations.
  • Our metrics of impact take the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the full version.


I'd appreciate any feedback on the full version from fellow Less Wrongers, on things like content, concepts, structure, style, grammar, etc. I look forward to updating the organization's goals, assumptions, methods, and metrics based on your thoughts. Thanks!


What you're doing is admirable, at least superficially; I've only looked at a portion of your posts here.

My main question is: how many people will get past the metaphysical "great barrier"? There is a reason the people here were drawn to LW in the first place. That is the "great barrier". There's also a reason (which I don't think is much different) that some people are drawn to professions where rationality can be a boon. I've recently read some of the early chapters of GEB, and I think the figure-and-ground idea illustrates this very well. Why are some people a theorem (figure) while others are not (ground)? Why do some people pass the "great barrier" (theorem) while others (nontheorem) do not?

Great question!

First, I want to be clear that our goal, as I described earlier, is not necessarily to bring people to Less Wrong. There are dangers of Endless September if we do that. Our primary goal is to spread rationality ideas to a broad audience. Doing so does not necessarily involve overcoming the "great barrier"; instead, it involves couching rationality in the language of science-based self-improvement, as I do in this article, which has been shared over 1,000 times.

This gets at a broader point: I think that rationality is a spectrum, in line with Keith Stanovich's research. Our aim, then, is to raise the rationality IQ of the population. The metaphor of a "great barrier" is thus not in line with the actual research on rationality and how it functions.

Now, to the question of Less Wrong. What we aim to do is gradually move people up levels of complexity, and eventually have some of those who choose to move up engage with Less Wrong. We don't assume that all, or most, or even 10% will do so, but some will. In fact, some have already started to engage with Less Wrong, reading the Sequences, etc. This happens only after they have received adequate training to help them cross the inferential gap.

Not everyone is interested in this level of highbrow engagement, and that's okay! As long as we raise the rationality IQ - the sanity waterline - we're doing what we set out to do.

The goal of Intentional Insights is to create a world where everyone relies on research-based strategies to make wise decisions that lead to mutual flourishing.

I'm not convinced there's enough high-quality research to guide people's decisions. Unfortunately (?), this means people have to fall back on logic, anecdotes, and personal experimentation, but if that's the best that's available, it's the rational choice.

I'm convinced that the research on decision-making we do have available, as summarized here and in other places, is better than relying on logic, anecdotes, and personal experimentation alone.

As a member of the Society for Judgment and Decision Making, I'm glad that more and more research is being produced. If you'll pardon a little plug, I'm especially proud that the society makes its journal freely available, so that anyone capable of reading such high-level content - which includes pretty much all active Less Wrongers - can access it without a paywall. I think these articles, and the book I linked to above, illustrate the availability of high-quality research. What Intentional Insights tries to do is take such content and spread it to a broad audience.

Now, is the research perfect? Nope. We're just at the start of exploring how our brains work, and there are plenty of reasons to be pessimistic. However, what resonates with me is taking what we do know and trying to popularize and improve it, so that we can use what we can to become the best we can be, within the limits of our current knowledge.