Here's our current working plan, the progress we've made so far, and the pieces we still need to hit.
Anecdata notwithstanding, the jury is still out on the workshops' usefulness to those who attend. During the very first minicamps (the current workshops are agreed to be better) we randomized admission, with 15 admitted applicants and 17 controls. Our study was low-powered, and effects on e.g. income would have needed to be very large for us to expect to detect them. Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later. We did detect statistically significant positive impacts on the standard BFI-10 survey pair for emotional stability - "I see myself as someone who is relaxed, handles stress well" / "I get nervous easily" (p = .002) - and on an abridged General Self-Efficacy Scale (sample item: "I can solve most problems if I invest the necessary effort") (p = .007). The details, including a much larger number of negative results, will be available soon on our blog. We'll run another RCT soon, funding permitting.
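To give a sense of how low-powered a 15-vs-17 comparison is, here's a quick back-of-the-envelope sketch. The alpha and power levels below are conventional defaults chosen for illustration, not the values from our actual analysis:

```python
# Rough minimum-detectable-effect-size (MDE) sketch for a two-arm study with
# 15 treated participants and 17 controls, using the usual normal approximation
# for a two-sample comparison. Illustrative only.
from math import sqrt
from statistics import NormalDist

def min_detectable_d(n1: int, n2: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest standardized effect (Cohen's d) detectable at the given
    two-sided alpha and power, via the normal approximation."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(1 / n1 + 1 / n2)

print(round(min_detectable_d(15, 17), 2))  # ≈ 0.99: only very large effects detectable
```

With roughly 16 people per arm, only effects of about a full standard deviation (Cohen's d ≈ 1.0) would be reliably detectable, which is why null results on income or happiness are, on their own, weak evidence.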
Epistemic rationality curricula
After a rocky start, our epistemic rationality curriculum has seen a number of recent victories. Our “Building Bayesian Habits” class began performing much better after we figured out how to help people notice their intuitive, “System 1” expectations of probabilities.
Our "inner simulator" class conveys the distinction between professed belief and actual anticipation while aiming at immediate, practical benefits: it isn't about religion and politics; it's about whether your mother will actually enjoy the potted plant you’re thinking of giving her. More generally, the epistemic rationality curriculum appears to be integrating deeply with the competence curriculum, and to be becoming more appealing to participants as it does so. Strengthening this curriculum, and building in real tests of its efficacy, will be a major focus in 2014.
Integrating with academic research
We made preliminary efforts in this direction - for example, by taking standard questionnaires from the academic literature, including Stanovich's indicators of the traits he calls “rationality”, and administering them to attendees at a Less Wrong meetup. (We found that meetup attendees scored near the ceiling, so we'll probably need new questionnaires with better discrimination.) Our research fellow, Dan Keys (whose master's thesis was on heuristics and biases), spends a majority of his time keeping up with the literature and integrating it with CFAR workshops, as well as designing tests for our ongoing forays into randomized controlled trials. We're particularly excited by Tetlock's Good Judgment Project, and we'll be piggybacking on it a bit to see if we can get decent ratings.
Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.
Nevertheless, the total set of people who can travel to a 4.5-day immersive workshop, and who can spend $3900 to do so, is limited. We want to eventually give a substantial skill-boost in a less expensive, more accessible format; we are slowly bootstrapping toward this.
- Shorter workshops: We’re working on shorter versions of our workshops (including three-hour and one-day courses) that can be given to larger sets of people at lower cost.
- College courses: We helped develop a course on rational thinking -- for UC Berkeley undergraduates, in partnership with Nobel Laureate Saul Perlmutter. We also brought several high school and university instructors to our workshop, to help seed early experimentation into their curricula.
- Increasing visibility: We’ve been working on increasing our visibility among the general public, with alumni James Miller and Tim Czech both working on non-fiction books that feature CFAR, and several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal.
In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.
The most powerful interventions are not one-off experiences; rather, they are the start of an ongoing practice. Changing one's social environment is one of the highest impact ways to create personal change
. Alum Paul Crowley writes that “The most valuable lasting thing I got out of attending, I think, is a renewed determination to continually up my game. A big part of that is that the minicamp creates a lasting community of fellow alumni who are also trying for the biggest bite of increased utility they can get, and that’s no accident.”
The goal is to create a community that is directly helpful for its members, and that simultaneously improves its members' impact on the world.
Progress to date
A strong set of seed alumni
We have roughly 350 alumni so far, including scientists from MIT and Berkeley, college students, engineers from Google and Facebook, founders of Y Combinator startups, teachers, professional writers, and the exceptionally gifted high-school students who participated in SPARC 2012 and 2013. (Not counted in that tally are the 50-some attendees of the 2013 Effective Altruism Summit, for whom we ran a free, abridged version of our workshop.)
There is an active alumni Google group, which gets daily traffic. Alumni use it to share useful life hacks they’ve discovered, help each other trouble-shoot, and notify each other of upcoming events and opportunities. We’ve also been using our post-workshop parties as reunions for alumni nearby (in the San Francisco Bay area, the New York City area, and -- in two months -- Melbourne, Australia).
In large part thanks to our alumni forum and the post-workshop party networking, there have already been numerous cases of alumni helping each other find jobs and collaborating on startups or other projects. There have also been several cases of alumni being recruited to do-gooding projects (e.g., MIRI and Leverage Research have engaged multiple alumni), and of alumni improving their “earn to give” ability or shifting their own do-gooding strategy.
Many alumni also take CFAR skills back to Less Wrong meet-ups or other local communities (for example, the effective-altruism meetup in Melbourne, a homeless youth shelter in Oregon, and a self-improvement group in NYC); many have also practiced them in their start-ups and with co-workers (for example, at Beeminder, MetaMed, and Aquahug).
We’d like the alumni community to have an accurate picture of how to effectively improve the world. We don’t want to try to figure out how to improve the world entirely from scratch. There are already a number of groups who’ve done a lot of good thinking on the subject, including some who call themselves "effective altruists", but also people who call themselves "social entrepreneurs", "x-risk minimizers", and "philanthropic foundations".
We aim to bring in the best thinkers and doers from all of these groups to seed the community with diverse good ideas on the subject. The goal is to create a culture rich enough that the alumni, as a community, can overcome any errors in CFAR’s founders’ perspectives. The goal is also to create a community that is defined by its pursuit of true beliefs, and that is not defined by any particular preconceptions as to what those beliefs are.
We use applicants’ inclination to do good as a major criterion for financial aid. Recipients of our informally-dubbed “altruism scholarships” have included members of the Future of Humanity Institute, CEA, Giving What We Can, MIRI, and Leverage Research. They also include many college or graduate students who have no official EA affiliation, but who are passionate about their desire to devote their career to world-saving (and who hope the workshops can help them figure out how to do so). And they include folks who are working full-time on varied do-gooding projects of broader origin, such as social entrepreneurs, someone working on community policing, and folks working at a major philanthropic foundation.
We'll be running our first international workshop in Australia, in February 2014, thanks to alumni Matt and Andrew Fallshaw.
Also, starting in 2014, we'll be bringing about 20 Estonian math and science award-winners per year to CFAR workshops, thanks to a 5-year pledge from Jaan Tallinn to sponsor workshop spots for leading students from his home country. Estonia is an EU member country with a population of 1.2 million and a high-technology economy; going forward, this may be our first opportunity to check whether network effects emerge when a relatively large fraction of a single stratum (here, Estonia's top math and science students) is trained.
Over 2014, a major focus will be improving opportunities for ongoing alumni involvement. If funding allows, we’ll also try our hand at pilot activities for meet-ups.
Specific plans include:
- A two-day "Epistemic Rationality and EA" mini-workshop in January, targeted at alumni;
- An alumni reunion this summer (a multi-day event drawing folks from our entire worldwide alumni community, unlike the alumni parties at each workshop);
- An alumni directory, as an attempt to increase business and philanthropic partnerships among alumni.
Our fixed expenses come to about $40k per month. In some detail:
- About $7k for our office space
- About $3k for miscellaneous expenses
- About $30k for salary & wages, going forward:
  - We have five full-time people on salary, each getting $3.5k per month gross. The employer portion of payroll taxes adds roughly an additional $1k/month per employee.
  - The remaining $7k or so goes to hourly employees and contractors. We have two roughly full-time hourly employees, and a few contractors who do website adjustment and maintenance, workbook compilation for workshops, and similarly targeted tasks.
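For readers who want to check the arithmetic, the rounded figures above add up as stated (this is a simple sketch using only the dollar amounts quoted in this post):

```python
# Sanity-check the rounded monthly budget figures quoted above.
office, misc = 7_000, 3_000
salaried = 5 * (3_500 + 1_000)   # five salaried staff: gross pay + employer payroll taxes
hourly_and_contract = 7_000      # hourly employees and contractors
salary_and_wages = salaried + hourly_and_contract

print(salary_and_wages)                  # 29500, i.e. "about $30k"
print(office + misc + salary_and_wages)  # 39500, i.e. "about $40k per month"
```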
In addition to our fixed expenses, we chose to run SPARC 2013, even though it would cause us to run out of money right around the end-of-year fundraising drive. We did so because we judged SPARC to be potentially very important, enough to justify the risk of leaning on this winter fundraiser to continue. All told, SPARC cost approximately $50k in direct costs (not counting staff time).
(We also chose to e.g. teach at the EA Summit, do rationality research, put some effort into curricula that can be delivered cheaply to a larger crowd, etc. These did not incur much direct expense, but did require staff time which could otherwise have been directed towards revenue-producing projects.)
Workshops are our primary source of non-donation income. We ran seven of them in 2013, and they became increasingly cash-positive through the year. We now expect a full 4-day workshop held in the Bay Area to give us a profit of about $25k (ignoring fixed costs, such as staff time and office rent), which is just under three weeks of CFAR runway. Demand isn't yet reliable enough to let us run them at that frequency. We've gained significant traction on building interest outside of the Less Wrong community, but there's still work to be done here, and that work will take time. In the meantime, workshops can subsidize some of our non-workshop activities, but not all of them. (Your donations do not go to subsidize workshops!)
We're also actively exploring revenue models other than the four-day workshop. Several of them look promising, but need time to come to fruition before the income they offer us is relevant.
CFAR received $166k in our previous fundraising drive at the start of 2013, and a smaller amount of donations spread across the rest of the year. SPARC was partially sponsored with $15k from Dropbox and $5k from Quixey. These donations subsidized SPARC, the rationality workshop at the EA summit, research and development, and core expenses and salary.
Savings and debt
Right now CFAR has essentially no savings. The savings we accumulated by the end of 2012 went to (a) feeding the gap between income and expenses and (b) funding SPARC.
A $30k loan, which helped us cover core 2013 expenses, comes due in March 2014.
If this winter fundraiser goes well, it will give us time to make some of our current experimental products mature. We think we have an excellent shot at making major strides forward in CFAR's mission as well as becoming much more self-sustaining during 2014.
If this winter fundraiser goes poorly, CFAR will not yet have sufficient funding to continue core operations.
How you can help
Our main goals in 2014:
- Building a scalable revenue base, including by ramping up our workshop quality, workshop variety, and marketing reach.
- Community-building, including an alumni reunion.
- Creating more connections with the effective altruism community, and other opportunities for our alumni to get involved in do-gooding.
- Research to feed back into our curriculum -- on the effectiveness of particular rationality techniques, as well as the long-term impact of rationality training on meaningful life outcomes.
- Developing more classes on epistemic rationality.
The three most important ways you can help:
If you’re considering donating but want to learn more about how CFAR uses money, or you have other questions or hesitations, let us know -- we’d be more than happy to chat with you via Skype. You can sign up for a one-on-one call with Anna here.
We’re actively seeking a new director of operations to organize our workshops; good operations can be a great multiplier on CFAR’s total ability to get things done. We are also continuing to try out exceptional candidates for a curriculum designer role.
And we always need more volunteers to help out with alpha-testing new classes in Berkeley, and to participate in online experiments.
We're continually searching for additional awesome people for our workshops. This really is a high-impact way you can help us; and we do have a large amount of data suggesting that you or your friends will be glad to have come. You can apply here -- it takes 1 minute, and leads to a conversation with Anna or Kenzi, which you'll probably find interesting whether or not you choose to come.
Like the open-source movement, applied rationality will be the product of thousands of individuals’ contributions. The ideas we've come up with so far are only a beginning. If you have other suggestions for people we should meet, other workshops we should attend, ways to branch out from our current business model, or anything else -- get in touch, we’d love to Skype with you.
You can also be a part of open-source applied rationality by creating good content for Less Wrong. Some of our best workshop participants, volunteers, hires, ideas for rationality techniques, use cases, and general inspiration have come from Less Wrong. Help keep the LW community vibrant and growing.
That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Many thanks to Matt Wage, Peter McCluskey, Benjamin Hoffman, Janos Kramar & Victoria Krakovna, Liron Shapira, Satvik Beri, Kevin Harrington, Jonathan Weissman, and Ted Suzman for together putting up $150k in matching funds. (Matt Wage, as mentioned, promises not only that he will donate if the pledge is matched, but also that he won't donate his $50k of matching funds to CFAR if the pledge isn't filled -- so your donation probably really does cause matching at the margin.)
This post was the result of a collaborative effort among Anna Salamon, Kenzi Amodei, Julia Galef, and “Valentine” Michael Smith - like many of our endeavors at CFAR, it went through many iterations, in many hands, to create an overall whole where the credit due is difficult to tease apart.
In the broadest sense, CFAR can be seen as a cognitive branch of effective altruism - making a marginal improvement to thinking where thinking matters a lot. MIRI did not gain traction until it began to include explicit rationality in its message - maybe because thinking about AI puts heavy loads on particular cognitive skills, though there are other hypotheses. Other branches of effective altruism may encounter their own problems with a heavy cognitive load. Effective altruism is limited in its growth by the supply of competent people who want to quantify the amount of good they do.
It has been true over the course of human history that improvements in world welfare have often been tied to improvements in explicit thinking skills, most notably with the invention of science. Even for someone who doesn't think that existential risk is the right place to look, trying to invest more in good reasoning, qua good reasoning - doubling down on the huge benefits which explicit cognitive skills have already brought humanity - is a plausible candidate for the highest-impact marginal altruism.
That is, we’ve had two years since our barest beginnings, when Anna, Julia, and Val began working together under the auspices of MIRI; and just over a year as a financially and legally independent organization.
Our pilot minicamps, prior to that October, gave us valuable data/iteration; but they did not pay for their own direct (room and board) costs, let alone for the staff time required.
I’m estimating quality by workshop participants’ feedback, here; it takes many fewer hours now for our instructors to create units that receive the same participant ratings as some older unit that hasn’t been revised (we did this accidental experiment several times). Unsurprisingly, large quantities of unit-design practice, with rapid iteration and feedback, were key to improving our curriculum design skills.
Interestingly, we threw away over a dozen versions of the Bayes class before we developed this one. It has proven somewhat easier to create curricula around strategicness, and around productivity/effectiveness more generally, than around epistemic rationality. The reason for the relative difficulty appears to be twofold. First, it is somewhat harder to create a felt need for epistemic rationality skills, at least among those who aren’t working on gnarly, data-sparse problems such as existential risk. Second, there is more existing material on strategicness than on epistemic rationality, and it is in general harder to create material from scratch than to adapt existing material. Nevertheless, we have, via much iteration, had some significant successes, including the Bayes class, separating professed beliefs from anticipated ones, and certain subskills of avoiding motivated cognition (e.g., noticing curiosity; noticing and tuning in to mental flinches). Better yet, there seems to be a pattern to these successes, and we are gradually getting the hang of it.
We’re excited that Ben Hoffman has pledged $23k of funding specifically to enable us to improve our epistemic rationality curriculum and our research plan.
From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had exceptional math-test results in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.
More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.
To those who’ve already applied: Thanks very much for applying; and our apologies for not getting back to you so far. If the funding drive is filled (so that we can afford to possibly hire someone new), we’ll be looking through the applications shortly after the drive completes and will get back to you then.