CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops; why we do a lot of recruiting/education for AI alignment research, and yet wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

Hello, I am a CFAR contractor who considers nearly all of their job to be “original research into human rationality”. I don’t do the kind of research many people imagine when they hear the word “research” (RCT-style verifiable social science, and such). But I certainly do systematic inquiry and investigation into a subject in order to discover or revise beliefs, theories, applications, etc. Which is, you know, literally the dictionary.com definition of research.
I’m not very good at telling stories about myself, but I’ll attempt to describe what I do during my ordinary working hours anyway.
All of the time, I keep an eye out for things that seem to be missing or off in what I take to be the current art of rationality. Often I look to what I see in the people close to me, who are disproportionately members of rationality-and-EA-related organizations, watching how they solve problems and think through tricky stuff and live their lives. I also look to my colleagues at CFAR, who spend many many hours in dialogue with people who are studying rationality themselves, for the first time or on a continuing basis. But since my eyes are in my own head, I look most for what is absent in my own personal art of rationality.
For example, when I first read the Sequences in 2012 or 2013, I gained a lot, but I also felt a gaping hole in the shape of something like "recognizing those key moments in real-life experience when the rationality stuff you've thought so much about comes whizzing by your head at top speed, looking nothing at all like the abstractions you've so far considered". That's when I started doing stuff like snapping my fingers every time I saw a stop sign, so I could get a handle on what "noticing" even is, and begin to fill in the hole. I came up with a method of hooking intellectual awareness up to immediate experience; then I spent a whole year throwing the method at a whole bunch of real-life situations, keeping track of what I observed, revising the method, talking with people about it as they worked with the same problem themselves, and generally trying to figure out the shape of the world around phenomenology and trigger-action planning.
I was an occasional guest instructor with CFAR at the time, and I think that over the course of my investigations, CFAR went from spending very little time on the phenomenological details of key experiences to working that sort of thing into nearly every class. I think the art of rationality, as it exists today, now contains an "art of noticing".
My way of investigating always pushes into what I can’t yet see or grasp or articulate. Thus, it has the unfortunate property of being quite difficult to communicate about directly until the research program is mostly complete. So I can say a lot about my earlier work on noticing, but talking coherently about what exactly CFAR’s been paying me for lately is much harder. It’s all been the same style of research, though, and if I had to give names to my recent research foci, I’d say I’ve been looking into original seeing, some things related to creativity and unconstrained thought, something about learning and what it means to own your education, and experiences related to community and cooperation.
It’s my impression that CFAR has always had several people doing this kind of thing, and that several current CFAR staff members consider it a crucial part of their jobs as well. When I was hired, Tim described research as “the beating heart” of our organization. Nevertheless, I personally would like more of it in future CFAR, and I’d like it to be done with a bit more deliberate institutional support.
That’s why it was my primary focus when working with Eli to design our 2019 instructor training program. The program consisted partially of several weekend workshops, but in my opinion the most important part happened while everyone was at home.
My main goal, especially for the first weekend, was to help the trainees choose a particular area of study. It was to be something in their own rationality that really mattered to them and that they had not yet mastered. When they left the workshop, they were to set off on their own personal quest to figure out that part of the world and advance the art.
This attitude, which we’ve been calling “questing” of late, is the one with which I hope CFAR instructors will approach any class they intend to teach, whether it’s something like “goal factoring” that many people have taught in the past, or something completely new that nobody’s even tried to name yet. When you really get the hang of the questing mentality, you never stop doing original rationality research. So to whatever degree I achieved my goal with instructor training (which everyone seems to think is a surprisingly large degree), CFAR is moving in the direction of more original rationality research, not less.