CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops; why we do a lot of recruiting/education for AI alignment research but wouldn't be happy doing only that;
- How our curriculum has evolved; how it relates to and differs from the Less Wrong Sequences; and where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

Ben Pace writes:
The “many alumni haven't read the Sequences” part has actually been here since very near the beginning (not the initial 2012 minicamps, but the very first paid workshops of 2013 and later). (CFAR began in Jan 2012.) You can see it in our old end-of-2013 fundraiser post, where we wrote: “Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.” We didn't name this explicitly in that post, but part of the hope was to get the workshops to work for a slightly larger/broader/more cognitively diverse set than the set for whom the original Sequences, in their written form, tended to spontaneously "click".
As to the “aren’t part of this shared epistemic framework” -- when I go to e.g. the alumni reunion, I do feel there are at least basic pieces of this framework that I can rely on. For example, even on contentious issues, 95%+ of alumni reunion participants seem to me to be pretty good at remembering that arguments should not be like soldiers, that beliefs are for true things, etc. -- there is, to my eyes, a very noticeable positive difference between the folks at the alumni reunion and, say, smart STEM graduate students unselected for rationality (though STEM graduate students are also notably more skilled than the general population at this, and though both groups fall short of perfection).
Still, I agree that it would be worthwhile to build more common knowledge and [whatever the “values” analog of common knowledge is called] supporting “a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on” -- models that are piecewise-checkable (rather than opaque masses of skills that are useful as a mass but hard to build across people and time). This particular piece of culture is harder to teach to folks who are seeking individual utility, because the most obvious payoffs are at the level of the group and of the long-term process, rather than at the level of the individual (where the payoffs to e.g. goal-factoring and murphyjitsu are located). It also pays off more in later-stage fields and less in the earliest stages of science within pre-paradigm fields such as AI safety, where the work is often about shower thoughts and slowly following inarticulate hunches. But still.