CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops. Why we both do a lot of recruiting/education for AI alignment research and also wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

I feel like one of the most valuable things we have on LessWrong is a broad, shared epistemic framework: ideas with which we can take steps through concept-space together and reach important conclusions more efficiently than other intellectual spheres (e.g. ideas about decision theory, ideas about overcoming coordination problems). I believe all of the founding staff of CFAR had read The Sequences, were versed in things like what it means to ask where you got your bits of evidence from and that correctly updating on the evidence has a formal meaning, and had absorbed a model of Eliezer's law-based approach to reasoning about your mind and the world.
In recent years, when I've been at CFAR events, I've generally felt like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, don't have an understanding of that law-based approach, and don't have a felt need to cash out their models of the world into explicit reasoning and communicable models that others can build on. I've also felt this way increasingly about CFAR staff over the years (e.g. it's not clear to me whether all current CFAR staff have read The Sequences). And to be clear, I think if you don't have a shared epistemic framework, you often just can't talk to each other very well about things that aren't highly empirical, certainly not at the scale of more than 10-20 people.
So I've been pretty confused about why Anna and other staff haven't seemed to think this is very important when designing the intellectual environment at CFAR events. How do you think about this?
I certainly think a lot of valuable introspection and modelling work still happens at CFAR events; I know I personally find it useful, and I think that e.g. CFAR has done a good job of stealing useful things from the circling people (I wrote about my positive experiences circling here). But my sense for a number of the attendees is that even if they keep introspecting and finding out valuable things about themselves, five years from now they will not have anything to add to our collective knowledge-base (e.g. by writing a LW sequence that LWers can understand and get value from), even for a LW audience that considers all Bayesian evidence admissible no matter how weird or unusual, because they were never trying to think in a way that could be communicated in that fashion. The Gwerns and the Wei Dais and the Scott Alexanders of the world won't have learned anything from CFAR's exploration.
As an example of this, Val (who was a cofounder but no longer works at CFAR) seemed genuinely confused when Oli asked for third-party verifiable evidence for the success of Val's ideas about introspection. Oli explained that there was a lemons problem (i.e. information asymmetry) when Val claimed that a mental technique had changed his life radically while all of the evidence he offered was of the kind "I feel so much better" and "my relationships have massively improved" and so on. (See Scott's Review of All Therapy Books for more of what I mean here, though I think this is a pretty standard idea.) Val seemed not to understand why Oli was asking for third-party verifiable evidence, and seemed genuinely surprised that claims like "This last September, I experienced enlightenment. I mean to share this as a simple fact to set context" would be met with a straight "I don't believe you." This was really worrying to me, and it's always been surprising to me that this part of him fit naturally into CFAR's environment and that CFAR's natural antibodies weren't kicking against it hard.
To be clear, I think several of Val's posts in that sequence were pretty great (e.g. The Intelligent Social Web is up for the 2018 LW review, and you can see Jacob Falkovich's review on how the post changed his life), and I've personally had some very valuable experiences with Val at CFAR events. But I expect, had he continued in this vein at CFAR, that over time Val would just have stopped being able to communicate with LWers, drifted into his own closed epistemic bubble, and to a substantial degree pulled CFAR with him. I feel similarly about many attendees at CFAR events, although fewer since Val left. I never talked to Pete Michaud very much, and while I think he seemed quite emotionally mature (I mean that sincerely), he seemed primarily interested in things to do with authentic relating and circling, and again I didn't get many signs that he understood why building explicit models or a communal record of insights and ideas was important. Because of this, it was really weird to me that he was executive director for a few years.
To put it another way, I feel like CFAR has in some ways given up on the goals of science and moved toward the goals of a private business: you do some really valuable things yourself while you're around, and create a lot of value, but all the knowledge you gain about building a company, about your market, about markets in general, and more, isn't very communicable, and isn't passed on in the public record for other people to build on. (Compare how scientists race to be first to put their ideas into the public domain, whereas Apple primarily makes everyone sign NDAs and lets out no information other than its actual products; I expect Apple will take most of its insights to the grave.)
Coming back to this, I think I would describe it as "they seemed like they were actually paying attention", which was so unusual as to be noteworthy.