CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops. Why we do a lot of recruiting/education for AI alignment research, and why we wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do on the fly in a non-bureaucratic way), and we may give more than one answer to questions where our individual viewpoints differ!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

Re: 1—“Forked codebases that have a lot in common but are somewhat tricky to merge” seems like a pretty good metaphor to me.
The question near yours that I'd most like to answer is: "What is the minimal patch/bridge that will let us use all of both codebases without running into merge conflicts?"
We do have a candidate answer to this question, which we’ve been trying out at AIRCS to reasonable effect. Our candidate answer is something like: an explicit distinction between “tacit knowledge” (inarticulate hunches, early-stage research intuitions, the stuff people access and see in one another while circling, etc.) and the “explicit” (“knowledge” worthy of the name, as in the LW codebase—the thing I believe Ben Pace is mainly gesturing at in his comment above).
Here’s how we explain it at AIRCS:
In the context of the AIRCS workshop, we share "The Tacit and the Explicit" in order to avoid two different kinds of errors: mistaking tacit intuitions for explicit knowledge, and dismissing tacit intuitions because they are not yet explicit knowledge.
We find ourselves needing to fix both those errors in order to allow people to attempt grounded original thinking about AI safety. They need to be able to have intuitions, and take those intuitions seriously enough to develop them / test them / let them breathe, without mistaking those intuitions for knowledge.
So, at the AIRCS workshop, we introduce the explicit (which, actually, is a big part of what I take Ben Pace to be gesturing at above) at the same time that we introduce the tacit (which is, IMO, the thing Ben Pace describes benefiting from at CFAR). And we introduce a framework to try to keep them separate, so that learning cognitive processes that help with the tacit will not accidentally mess with folks' explicit, nor vice versa. (We've been introducing this framework at AIRCS for about a year, and I do think it's been helpful. I think it's getting to the point where we could try writing it up for LW—i.e., putting the framework more fully into the explicit.)