CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR.
- Whether we should change our name to be less general.
- How running mainline CFAR workshops does and doesn't relate to running "AI Risk for Computer Scientists" type workshops, and why we do a lot of recruiting/education for AI alignment research yet wouldn't be happy doing only that.
- How our curriculum has evolved, how it relates to and differs from the Less Wrong Sequences, and where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's.
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

More about why CFAR would be non-functional if it weren’t dogfooding:
As I said, my thoughts aren't really in such a state that I know how to communicate them coherently. But I've often found that going ahead and communicating incoherently can nevertheless be valuable; it lets people's implicit models interact more rapidly (both between people and within individuals), which can lead to explicit models that would otherwise have stayed tacit.
So, when I find myself in this position, I often throw a creative prompt to the part of my brain that thinks it knows something, without bothering to be coherent, just to start drawing out the shape of the thing. For example: if CFAR were a boat, what sort of boat would it be?
If CFAR were a boat, it would be a collection of driftwood bound together with twine. Each piece of driftwood was yanked from the shore in passing when the boat managed to get close enough for someone to pull it in. The riders of the boat are constantly re-organizing the driftwood (while standing on it), discarding parts (both deliberately and accidentally), and trying out variations on rudders and oars and sails. All the while, the boat is approaching a waterfall, and in fact the riders are not trying to make a boat at all, but rather an airplane.
The CFAR techniques are first of all the driftwood pieces themselves, and are also ways of balancing atop something with no rigid structure, of noticing when the raft is taking on water, of coordinating about which bits of driftwood ought to be tied to which other bits, and of continuing to try to build a plane when you’d rather forget the waterfall and go for a swim.
If I had to guess, that's an impressionistic painting of my concepts around an organization that wants to bootstrap an entire community into being equal to the maybe-impossible task of thinking well enough to survive x-risk.
This need to quickly bootstrap patterns of thought and feeling, not just of individual humans but of far-flung assortments of people, is what makes CFAR’s problem so hard, and its meager success thus far so impressive to me. It doesn’t have the tools it needs to efficiently and reliably accomplish the day-to-day tasks of navigation and not sinking and so forth, so it tries to build them by whatever means it can manage in any given moment.
It’s a shitty boat, and an even shittier plane. But if everyone on it were just passively riding the current, rather than constantly trying to build the plane and fly, the whole thing would sink well before it reached the waterfall.