CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops, and why we do a lot of recruiting/education for AI alignment research but wouldn't be happy doing only that;
- How our curriculum has evolved; how it relates to and differs from the Less Wrong Sequences; and where we hope to take our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (individual speech is often more interesting than institutional speech, and certainly easier to do on the fly in a non-bureaucratic way), and we may give more than one answer to questions where our individual viewpoints differ!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

I think a crisp summary here is: CFAR is in the business of helping create scientists, more than the business of doing science. Some of the things it makes sense to do to help create scientists look vaguely science-ish, but others don't. And this sometimes causes people to worry (understandably, I think) that CFAR isn't enthused about science, or doesn't understand its value.
But if you're looking to improve a given culture, one natural move is to explore that culture's blindspots. And I think exploring those blindspots is often not going to look like an activity typical of that culture.
An example: there's a particular bug I encounter extremely often at AIRCS workshops, but rarely at other workshops. I don't yet feel like I have a great model of it, but it has something to do with not fully understanding how words have referents at different levels of abstraction. It's the sort of confusion that I think reading A Human's Guide to Words often resolves in people.
When I encounter this bug I often point to a nearby tree, and start describing it at different levels of abstraction. The word "tree" refers to a bunch of different related things: to a member of an evolutionarily-related category of organisms, to the general sort of object humans tend to emit the phonemes "tree" to describe, to this particular mid-sized physical object here in front of us, to the particular arrangement of particles that composes the object, etc. And it's sensible to use the term "tree" anyway, as long as you're careful to track which level of abstraction you're referring to with a given proposition—i.e., as long as you're careful to be precise about exactly which map/territory correspondence you're asserting.
This is obvious to most science-minded people. But it's often less obvious that the same procedure, with the same carefulness, is needed to sensibly discuss concepts like "goal" and "good." Just as it doesn't make sense to discuss whether a given tree is "strong" without distinguishing between "in terms of its likelihood of falling over" or "in terms of its molecular bonds," it doesn't make sense to discuss whether a goal is "good" without distinguishing between e.g. "relative to societal consensus" or "relative to your current preferences" or "relative to the preferences you might come to have given more time to think."
This conversation often seems to help resolve the confusion. At some point, I may design a class about this, so that more such confusions can be resolved. But I expect that if I do, some of the engineers in the audience will get nervous, since it will look an awful lot like a philosophy class! (I already get this objection regularly one-on-one.) That is, I expect some may wonder whether the AIRCS staff, who claim to be running workshops for engineers, are actually more enthusiastic about philosophy than engineering.
We're not. Academic philosophy, at least, strikes me as an unusually unproductive field with generally poor epistemics. I don't want to turn the engineers into philosophers—I just want to use a particular helpful insight from philosophy to patch a bug which, for whatever reason, seems to commonly afflict AIRCS participants.
CFAR faces this dilemma a lot. For example, for a while we spent a bunch of time circling, and this made many rationalists nervous: was CFAR as an institution, which claimed to be running workshops for science-minded, sequences-reading, law-based-reasoning-enthused rationalists, actually more enthusiastic about woo-laden authentic relating games?
We weren't. But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds. This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally. So we looked to specialists in extrospection for a patch.