CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

I'm going to make a general point first, and then respond to some of your specific objections.
General point:
One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.
But there's a lot of that in the world, and most of it is not very reliable. My response was only to point at a heuristic that I use in assessing those bodies of knowledge, and in weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.
And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)
Specific points:
On the face of it, I wouldn't assume that it is reliable, but I don't have that strong a reason to assume that it isn't a priori.
A posteriori, my experience being in Circles is that there is sometimes an incentive to obscure what's happening for you in a circle, but that, at least with skilled facilitation, there is usually enough trust in the process that that doesn't happen. This is helped by the fact that there are many degrees of freedom in terms of one's response: I might say, "I don't want to share what's happening for me" or "I notice that I don't want to engage with that."
I could be typical minding, but I don't expect most people to lie outright in this context.
That seems like a reasonable hypothesis.
Not sure if it's a crux, insofar as if something works well in circling, you can intentionally import the circling context. That is, if you find that you can in fact transfer intuitions, process fears, track what's motivating a person, etc., effectively in the circling context, an obvious next step might be to try and do this on topics that you care about, within that context. e.g. Circles on X-risk.
In practice it seems to be a little bit of both: I've observed people build skills in circling, that they apply in other contexts, and also their other contexts do become more circling-y.
Sorry, I wasn't really trying to give a full response to your question, just dropping in with a little "here's how I do things."
You're referring to this question?
I expect there's some talking past each other going on, because this question seems surprising to me.
Um. I don't think there are examples of their output with regard to research or research intuitions. The Circlers aren't trying to do that, even a little. They're a funny subculture that engages a lot with an interpersonal practice, with the goals of fuller understanding of self and deeper connections with others (roughly; I'm not sure that they would agree that those are the goals).
But they do pass some of my heuristic checks for "something interesting might be happening here." So I might go investigate and see what skill there is over in there, and how I might be able to re-purpose that skill for other goals that I care about.
Sort of like (I don't know) if I were a biologist in an alternate world, and I had an inkling that I could do population simulations on a computer, but didn't know anything about computers. So I go look around and see who does seem to know about computers. And I find a bunch of hobbyists who are playing with circuits and making very simple video games, and have never had a thought about biology in their lives. I might hang out with these hobbyists and learn about circuits and making simple computer games, so that I can learn skills for making population simulations.
This analogy doesn't quite hold up, because it's easier to verify that the hobbyists are actually successfully making computer games, and to verify that their understanding of circuits reflects standard physics. The case of the Circlers is less clear-cut, because it is less obvious that they are doing anything real, and because their own models of what they are doing and how are a lot less grounded.
But I think the basic relationship holds up, noting that figuring out which groups of hobbyists are doing real things is much trickier.
Maybe to say it clearly: I don't think it is obvious, or a slam dunk, or definitely the case (and if you don't think so then you must be stupid or misinformed) that "Circling is doing something real." But also, I have heuristics that suggest that Circling is more interesting than a lot of woo.
In terms of evidence that make me think Circling is interesting (which again, I don't expect to be compelling to everyone):
I think all of the above are much weaker evidence than...
or even,
These days, I generally tend to stick to doing things that are concretely and fairly obviously (if only to me) having good immediate effects. If there aren't pretty immediate, obvious effects, then I won't bother much with it. And I don't think circling passes that bar (for me at least). But I do think there are plenty of reasons to be interested in circling, for someone who isn't following that heuristic strongly.
I also want to say, while I'm giving a sort-of-defense of being interested in circling, that I'm, personally, only a little interested.
I've done some ~1000 hours of Circling retreats, for personal reasons rather than research reasons (though admittedly the two are often entangled). I think I learned a few skills, which I could have learned faster if I had known what I was aiming for. My ability to connect / be present with (some) others improved a lot. I think I also damaged something psychologically, which took 6 months to repair.
Overall, I concluded it was fine, but I would have done better to train more specific and goal-directed skills like NVC. Personally, I'm more interested in other topics, and other sources of knowledge.