CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops; why we do a lot of recruiting/education for AI alignment research, and yet wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:
But are they masters?
Note the structure of your argument (which structure I have seen repeated quite a few times, in discussions of this and related topics, including in other sub-threads on this post). It goes like this:
1. There is a process P which purports to output X.
2. On the basis of various considerations, I expect that process P does indeed output X, and indeed that process P is very good at outputting X.
3. …
4. I now conclude that process P does output X, and does so quite well.
5. Having thus concluded, I will now adopt process P (since I want X).
But there’s a step missing, you see. Step 3 should be:

3. I examine the actual output of process P, and verify that it is, in fact, X.
So, in this case, you have marshaled certain considerations—
—and on the basis of this thinking, reasoning, imagining, and seeming, have concluded, apparently, that people who’ve done a lot of Circling are “masters” in the domain of having “good intuitions about what other people are feeling at any given moment”.
But… are they? Have you checked?
Where is the evidence?