CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).

Topics that may be interesting include (but are not limited to):

  • Why we think there should be a CFAR;
  • Whether we should change our name to be less general;
  • How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that.
  • How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.

Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!

(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)

[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

We run the Center for Applied Rationality, AMA

I feel like one of the most valuable things we have on LessWrong is a broad, shared epistemic framework, ideas with which we can take steps through concept-space together and reach important conclusions more efficiently than other intellectual spheres e.g. ideas about decision theory, ideas about overcoming coordination problems, etc. I believe all of the founding staff of CFAR had read the sequences and were versed in things like what it means to ask where you got your bits of evidence from, that correctly updating on the evidence has a formal meaning, and had absorbed a model of Eliezer's law-based approach to reasoning about your mind and the world.

In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have an understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on. I also have felt this way increasingly about CFAR staff over the years (e.g. it's not clear to me whether all current CFAR staff have read The Sequences)

... (read more)

This is my favorite question of the AMA so far (I said something similar aloud when I first read it, which was before it got upvoted quite this highly, as did a couple of other staff members). The things I personally appreciate about your question are: (1) it points near a core direction that CFAR has already been intending to try moving toward this year (and probably across near-subsequent years; one year will not be sufficient); and (2) I think you asking it publicly in this way (and giving us an opportunity to make this intention memorable and clear to ourselves, and to parts of the community that may help us remember) will help at least some with our moving there.

Relatedly, I like the way you lay out the concepts.

Your essay (I mean, “question”) is rather long, and has a lot of things in it; and my desired response sure also has a lot of things in it. So I’m going to let myself reply via many separate discrete small comments because that’s easier.

(So: many discrete small comments upcoming.)

Ben Pace writes:

In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have any understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on.

The “many alumni haven't read the Sequences” part has actually been here since very near the beginning (not the initial 2012 minicamps, but the very first paid workshops of 2013 and later). (CFAR began in Jan 2012.) You can see it in our old end-of-2013 fundraiser post, where we wrote “Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.” We didn't name this explicitly in that post, but part of the hope was to get the workshops to work for a slightly larger/broader/more cognitively diverse set than t

... (read more)
"... that CFAR's natural antibodies weren't kicking against it hard."

Some of them were. This was a point of contention in internal culture discussions for quite a while.

(I am not currently a CFAR staff member, and cannot speak to any of the org's goals or development since roughly October 2018, but I can speak with authority about things that took place from October 2015 up until my departure at that time.)

Yeah, I predict that if one showed Val or Pete the line about fitting naturally into CFAR’s environment without triggering antibodies, they would laugh hard and despairingly. There was definitely friction.

What did this friction lead to (what changes in CFAR’s output, etc.)?

Ben Pace writes:

“... The Gwerns and the Wei Dais and the Scott Alexanders of the world won't have learned anything from CFAR's exploration.”

I’d like to distinguish two things:

  1. Whether the official work activities CFAR staff are paid for will directly produce explicit knowledge in the manner valued by the Gwerns etc.
  2. Whether that CFAR work will help educate people who later produce explicit knowledge themselves in the manner valued by the Gwerns etc., and who wouldn't have produced that knowledge otherwise.

#1 would be useful but isn't our primary goal (though I think we've done more than none of it). #2 seems like a component of our primary goal to me (“scientists”, or folks who can make knowledge in this sense, aren't all we're trying to produce, but they're part of it), and is part of what I would like to see us strengthen over the coming years.

To briefly list our situation with respect to whether we are accomplishing #2 (according to me):

  • There are in fact a good number of AI safety scientists in particular who seem to me to produce knowledge of this type, and who give CFAR some degree of credit for their present tendency to do this.
  • On a milder level, while CFAR work
... (read more)

I think a crisp summary here is: CFAR is in the business of helping create scientists, more than the business of doing science. Some of the things it makes sense to do to help create scientists look vaguely science-ish, but others don't. And this sometimes causes people to worry (understandably, I think) that CFAR isn't enthused about science, or doesn't understand its value.

But if you're looking to improve a given culture, one natural move is to explore that culture's blindspots. And I think exploring those blindspots is often not going to look like an activity typical of that culture.

An example: there's a particular bug I encounter extremely often at AIRCS workshops, but rarely at other workshops. I don't yet feel like I have a great model of it, but it has something to do with not fully understanding how words have referents at different levels of abstraction. It's the sort of confusion that I think reading A Human's Guide to Words often resolves in people, and which results in people asking questions like:

  • "Should I replace [my core goal x] with [this list of "ethical" goals I recently heard about]?"
  • "Why is the fact that I have a goal a good reason to optimize for it?"
  • "Are p
... (read more)

I felt a "click" in my brain reading this comment, like an old "something feels off, but I'm not sure what" feeling about rationality techniques finally resolving itself.

If this comment were a post, and I were in the curating-posts business, I'd curate it. The demystified concrete examples of the mental motion "use a tool from an unsciencey field to help debug scientists" are super helpful.

Just want to second that I think this comment is particularly important. There's a particular bug where I can get inoculated to a whole class of useful rationality interventions that don't match my smell for "rationality intervention", but the whole reason they're a blindspot in the first place is because of that smell... or something.

I feel like this comment should perhaps be an AIRCS class -- not on meta-ethics, but on 'how to think about what doing debugging your brain is, if your usual ontology is "some activities are object-level engineering, some activities are object-level science, and everything else is bullshit or recreation"'. (With meta-ethics addressed in passing as a concrete example.)

(To be clear the above is an account of why I personally feel excited about CFAR having investigated circling. I think this also reasonably describes the motivations of many staff, and of CFAR's behavior as an institution. But CFAR struggles with communicating research intuitions, too; I think in this case these intuitions did not propagate fully among our staff, and as a result that we did employ a few people for a while whose primary interest in circling seemed to me to be more like "for its own sake," and who sometimes discussed it in ways which felt epistemically unhealthy to me. I think people correctly picked up on this as worrying, and I don't want to suggest that didn't happen; just that there is, I think, a sensible reason why CFAR as an institution tends to investigate local blindspots by searching for non-locals with a patch, thereby alarming locals about our epistemic allegiance).

Thanks, that was really helpful. I continue to have a sense of disagreement that this is the right way to do things, so I’ll try to point to some of that. Unfortunately my comment here is not super focused, though I am just trying to say a single thing.

I recently wrote down a bunch of my thoughts about evaluating MIRI, and I realised that I think MIRI has gone through alternating phases of internal concentration and external explanation, in a way that feels quite healthy to me.

Here is what I said:

In the last 2-5 years I endorsed donating to MIRI (and still do), and my reasoning back then was always of the type "I don't understand their technical research, but I have read a substantial amount of the philosophy and worldview that was used to successfully pluck that problem out of the space of things to work on, and think it is deeply coherent and sensible and it's been surprisingly successful in figuring out that AI is an x-risk, and I expect to find it is doing very sensible things in places I understand less well." Then, about a year ago, MIRI published the Embedded Agency sequence, and for the first time I thought "Oh, now I feel like I have an understanding of what the technical research

... (read more)

Ben, to check before I respond—would a fair summary of your position be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"

Philosophy strikes me as, on the whole, an unusually unproductive field full of people with highly questionable epistemics.

This is kind of tangential, but I wrote Some Thoughts on Metaphilosophy in part to explain why we shouldn't expect philosophy to be as productive as other fields. I do think it can probably be made more productive, by improving people's epistemics, their incentives for working on the most important problems, etc., but the same can be said for lots of other fields.

I certainly don’t want to turn the engineers into philosophers

Not sure if you're saying that you personally don't have an interest in doing this, or that it's a bad idea in general, but if the latter, see Counterintuitive Comparative Advantage.

I have an interest in making certain parts of philosophy more productive, and in helping some alignment engineers gain some specific philosophical skills. I just meant I'm not in general excited about making the average AIRCS participant's epistemics more like that of the average professional philosopher.

I was preparing to write a reply to the effect of “this is the most useful comment about what CFAR is doing and why that’s been posted on this thread yet” (it might still be, even)—but then I got to the part where your explanation takes a very odd sort of leap.

But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds.

It’s entirely unclear to me what this means, or why it is necessary / desirable. (Also, it seems like you’re using the term ‘extrospection’ in a quite unusual way; a quick search turns up no hits for anything like the definition you just gave. What’s up with that?)

This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally.

There… seems to be quite a substantial line of reasoning hidden here, but I can’t guess what it is. Could you elaborate?

So we looked to specialists in extrospection for a patch.

Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games”

... (read more)
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)

I'm speaking for myself here, not any institutional view at CFAR.

When I'm looking at maybe-experts, woo-y or otherwise, one of the main things that I'm looking at is the nature and quality of their feedback loops.

When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason "well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct." This doesn't seem that far off from what Circling is. (For instance, "I have a story that you're feeling defensive" -> "I don't feel defensive, so much as righteous. And...There's a flowering of heat in my belly.")

Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.

This makes it se... (read more)

When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.

(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)

Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:

This makes it seem worthwhile training with

... (read more)

I'm going to make a general point first, and then respond to some of your specific objections.

General point:

One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.

But there's a lot of that in the world, and most of it is not very reliable. My response only points at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.

And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)

Specific points:

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.

On the face of it, I wouldn't assume that it is reliable, but I don't have that strong a reason to assume that i... (read more)

Some sampling of things that I'm currently investigating / interested in (mostly not for CFAR), and sources that I'm using:

  • Power and propaganda
    • reading The Dictator's Handbook and some of the authors' other work.
    • reading Kissinger's books
    • rereading Samo's draft
    • some "evil literature" (an example of which is "things Brent wrote")
    • thinking and writing
  • Disagreement resolution and conversational mediation
    • I'm currently looking into some NVC materials
    • lots and lots of experimentation and iteration
  • Focusing, articulation, and aversion processing
    • Mostly iteration with lots of notes.
    • Things like PJ EBY's excellent ebook.
    • Reading other materials from the Focusing institute, etc.
  • Ego and what to do about it
    • Byron Katie's The Work (I'm familiar with this from years ago, it has an epistemic core (one key question is "Is this true?"), and PJ EBY mentioned using this process with clients.)
    • I might check out Eckhart Tolle's work again (which I read as a teenager)
  • Learning
    • Mostly iteration as I learn things on the object level, right now, but I've read a lot on deliberate practice, and study methodology, as well as learned g
... (read more)
Howie Lempel:
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.

FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to? The best links I could find quickly were:

  • You, Version 2.0
  • A Minute to Unlimit You
Eli Tyre:
A Minute to Unlimit You
I think I also damaged something psychologically, which took 6 months to repair.

I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.

I expect, though, that this is too sensitive/personal so please feel free to ignore.

Eli Tyre:
It's not sensitive so much as context-heavy, and I don't think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people's experiences of things like Circling better.
Eli Tyre:
Oh, and as a side note: I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (but without any idea what), and then found out subsequently that the person I was talking with had done a lot of circling. The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills. I would be hard pressed to say exactly what was interesting about those conversations: something like "the way they were asking questions was...something. Probing? Intentional? Alive?" Those words really don't capture it, but whatever was happening I had a detector that pinged "something about this situation is unusual."
Eli Tyre:
Coming back to this, I think I would describe it as "they seemed like they were actually paying attention", which was so unusual as to be noteworthy.

Said, I appreciate your point that I used the term "extrospection" in a non-standard way—I think you're right. The way I've heard it used, which is probably idiosyncratic local jargon, is to reference the theory of mind analog of introspection: "feeling, yourself, something of what the person you're talking with is feeling." You obviously can't do this perfectly, but I think many people find that e.g. it's easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about "mirror neurons," to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.

Similarly, I think it's often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one's own brain. Personally, I've found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people's vague, sub-verbal curiosities and intuitions about e.g. "which questions are most worth asking."

Circlers don't generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.

Said Achmiz:
Hmm. I see, thanks.

Now, you say “You obviously can’t do this perfectly”, but it seems to me a dubious proposition even to suggest that anyone (to a first approximation) can do this at all. Even introspection is famously unreliable; the impression I have is that many people think that they can do the thing that you call ‘extrospection’[1], but in fact they can do no such thing, and are deluding themselves. Perhaps there are exceptions—but however uncommon you might intuitively think such exceptions are, they are (it seems to me) probably a couple of orders of magnitude less common than that.

Do you have any data (other than personal impressions, etc.) that would show or even suggest that this has any practical effect? (Perhaps, examples / case studies?)

[1] By the way, it seems to me like coming up with a new term for this would be useful, on account of the aforementioned namespace collision.
Adam Scholl:
Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper, that might be a good time to dive in further.
ChristianKl:
To do good UX you need to understand the mental models that your users have of your software. You can do that by doing a bunch of explicit A/B tests, or you can do that by doing skilled user interviews. A person who doesn't do skilled user interviews will project a lot of their own mental models of how the software is supposed to work onto users who might have other mental models.

There are a lot of things about how humans relate to the world around them that they normally don't share with other people. People with a decent amount of self-awareness know how they reason, but they don't know how other people reason at the same level. Circling is about creating an environment where things can be shared that normally aren't. While it would be theoretically possible that people lie, it feels good to share one's intimate experience in a safe environment and be understood.

At one LWCW where I led two circles, there was a person who was in both and who afterwards said, about two cases, "I thought I was the only person who does X, and now I know that other people also do X".
Said Achmiz:
Do you claim that people who have experience with Circling, are better at UX design? I would like some evidence for this claim, if so.
ChristianKl:
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling. As far as the claim about getting better at UX design goes: it applies to the UX of things where mental habits matter a lot. It's not as relevant to where you place your buttons, but it's very relevant to designing mental interventions in the style that CFAR does. Evidence is great, but we have few controlled studies of Circling.
Said Achmiz:
This is not an interesting claim. Ok, it’s ‘very similar’. And what of it? What follows from this similarity? What can we expect to be the case, given this? Does skill at Circling transfer to skill at conducting user interviews? How, precisely? What specific things do you expect we will observe?

So… we don’t have any evidence for any of these claims, in other words?

I don’t think I quite understand what you’re saying, here (perhaps due to a typo or two). What does the term ‘UX’ even mean, as you are using it? What does “designing mental interventions” have to do with UX?
Matt Goldenberg:
Not a CFAR staff member, but particularly interested in this comment.

One way to frame this would be getting really good at learning tacit knowledge. One way would be to interact with one of them, notice "hey, this person is really good at this" and then inquire as to how they got so good. This is my experience with seasoned authentic relaters. Another way would be to realize there's a hole in understanding related to intuitions, and then start searching around for "people who are claiming to be really good at understanding others' intuitions"; this might lead you to running into someone as described above and then seeing if they are indeed good at the thing.

Let's say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:

  1. A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
  2. A newbie designer who hasn't done that.

To me, that's the obvious use case for circling. I think there's also a bunch of obvious benefits on a group level to being able to relate to people better as well.
Said Achmiz:
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”? In fact, is there any reason to believe that being “really good at learning tacit knowledge” is a thing?

Hmm, so in your experience, “seasoned authentic relaters” are really good at “simulating the felt senses of their conversational partners in their own minds”—is that right? If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?

Can you say more about how you came to realize this?

Well, my first step would be to stop wanting that, because it is not a sensible (or, perhaps, even coherent) thing to want. However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
Matt Goldenberg:
This requires some model of how intuitions work. One model I like to use is to think of "intuition" as a felt sense or aesthetic that relates to hundreds of little associations you're picking up from a particular situation. If I'm quickly able, in my mind, to get a sense for what it feels like for you (i.e. get that same felt sense or aesthetic feel when looking at what you're looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.

Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.

I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I'm not sure how CFAR recognized it.

I think this is a coherent stance if you think the general "learning intuitions" skill is impossible. But imagine it weren't: would you agree that training it would be useful?
Said Achmiz:
Hmm. It’s possible that I don’t understand what you mean by “felt sense”. Do you have a link to any discussion of this term / concept?

That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions? In other words, “my model of intuitions predicts X” is not a sufficient reason to believe X, unless those predictions have been borne out somehow, or the model validated empirically, or both. As always, some examples would be useful.

It is not clear to me whether this would be evidence (in the strict Bayesian sense); is it more likely that the people from whom I have heard such things would make these claims if they were true than otherwise? I am genuinely unsure, but even if the answer is yes, the odds ratio is low; if evidence, it’s a very weak form thereof. Conversely, if this sort of thing is the only form of evidence put forth, then that itself is evidence against, as it were!

Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized” would be “let’s try to understand intuition, via circling etc.” rather than “let’s develop intuitions, via deliberate practice, whereupon the results will speak for themselves, and this will also lead to improved understanding”. (Corollary question: have the efforts made toward understanding intuitions yielded an improved emphasis on deliberate practice, and have the results thereof been positive and obvious?)

Indeed, I would, but notice that what you’re asking is different than what you asked before. In your earlier comment, you asked whether I would find it useful (in the hypothetical “newbie designer
Matt Goldenberg:
The Wikipedia article for Gendlin's Focusing has a section trying to describe felt sense. Taking out the specific part about "the body", the first part says: "Gendlin gave the name 'felt sense' to the unclear, pre-verbal sense of 'something'—the inner knowledge or awareness that has not been consciously thought or verbalized", which is fairly close to my use of it here.

One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.

I do believe CFAR at one point was teaching deliberate practice and calling it "turbocharged training". However, if one is really interested in intuition and thinks it's useful, the next obvious step is to ask "ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?"

Good catch, this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn't relate to intuition, or if it's impossible for one human being to get better at feeling what another is feeling, then these are not equivalent. I happen to think both are true.
Said Achmiz:
I see, thanks. Well, my question stands.

That is a prediction, sure (if a vague one), but now how do we test it? What concrete observations would we expect, and which are excluded, etc.? What has actually been observed? I’m talking specifics, now; data or case studies—but in any case very concrete evidence, not generalities!

Yes… perhaps this is true. Yet in this case, we would expect to continue to use the available instruments (however blunt they may be) until such time as sharper tools are (a) available, and (b) have been firmly established as being more effective than the blunt ones. But it seems to me like neither (a) (if I’m reading your “at one point” comment correctly), nor (b), is the case here?

Really, what I don’t think I’ve seen, in this discussion, is any of what I, in a previous comment, referred to as “the cake”. This continues to trouble me!
Matt Goldenberg:
I suspect the CFARians have more delicious cake for you, as I haven't put that much time into circling, and the related connection skills I worked on more than a decade ago have atrophied since. Things I remember:

  • much quicker connection with people
  • there were a few things, like exercise, that I wasn't passionate about but wanted to be; after talking with people who were passionate I was able to become passionate myself about those things
  • I was able to more quickly learn social cognitive strategies by interacting with others who had them
philh:
To suggest something more concrete... would you predict that if an X-ist wanted to pass a Y-ist's ITT, they would have more success if the two of them sat down to circle beforehand? Relative to doing nothing, and/or relative to other possible interventions like discussing X vs Y? For values of X and Y like Democrat/Republican, yay-SJ/boo-SJ, cat person/dog person, MIRI's approach to AI/Paul Christiano's approach? It seems to me that (roughly speaking) if circling was more successful than other interventions, or successful on a wider range of topics, that would validate its utility. Said, do you agree?
Matt Goldenberg:
Yes, although I expect the utility of circling over other methods to be dependent on the degree to which the ITT is based on intuitions.
Ben Pace:
I always think of 'felt sense' as, not just pre-verbal intuitions, but intuitions associated with physical sensations, be they in my head, shoulders, stomach, etc.
Matt Goldenberg:
I think that Gendlin thinks all pre-verbal intuitions are represented with physical sensations. I don't agree with him but still use the felt-sense language in these parts because rationalists seem to know what I'm talking about.
Adam Scholl:
Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.
Matt Goldenberg:
Also in particular - felt sense refers to the qualia related to intuitions, rather than the intuitions themselves.
Adam Scholl:
(Unsure, but I'm suspicious that the distinction between these two things might not be clear).
Matt Goldenberg:
Yes. I think there's a distinction: "My intuition is that Design A is better than Design B" can refer to the semantic content of the intuition, i.e. how the intuition "caches out" in terms of decisions. This contrasts with the felt sense, which always seems to refer to what the intuition is like "from the inside", for example a sense of unease when looking at Design A, and rightness when looking at Design B. I feel like the word "intuition" can refer to both the latter and the former, whereas when I say "felt sense" it always refers to the latter.

"For example, we spent a bunch of time circling for a while"

Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.

CFAR does spend substantially less time circling now than it did a couple years ago, yeah. I think this is partly because Pete (who spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants) left, and partly I think because many staff felt like their marginal skill returns from circling practice were decreasing, so they started focusing more on other things.

Whether CFAR staff (qua CFAR staff, as above) will help educate people who later themselves produce explicit knowledge in the manner valued by Gwern, Wei Dai, or Scott Alexander, and who wouldn’t have produced (as much of) that knowledge otherwise.

This seems like a good moment to publicly note that I probably would not have started writing my multi-agent sequence without having a) participated in CFAR's mentorship training and b) had conversations with/about Val and his posts.

With regard to whether our staff has read the sequences: five have, and have been deeply shaped by them; two have read about a third, and two have read little. I do think it’s important that our staff read them, and we decided to run this experiment with sabbatical months next year in part to ensure our staff had time to do this over the coming year.

I honestly think, in retrospect, that the linchpin of early CFAR's standard of good shared epistemics was probably Critch.