CFAR Workshop Review: February 2017

by lifelonglearner · 11 min read · 28th Feb 2017 · 16 comments


[A somewhat extensive review of a recent CFAR workshop, with recommendations at the end for those interested in attending one.]

I recently mentored at a CFAR workshop, and this is a review of the actual experience. In broad strokes, this review will cover the physical experience (atmosphere, living, eating, etc.), classes (which ones were good, which ones weren’t), and recommendations (regrets, suggestions, ways to optimize your experience). I’m not officially affiliated with CFAR, and this review represents my own thoughts only.

A little about me: my name is Owen, and I’m here in the Bay Area. This was actually my first real workshop, but I’ve had a fair amount of exposure to CFAR materials from EuroSPARC, private conversations, and LessWrong. So do keep in mind that I’m someone who came into the workshop with a rationalist’s eye.

I’m also happy to answer any questions people might have about the workshop. (Via PM or in the comments below.)


Physical Experience:

Sleeping / Food / Living:

(This section is venue-dependent, so keep that in mind.)

Despite the hefty $3,000-plus price tag, the workshop accommodations aren’t exactly plush. You get a bed, and that’s about it. In my workshop, there were always free bathrooms, so that part wasn’t a problem.

There was always enough food at meals, and my impression was that dietary restrictions were handled well. For example, one staff member went out and bought someone lunch when one meal didn’t work. Other than that, there are ample snacks between meals, usually a mix of chips, fruit, and chocolate, plus hot tea and a surprisingly wide variety of drinks.

Atmosphere / Social:

(The participants I worked with were perhaps not representative of the general “CFAR participant”, so also take caution here.)

People generally seemed excited and engaged. Given that everyone (hopefully) attended voluntarily, this was perhaps to be expected. There’s also very little friction when it comes to joining and exiting conversations. By that, I mean it felt very easy, socially speaking, to just randomly join a conversation. Staff and participants all seemed quite approachable for chatting.

I don’t have the actual participant stats, but my impression is that a good amount of people came from quantitative (math/CS) backgrounds, so there were discussions on more technical things, too. It also seemed like a majority of people were familiar with rationality or EA prior to coming to the workshop.

There were a few people for whom the material didn’t seem to “resonate” well, but the majority of people seemed to be “with the program”.

Class Schedule:

(The schedule and classes are also in a state of flux, so bear that in mind too.)

Classes start around 9:30 am and end around 9:00 pm. In between, there are 20-minute breaks after every hour of classes. Lunch is about 90 minutes, while dinner is around 60 minutes.

Most of the actual classes were a little under 60 minutes, except for the flash classes, which were only about 20 minutes. Some classes had extended periods for practicing the techniques.

You’re put into a group of around 8 people that you attend classes with, and the groups switch every day. Several classes run in parallel and rotate, so different groups may go through them in a different order.

 

Classes Whose Content I Enjoyed:

As I was already familiar with most of the below material, this reflects more a general sense of classes which I think are useful, rather than ones which were taught exceptionally well at the workshop.

TAPs: Kaj Sotala already has a great write-up of TAPs here, and I think that they’re a helpful way of building small-scale habits. I also think the “click-whirr” mindset TAPs are built off can be a helpful way to model minds. The most helpful TAP for me is the Quick Focusing TAP I mention about a quarter down the page here.

Pair Debugging: Pair Debugging is about having someone else help you work through a problem. I think this is explored to some extent in places like psychiatry (actually, I’m unsure about this) as well as close friendships, but I like how CFAR turned this into a more explicit social norm / general thing to do. When I do this, I often notice a lot of interesting inconsistencies, like when I give someone good-sounding advice—except that I myself don’t follow it.  

The Strategic Level: The Strategic Level is where you, after having made a mistake, ask yourself, “What sort of general principles would I have had to notice in order to not make a mistake of this class in the future?” This is opposed to merely saying “Well, that mistake was bad” (first level thinking) or “I won’t make that mistake again” (second level thinking). There were also some ideas about how the CFAR techniques can recurse upon themselves in interesting ways, like how you can use Murphyjitsu (middle of the page) on your ability to use Murphyjitsu. This was a flash class, and I would have liked it if we could have spent more time on these ideas.

Tutoring Wheel: Less a class and more a pedagogical activity, Tutoring Wheel was where everyone picked a specific rationality class to teach and then rotated, teaching others and being taught. I thought this was a really strong way to help people understand the techniques during the workshop.

Focusing / Internal Double Crux / Mundanification: All three of these classes address different things, but in my mind I thought they were similar in the sense of looking into yourself. Focusing is Gendlin’s self-directed therapy technique, where people try to look into themselves to get a “felt shift”. Internal Double Crux is about resolving internal disagreements, often between S1 and S2 (but not necessarily). Mundanification is about facing the truth, even when you flinch from it, via Litany of Gendlin-type things. This general class of techniques that deals with resolving internal feelings of “ugh” I find to be incredibly helpful, and may very well be the highest value thing I got out of the class curriculum.

 

Classes Whose Teaching/Content I Did Not Enjoy:

These were classes that I felt were not useful and/or not explained well. This differs from the above, because I let the actual teaching part color my opinions.

Taste / Shaping: I thought an earlier iteration of this class was clearer (when it was called Inner Dashboard). Here, I wasn’t exactly sure what the practical purpose of the class was, let alone what general thing it was pointing at. To the best of my knowledge, Taste is about how we have subtle “yuck” and “yum” senses towards things, and how there can be a way to reframe negative affects in a more positive light, like how “difficult” and “challenging” can be two sides of the same coin. Shaping is about…something. I’m really unclear about this one.

Pedagogical Content Knowledge (PCK): PCK is, I think, about how the process of teaching a skill differs from the process of learning it. And you need a good understanding of how a beginner is learning something, what that experience feels like, in order to teach it well. I get that part, but this class seemed removed from the other classes, and the activity we did (asking other people how they did math in their head) didn’t seem useful.

Flash Class Structure: I didn’t like the 20-minute “flash classes”. I felt they were too quick to really give people ideas that stuck in their heads. In general, I’m in favor of fewer classes and extended time to really practice the techniques, and I think having little to no flash classes would be good.

 

Suggestions for Future Classes: 

This is my personal opinion only. CFAR has iterated their classes over lots of workshops, so it’s safe to assume that they have reasons for choosing what they teach. Nevertheless, I’m going to be bold and suggest some improvements which I think could make things better.

Opening Session: CFAR starts off every workshop with a class called Opening Session that tries to get everyone in the right mindset for learning, with a few core principles. Because of limited time, they can’t include everything, but there were a few lessons I thought might have helped as the participants went forward:

In Defense of the Obvious: There’s a sense in which a lot of what CFAR says isn’t revolutionary, but it’s still useful. I don’t blame them; much of what they do is draw boundaries around fairly universal mental notions and draw attention to them. I think they could spend more time highlighting how obvious advice can still be practical.

Mental Habits are Procedural: Rationality techniques feel like things you know, but they’re really about things you do. Focusing on this distinction could be very useful to make sure people see that actually practicing the skills is very important.

Record / Take Notes: I find it really hard to remember concrete takeaways if I don’t write them down. During the workshop, it seemed like maybe only about half of the people were taking notes. In general, I think it’s at least good to remind people to journal their insights at the end of the day, if they’re not taking notes at every class.

Turbocharging + Overlearning: Turbocharging is a theory of learning put forth by Valentine Smith which, briefly speaking, says that you get better at what you practice. Similarly, Overlearning is about using a skill excessively over a short period to get it ingrained. The two skills seem to be based on similar ideas, but their connection to one another wasn’t emphasized. Also, they were taught several days apart; I think they could be taught closer together.

General Increased Cohesion: Similarly, I think additional discussion of how these techniques relate to one another, be it through concept maps or some theorizing, could give people a more unified rationality toolkit.

 

Mental Updates / Concrete Takeaways:

This ended up being really long. If you’re interested, see my 5-part series on the topic here.

 

Suggestions / Recommendations:

This is a series of things that I would have liked to do (looking back) at the workshop, but that I didn’t manage to do at the time. If you’re considering going, this list may prove useful to you when you go. (You may want to consider bookmarking this.)

Write Things Down: Have a good idea? Write it down. Hear something cool? Write it down. Writing things down (or typing, voice recording, etc.) is all really important so you can remember it later! Really, make sure to record your insights!

Build Scaffolding: Whenever you have an opportunity to shape your future trajectory, take it. Whether this means sending yourself emails, setting up reminders, or just taking a 30 minute chunk to really practice a certain technique, I think it’s useful to capitalize on the unique workshop environment to, not just learn new things, but also just do things you otherwise probably “wouldn’t have had the time for”.

Record Things to Remember Them: Here’s a poster I made that has a bunch of suggestions:

[Image: reminder poster, captioned “Do ALL The Things!”]

 

Don’t Be Afraid to Ask for Help: Everyone at the workshop, on some level, has self-growth as a goal. As such, it’s a really good idea to ask people for help. If you don’t understand something, feel weird for some reason, or have anything going on, don’t be afraid to make use of the people around you to the fullest (if they’re available, of course).

Conclusion:

Of course, perhaps the biggest question is “Is the workshop worth the hefty price?”

Assuming you’re coming from a tech-based position (apologies to everyone else, I’m just doing a quick ballpark with what seems to be the most common place CFAR participants seem to come from), the average hourly wage is something like $40. At ~$4,000, the workshop would need to save you about 100 hours to break even.
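As a quick sanity check on the arithmetic (the $40/hour and ~$4,000 figures are my own rough ballpark assumptions, not official numbers), the break-even calculation is just:

```python
# Rough break-even estimate: how many hours would the workshop
# need to save you to pay for itself? (Illustrative numbers only.)
hourly_wage = 40        # assumed average tech hourly wage, USD
workshop_cost = 4000    # approximate workshop price, USD

break_even_hours = workshop_cost / hourly_wage
print(break_even_hours)  # 100.0
```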

If you want rigorous quantitative data, you may want to check out CFAR’s own study of their participants. I don’t think I have a good way of quantifying these sorts of personal benefits myself, so everything below is pretty qualitative.

Things that I do think CFAR provides:

1) A unique training / learning environment for certain types of rationality skills that would probably be hard to learn elsewhere. Several of these techniques, including TAPs, Resolve Cycles, and Focusing, have become fairly ingrained in my daily life, and I believe they’ve increased my quality of life.

Learning rationality is the main point of the workshop, so the majority of the value probably comes out of learning these techniques. Also, though, CFAR gives you the space and time to start thinking about a lot of things you might have otherwise put off forever. (Granted, this can be achieved by other means, like just blocking out time every week for review, but I thought this counterfactual benefit was still probably good to mention.)

2) Connections to other like-minded people. As a Schelling point for rationality, a CFAR workshop is a place where you’ll meet people who share similar values and goals. If you’re looking to make new friends or meet others, this is another benefit. (Although it does seem costly and inefficient if that’s your main objective.)

3) Upgraded mindset: As I wrote about here, I think that learning CFAR-type rationality can really level up the way you look at your brain, which seems to have some potential flow-through effects. The post explains it better, but in short, if you have not-so-good mental models, then CFAR could be a really good choice for improving your models of how your mind works.

There are probably other things, but those are the main ones. I hope this helps inform your decision. CFAR is currently hosting a major sprint of workshops, so this would be a good time to sign up for one, if you've been considering attending.


Comments:

Mundanification is about facing the truth, even when you flinch from it, via Litany of Gendlin-type things.

Can you talk a bit more about this? I'm only familiar with the Litany of Gendlin itself.

Sure!

So there seems to be this conceptual cluster of rationality techniques that revolve around facing the truth, even when it's hard to face. This seems especially useful for those icky situations where your beliefs have some sort of incentive to not correspond to reality.

Examples:

  • You don't want to clean out your fridge because if you had to look in there, then part of you feels like it would make the rotting food at the back more 'real'. (But in reality, your awareness of the food is independent of its existence, and if you don't clean it out, it'll only get worse.)

  • You don't want to get your homework done because it's boring/painful to think about, and if you don't do it, then you don't have to think about it, which basically means it's not really there. (But in reality, this only pushes it closer to the deadline.)

  • You plan to finish your project in 30 minutes even though it took you 1 hr last time, because part of you thinks that if you write down '1 hr', it'll really take you that long. But you really need it to be done in 30 minutes, so you write that down instead. (But in reality, you need to decouple your estimates from wishes to get well-calibrated. Your prediction is largely independent of your performance.)

And on and on. These sorts of problems often comprise ugh fields, feel painful to think about, and are often sources of aversion.

To debug these sorts of problems, there are several (in my opinion) conceptual variants of harnessing epistemological rationality. These techniques often focus on trying to get to the root of the aversion and also calibrate your gut-level senses with the idea that your belief about a matter doesn't actually control reality.

Mundanification is just another one of these variants that's about being able to peek into those dark "no, I must never look in here!" corners of your mind and trying to actually state the worst-case scenario (which is often black-boxed as a Terrible Thing that is Never Opened).

Mundanification is just another one of these variants that's about being able to peek into those dark "no, I must never look in here!" corners of your mind and trying to actually state the worst-case scenario (which is often black-boxed as a Terrible Thing that is Never Opened).

How does it work specifically? I can't see the technique posted anywhere.

During the workshop, it wasn't well fleshed out (it was a short "flash class"), so I'm afraid I don't have too many details.

Here are some pieces of the thing, though, and hopefully it helps point at the general idea. The class of techniques is about:

  • 1) Being able to notice when you feel averse, scared, or pained about something in your head.

  • 2) Feeling okay with looking into these areas, unpacking them, and asking yourself, "What is it, exactly, about this situation that's causing me distress?"

  • 3) Being able to explicate worst-case scenarios: being okay with answering, in some detail, the question, "What's the worst thing that can really happen?"

Many rationalists have an icky situation where their beliefs have a particular incentive not to correspond to reality: when they consider such situations, they would prefer not to consider how much truth there is in the claim that they are better off avoiding such topics. For example, in each of the above examples, you imply that the ugh field is based completely on falsehood. In reality, however, there is a good deal of truth in it in each case:

"But in reality, your awareness of the food is independent of its existence." The badness of the food for you does partly depend on your awareness of it. There is plenty of food rotting in dumps all over the world, and this does not affect any of us. So the rotten food will indeed be worse for you in some ways if you clean the fridge.

"But in reality, this only pushes it closer to the deadline." Again, you find it boring and painful to work on your homework. If you push it very close to the deadline, but then work on it because you have to, you will minimize the time spent on it, thus minimizing your pain.

"Your prediction is largely independent of your performance." This is frequently just false; if you plan on 1 hour, you are likely to take 2 hours, while if you plan on 30 minutes again, you are likely to take 1 hour again.

I wonder if my examples may have just been bad. Do you agree with my general point about flinch-y topics being hard to debug, and Litany of Gendlin-style things being useful for doing so?

EX:

In the food example, if you don't deal with the rotting food, it'll become more unpleasant to take out later on.

The homework example may actually not be as good. But note that if you do homework early, you save future-you the anguish of thinking about how it's undone.

For the planning thing, I think I disagree with you. The planning literature has some minor studies showing that estimated time does slightly and positively affect performance (hence my use of "largely"), but I think there are far more severe consequences that can arise when your predictions are miscalibrated (e.g. making promises you can't keep, getting overloaded, etc.).

My general point is not that, all things considered, it is better in those particular cases to flinch away. I am saying that flinching has both costs and benefits, not only costs, and consequently there may be particular cases when you are better off flinching away.

Thank you for this review. Can you write something about "Resolve Cycles"? Oh! I found it on your blog, thx :)))

[This comment is no longer endorsed by its author]

The Resolve Cycle is a CFAR technique where one sets a 5 minute timer and resolves to solve the problem in the allotted time.

-- https://mindlevelup.wordpress.com/2017/02/20/resolve-post-cfar-3/

Do you know how many people who participated in the Focusing class at the CFAR workshop got Focusing well enough to feel a felt shift?

About 60%.

More specifically: At the February workshop, 65% of participants filled out the optional data collection handout at the end of the hour-long Focusing class. Of the participants who filled it out, 60% circled 6 or higher in response to the question "Did you experience a 'felt shift'?" (0 = not at all, 10 = yes, definitely).

(This is Dan from CFAR.)

Another question on the subject of Focusing: is anyone able to point to any good online resources explaining what it is / how to try it / any theoretical background it has?

On the one hand, I'm fascinated by pre-verbal, 'embodied' aspects of thinking, and this 'felt sense' idea sounds well worth exploring. On the other hand, as with anything in the self-help-adjacent area there looks to be a lot of dubious stuff and people wanting to sell you things, and if anyone has already looked into this and can save me from wading through the rubbish I'd really appreciate it.

The official website is http://www.focusing.org/index.html . It contains a basic description of the technique and references to various resources.

A 2001 article suggests that around that time there were around 80 studies done on Focusing.

Picking up the skill simply by reading the explanation on the official website might require familiarity with basic concepts that most people don't have. A few LW people found the Focusing audiobook helpful.

After having done a lot of research into teaching rationality, CFAR seems to consider it a basic building block that's worthwhile to teach.

I personally taught Focusing workshops at the last two European Community Weekends and, I think, twice at our LessWrong Dojo.

Thanks very much! Yes I wasn't really expecting to be able to pick up too much from an online explanation, but a bit of context is nice to decide whether to explore further. It sounds like the audiobook would be a good resource after that.

ChristianKl has it as a pet project, a few of us have read the book, and the original audiobook is apparently the best source. It seems to have some legitimacy about it, essentially trying to give S1 a way to communicate with S2.

There is nothing mysterious about it. You set up some conditions that should make it easier for the two systems to communicate through sensory perceptions: think of a problem, then think of a feeling of "broken" that represents that problem, like a knot. Then think of what it would take to fix that broken feeling, like untying the knot or unravelling the spaghetti. Then try to put that solution feeling into words for yourself. The actual technique is like that, except with better direction and hints than two sentences, and a meditative environment.

Thanks for the extra description, that's helpful! I might give the audiobook a go then.