by [anonymous]

Levers, Emotions, and Lazy Evaluators: Post-CFAR 2

[This is a trio of topics following from the first post, all of which use the idea of ontologies, in the mental sense, as a jumping-off point. I examine why naming concepts can be helpful, what it means to listen to your emotions, and how humans act as lazy evaluators. I think this post may also be of interest to people here. Posts 3 and 4 are less so, so I’ll probably skip those unless someone expresses interest. Lastly, the views expressed below are my own and don’t reflect CFAR’s in any way.]


Levers:

When I was at the CFAR workshop, someone mentioned that something like 90% of the curriculum was just making up fancy new names for things they already sort of did. This got some laughs, but I think it’s worth exploring why even just naming things can be powerful.


Our minds do lots of things: they carry many thoughts, and we can recall many memories. Some of these phenomena are more helpful for our goals than others, and those are the ones we may want to name.


When we name a phenomenon, like Focusing, we’re essentially drawing a boundary around the thing and directing attention to it. We’ve made it conceptually discrete. This, in turn, allows us to more concretely identify which things among the sea of our mental activity correspond to Focusing.


Focusing can then become a concept that floats in our understanding of the things our minds can do. We’ve taken a mental action and packaged it into a “thing”. This can be especially helpful if we’ve identified a phenomenon that consists of several steps which usually aren’t found together.


By drawing a boundary around certain patterns and giving them a name, we can hopefully help others recognize them, and perhaps do the same for other mental motions, which seems to be one more way we find new rationality techniques.


This means that we’ve created a new action that is explicitly available in our ontology. This notion of “actions I can take” is what I think forms the idea of levers in our mind. When CFAR teaches a rationality technique, the technique itself seems to be pointing at a sequence of things that happen in our brain. Last post, I mentioned that I think CFAR techniques upgrade people’s mindsets by changing their sense of what is possible.


I think that levers are a core part of this because they give us the feeling of, “Oh wow! That thing I sometimes do has a name! Now I can refer to it and think about it in a much nicer way. I can call it ‘focusing’, rather than ‘that thing I sometimes do when I try to figure out why I’m feeling sad that involves looking into myself’.”


For example, once you understand that a large part of habit formation is simply "if-then" loops (a la TAPs, or Trigger Action Plans), you’ve not only understood what it means to learn something as a habit, but you’ve also internalized the very concept of habituation itself. You’ve gone one meta-level up, and you can now reason about this abstract mental process in a far more explicit way.
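
To make the “if-then” framing concrete, here’s a toy sketch in Python. The triggers and actions are invented for illustration; this isn’t CFAR material, just one way to picture a TAP as a lookup from trigger to pre-committed action.

```python
# A toy sketch of a Trigger Action Plan (TAP) as an "if-then" loop.
# The triggers and actions below are made up for illustration.

taps = {
    "walk through front door": "put keys on the hook",
    "sit down at desk": "write down today's top priority",
    "notice urge to check phone": "take one deep breath first",
}

def run_tap(observed_trigger):
    """If a known trigger fires, return the pre-committed action (else None)."""
    return taps.get(observed_trigger)

print(run_tap("sit down at desk"))  # -> "write down today's top priority"
```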


Names have power in the same way that abstraction barriers have power in a programming language: they change how you think about the phenomenon itself, and this, in turn, can affect your behavior.
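
As a rough illustration of what I mean by an abstraction barrier (my own example, not anything from the workshop): once the operations have names, you reason about the named interface rather than about its internals.

```python
# A minimal illustration of an abstraction barrier: callers think in terms of
# "push" and "pop" rather than in terms of the list hiding underneath.

class Stack:
    def __init__(self):
        self._items = []  # representation hidden behind the barrier

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push("Focusing")
s.push("TAPs")
print(s.pop())  # "TAPs" -- we reason about stack behavior, not list internals
```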

 

Emotions:

CFAR teaches a class called “Understanding Shoulds”, which is about seeing your “shoulds”, the parts of yourself that feel like obligations, as data about things you might care about. This is a little different from Nate Soares’s Replacing Guilt series, which tries to move past guilt-based motivation.


In further conversations with staff, I encountered the even deeper view that all emotions should be treated as information.


The basic premise seems to be that different parts of us may need different things to function, and that our conscious understanding of our own needs is sometimes limited. Thus, our implicit emotions (and other S1 processes) can serve as a way of informing ourselves about what we’re missing.


In this way, all emotions seem like channels through which information can be passed from the implicit parts of you to the forefront of “meta-you”. This idea of “emotions as a data trove” is yet another ontology that produces different rationality techniques, because it operates, once again, on a mental model built out of a different type of abstraction.


Many of the skills based on this ontology focus on communication between different pieces of the self.


I’m very sympathetic to this viewpoint, as it forms the basis of the Internal Double Crux (IDC) technique, one of my favorite CFAR skills. In short, IDC assumes that akrasia-esque problems are caused by a disagreement between different parts of you, some of which might live in the implicit parts of your brain.


By “disagreement”, I mean that some part of you endorses an action for some well-meaning reasons, but some other part of you is against the action and also has justifications. To resolve the problem, IDC has us “dialogue” between the conflicting parts of ourselves, treating both sides as valid. If done right, without “rigging” the dialogue to bias one side, IDC can be a powerful way to source internal motivation for our tasks.


While I do communicate with my emotions to some extent, I haven’t fully integrated them as internal advisors in the IFS sense. I’m not ready to adopt a worldview that might hand over executive control to all the parts of me. Meta-me still deems some of my implicit desires “foolish”, like the part of me that craves video games, for example. In order to avoid slippery slopes, I keep blanket precommitments about certain things in life.


For the time being, I’m fine sticking with these precommitments. The modern world is filled with superstimuli, from milkshakes to insight porn (and the normal kind) to mobile games, that can hijack our well-meaning reward systems.


Lastly, I believe that without certain mental prerequisites, some ontologies can be actively harmful. Nate’s Replacing Guilt series can leave people without an alternative source of motivation for their actions; guilt can, after all, be a useful motivator. Similarly, nihilism is another example of an ontology that can be crippling unless paired with ideas like humanism.

 

Lazy Evaluators:

In In Defense of the Obvious, I gave a practical argument for why obvious advice is very good. I brought this point up several times during the workshop, and people seemed to like it.


While that essay focused on listening to obvious advice, there appears to be a similar effect where merely asking someone, “Did you do all the obvious things?” will often uncover helpful actions they have yet to take.

 

My current hypothesis for this (apart from “humans are programs that wrote themselves on computers made of meat”, which is a great workshop quote) is that people tend to be lazy evaluators. In programming, lazy evaluation is a strategy where expressions aren’t evaluated until their values are actually needed.
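
For concreteness, here’s a minimal Python illustration of lazy evaluation (my example, not something from the workshop): no work happens until a value is actually requested.

```python
# A minimal illustration of lazy evaluation: the generator defers all work
# until something actually asks for a value.

def candidate_plans():
    for i in range(1_000_000):
        print(f"evaluating plan {i}")  # side effect shows when work happens
        yield i * i

plans = candidate_plans()  # nothing has been evaluated yet
first = next(plans)        # only now is the first value computed
print(first)               # 0
```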


It seems like something similar happens in people’s heads, where we simply don’t ask ourselves questions like “What are multiple ways I could accomplish this?” or “Do I actually want to do this thing?” until we need to…except that most of the time, we never need to. Life putters on, whether or not we’re winning at it.


I think this is part of what makes “pair debugging”, a CFAR activity where a group of people tries to help one person with their “bugs”, effective. When someone else takes an outside view and asks us these questions, it may be the first time we ever face those questions ourselves.


Therefore, it looks like a helpful skill is to constantly ask ourselves questions and cultivate a sense of curiosity about how things are. Anna Salamon refers to this as the skill of “boggling”. I think boggling can help both with counteracting lazy evaluation and with actually doing obvious actions.


Looking at why obvious advice is obvious, asking "What the heck does ‘obvious’ even mean?", can help break through the dismissive veneer our brain puts on obvious information.


EX: “If I want to learn more about coding, it probably makes sense to ask some coder friends what good resources are.”


“Nah, that’s so obvious; I should instead just stick to this abstruse book that basically no one’s heard of—wait, I just rejected something that felt obvious.”


“Huh…I wonder why that thought felt obvious…what does it even mean for something to be dubbed ‘obvious’?”


“Well…obvious thoughts seem to have a generally ‘self-evident’ tag on them. If they aren’t outright tautological or circularly defined, then there’s a sense in which the obvious things seem to be the shortest paths to the goal. Like, I could fold my clothes or I could build a Rube Goldberg machine to fold my clothes. But the first option seems so much more ‘obvious’…”


“Aside from that, there also seems to be a sense in which, if I search my brain for ‘obvious’ things, I’m using a ‘faster’ mode of thinking (a la System 1). And aside from favoring simpler solutions, this generator also seems to be influenced by social norms (what do people ‘typically’ do?). My ‘obvious action generator’ also seems to be built off my understanding of the world; like, I’m thinking about things in terms of causal chains that actually exist in the world. As in, when I’m thinking about ‘obvious’ ways to get a job, for instance, I’m thinking about actions I could take in the real world that might plausibly actually get me there…”


“Whoa…that means that obvious advice is so much more than some sort of self-evident tag. There’s a huge amount of information being compressed when I look at it from the surface…‘Obvious’ really means something like ‘that which my brain quickly dismisses because it is simple, complies with social norms, and/or runs off my internal model of how the universe works’.”


The goal is to reduce the sort of “acclimation” that happens with obvious advice by peering deeper into it. Ideally, if you’re boggling at your own actions, you can force yourself to evaluate earlier. Otherwise, it can hopefully at least make obvious advice more appealing.


I’ll end with a quote of mine from the workshop:


“You still yet fail to grasp the weight of the Obvious.”


Comments:

There is a problem where I say "Your hypothesis is backed by the evidence," when your entirely verbal theory is probably amenable to many interpretations and it's not clear how many virtue points you should get. But, I wanted to share some things from the literature that support your points about using feelings as information and avoiding miserliness.

First, there is something that's actually just called 'feelings-as-information theory', and has to do with how we, surprise, use feelings as sources of information. 'Feelings' is meant to be a more general term than 'emotions.' Some examples of feelings that happen to be classified as non-emotion feelings in this model are cognitive feelings, like surprise, or ease-of-processing/fluency experiences; moods, which are longer-term than emotions and usually involve no causal attribution; and bodily sensations, like contraction of the zygomaticus major muscles. In particular, processing fluency is used intuitively and ubiquitously as a source of information, and that's the hot topic in that small part of cognitive science right now. I have an entire book on that one feeling. I did write about this a little bit on LW, like in Availability Heuristic Considered Ambiguous, which argues that Kahneman and Tversky's availability heuristic can be fruitfully interpreted as a statement about the use of retrieval fluency as a source of information; and Attempts to Debias Hindsight Backfire!, which is about experiments that manipulate fluency experiences to affect people's retroactive likelihood judgments. The idea of 'feelings as information' looks central to the Art.

There is also a small literature on hypothesis generation. See the section 'Hypothesis Generation and Hypothesis Evaluation' of this paper for a good review of everything we know about hypothesis generation. Hardly inspiring, I know. The evidence indicates that humans generate relatively few hypotheses, or, as we might also put it, humans have impoverished hypothesis sets. Also in this paper, I saw studies that compare hypothesis generation between individuals and groups of various sizes. You're right that groups typically generate more hypotheses than individuals. They also tried comparing 'natural' and 'synthetic' groups: natural groups are what you think; the hypothesis sets of synthetic groups are formed from the union of many individual, non-group hypothesis sets. It turns out that synthetic groups do a little better. Social interaction somehow reduces the number of alternatives that a group considers relative to what the sum of their considerations would be if they were not a group.

Also, about your planning fallacy primer, I think the memory bias account has a lot more going for it than a random individual might infer from the brevity of its discussion.

[anonymous]:

Hey Gram,

Thanks for the additional information!

I am assuming the first point is about this post and the second two are about the planning primer?

The feelings-as-information literature is new to me, and most of what I wrote here is from conversations w/ folks at CFAR. (Who, by the way, would probably be interested in seeing those links as well.)

I'll freely admit that the decision making part in groups was the weakest part of my planning primer. I'm not very sure on the data, so your additional info on improved group hypothesis generation is pretty cool.

There are definitely several papers on memory bias affecting decisions, although I'm unsure if we're talking about the same thing here. What I want to say is something like "improperly recalling how long things took in the past is a problem that can bias predictions we make", and this phenomenon has been studied several times.

But there is also a separate thing where "in observed studies of people planning, very few of them seem to even use their memories, in the sense of recalling past information, to create a reference class and use it to help them with their estimates for their plans", which might also be what you're referring to.

I am assuming the first point is about this post and the second two are about the planning primer?

The first two are about this article and the third is about the planning fallacy primer. I mentioned hypothesis generation because you talked about 'pair debugging' and asking people to state the obvious solutions to a problem as ways to increase the number of hypotheses that are generated, and it pattern matched to what I'd read about hypothesis generation.

There are definitely several papers on memory bias affecting decisions, although I'm unsure if we're talking about the same thing here. What I want to say is something like "improperly recalling how long things took in the past is a problem that can bias predictions we make", and this phenomenon has been studied several times.

I'm definitely talking about this as opposed to the other thing. MINERVA-DM is a good example of this class of hypothesis in the realm of likelihood judgment. Hilbert (2012) is an information-theoretic approach to memory bias in likelihood judgment.

I'm just saying that it looks like there's a lot of fruit to be picked in memory theory and not many people are talking about it.

[anonymous]:

Okay, gotcha. Thanks for the clarification on the points.

I admit I don't quite understand what MINERVA-DM is...I glanced at the paper briefly and it appears to be a...theoretical framework for making decisions which is shown to exhibit similar biases to human thought? (With cells and rows and ones?)

I'm definitely not strong in this domain; any chance you could summarize?

I admit I don't quite understand what MINERVA-DM is...I glanced at the paper briefly and it appears to be a...theoretical framework for making decisions which is shown to exhibit similar biases to human thought? (With cells and rows and ones?)

I can't describe it too much better than that. The framework is meant to be descriptive as opposed to normative.

A complete description of MINERVA-DM would involve some simple math, but I can try to describe it in words. The rows of numbers you saw are vectors. We take a vector that represents an observation, called a probe, along with all vectors in episodic memory, which are called traces, and by evaluating the similarity of the probe to each trace and averaging these similarities, we obtain a number that represents a global familiarity signal. By assuming that people use this familiarity signal as the basis of their likelihood judgments, we can simulate some of the results found in the field of likelihood judgment.
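
To make that concrete, here is a rough numerical sketch of the computation as I understand it. The +1/-1/0 feature coding and the cubing of similarities come from my reading of MINERVA 2 / MINERVA-DM rather than from anything stated above, so treat the details as approximate.

```python
import numpy as np

# A rough sketch of the probe/trace comparison described above. Feature coding
# (+1/-1/0) and the cubing of similarities follow my reading of MINERVA-DM;
# treat the details as approximate.

def similarity(probe, trace):
    """Match score, normalized by the number of features present in either vector."""
    relevant = (probe != 0) | (trace != 0)
    if not relevant.any():
        return 0.0
    return float(np.dot(probe, trace)) / relevant.sum()

def echo_intensity(probe, traces):
    """Average cubed similarity across all traces: the global familiarity signal."""
    return sum(similarity(probe, t) ** 3 for t in traces) / len(traces)

probe = np.array([1, -1, 0, 1])                    # the current observation
memory = [np.array([1, -1, 1, 1]),                 # episodic traces
          np.array([1, 0, -1, 1])]
print(echo_intensity(probe, memory))  # higher = more familiar = judged more likely
```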

I suspect that with a bit of work, one could even use MINERVA-DM to simulate retrospective and prospective judgments of task duration, and thus, planning fallacy.

[anonymous]:

Huh, okay, cool. Thanks for the additional info!

Thingifying is like building the cell wall that lets you call a thing a thing. But the wall needs receptor sites if you ever want it to interact with other things. Many, many self-help techniques do a bunch of things sort of well, so they never present themselves as the killer product for a particular problem. A vague sense that your life would be better if you did some nebulous amount 'more' of some technique doesn't issue a strong buy reaction from S1.

I find that the sorts of things that do fire are often a couple of steps removed from specific techniques, and are often more like queries that could result in me going and using a technique. For example: 'VoI/ceiling of value between choices?' fires all the time and activates napkin math, but this doesn't feel from the inside like I am activating the napkin-math technique. It feels more like the receptors are predicated on perception: I had to notice that I was doing a search for candidate choices.

I don't think of this as boggling, closer to the skill that is practiced in frame-by-frame analysis. Activating that particular skill feels way less valuable than just having a slightly higher baseline affordance for noticing frames.

[anonymous]:

Huh, okay.

I will say that I think my typical everyday rationality looks much more like the thing you mentioned; if I've invested time but the thing isn't panning out (as an example), then something along the lines of "hey, sunk costs are a thing, let's get out of here" will fire.

But I do think that there's a time and place for the sort of more explicit reasoning that boggling entails.

(Unsure if we're talking about the same thing. Feel free to re-orient me if I've gone off on a tangent)

What I mean is that the skills 'work' when practicing them leads to them bleeding out into the world. And that this mostly looks less like 'aha, an opportunity to use skill X' and more like you just naturally think a bit more in terms of how skill X views the world than before.

E.g.: supply and demand is less an explicit thing you apply (unless the situation is complex) and more just the way you see the world once you level up the economist lens.

[anonymous]:

Ah, cool. This I think I agree with (skills in much more fluid contexts vs. needing to explicitly call on them. Maybe a passive vs. active skill comparison from RPGs?)

(lenses/ontologies/viewpoints/hats/perceptual habits)