Figuring out "what would actually change your mind?" is among the more important rationality skills.
Being able to change your mind at all is a huge project. But being able to do it quickly is much more useful than merely being able to do it at all, because then you can interleave the question "what would change my mind?" inside other, more complex planning or thinking.
I find this much easier to learn "for real" when I'm thinking specifically about what would change my mind about my decisionmaking. Sure, I could change my belief about abstract philosophy, or physics, or politics... but most of that doesn't actually impact my behavior. There's no way for reality to "push back" if I'm wrong.
By contrast, each day, I make a lot of decisions. Some of those decisions are expensive – an hour or more of effort. If I'm spending an hour on something, it's often worth spending a minute or two asking "is this the best way to spend the hour?". I usually have at least one opportunity to practice finding notable cruxes.
Previously: "Fluent, Cruxy Predictions" and "Double Crux"
While this post stands alone, I wrote it as a followup to Fluent, Cruxy Predictions, where I argued it's valuable to be not merely "capable" but "fluent" in:
Figuring out what would actually change your decisions ("finding cruxes")
Operationalizing it as an observable bet
Making a Fatebook prediction about it, so that whatever you decide, you can become more calibrated about your decisionmaking over time.
Each step there has at least some friction to overcome.
But most of the nearterm value comes from #1, and vague hints of #2. The extra effort to turn #2 into something you can grade on Fatebook only pays off longterm.
So, I've updated that you should probably focus on Finding Cruxes, and handwavily operationalizing them just enough to help you think in-the-moment, before worrying too much about integrating Fatebook into your life.[1]
Finding Cruxes
I sadly don’t have a unified theory of finding cruxes – for any given set of decisions, the way I compare them is kind of idiosyncratic to that situation. But there is a knack to it.
Finding a crux isn’t about finding “a principled legible reason that looks good on paper.” Finding a crux is about finding something you could learn, which would actually change your decisionmaking.
It should make sense when you have more time to think about it and articulate it. But much of the time, you don't have time to figure that out: you have a lot of implicit information in your intuitions and background beliefs. It would be too clunky to make all that legible every time you make a decision. You need to learn to understand your gut intuitions.
The two broad skills that go into cruxfinding in my experience are:
Getting better at asking yourself questions that surface cruxes, and
Knowing the "feel" of what it's like for something to be cruxy.
Helping Reality Punch You In the Face
If you're looking for "what would change my decisionmaking", one of the clearest signals you can get is imagining a possible outcome that makes you go "oh, fuck", that feels like a gut-punch. Something that would make you go "woah, I really should have done something different."
Startups vs Nonprofits
I got the concept of "reality punch you in the face" from my colleague Jacob. He had recently joined Lightcone, but previously had founded a company selling covid research to other countries to help them set policy.
At Lightcone, we're trying to solve longterm confusing problems using longterm confusing methods and it's unclear if anything works and what our feedback loops are supposed to be.
At Jacob's covid research company, every day he would wake up and feel punched in the face. He'd find out a contractor hadn't gotten something done in time. Or that the agencies he was trying to sell to suddenly pulled out. If he didn't build a good product fast enough to sell it in a fast-changing-marketplace, his business would die. Plans had clear stakes.
His story put the fear of god in me. I was struck by how little I knew about whether anything Lightcone did was useful. Does building new LessWrong features help? Does recruiting more AI safety researchers? Does holding bespoke events at Lighthaven with particular vibes help?
I dunno man. I have no idea, really. I had a vision of spending years working on various projects that seemed like they'd maybe help, and then one day simply not waking up, because the AI had killed everyone and nothing I did had mattered.
It seemed like most important projects in the world were kind of like this. And the naive solution of "get yourself some obvious feedback loops" was a trap too. The obvious things like "get more AI safety researchers" indeed have turned out, in my opinion, to have subtle problems.
This gave me a goal: Figure out how to help reality punch you in the face as soon as possible when you're working on longterm, confusing projects.
Life Philosophies
Another example: once upon a time 10 years ago, I was arguing with @habryka about how important it was to be empathetic. I made a few different arguments about why empathy was important. "I think it's not just nicer, it'll make you more productive, you'll understand your employees better and they'll feel listened to, etc."
Habryka said "so, you're predicting that Elon Musk would become better at his job if he invested in becoming more empathetic?". And I felt a sudden yawning maw in front of me, like "oh shit, my nice abstract worldview about how being empathetic was good is suddenly falsifiable." I didn't know the answer, but I could visualize my estimate of how good Elon currently was at empathy (not very), and then imagine him putting work into changing that. And then imagine what would happen when he went into work the next day.
(This was back in the day when more people's first association with Elon was "ambitious, autistic workaholic who made the rockets fly.")
I might still give it >50% odds that learning empathy would be net positive for him, but I wouldn't give it 9-to-1 odds. And I wouldn't bet very heavily that this would be a better use of his time than most of the other alternative skills he could develop.
Toy Problems
Gut-punchy cruxes can work in small toy examples:
When I go to practice thinking on some real world domain, I might spend a while thinking about all sorts of clever-feeling strategies and never get clear confirmation about what helped. But if I'm making a plan in a "Thinking Physics" or "Baba is You" puzzle, where I'm trying to get the answer right on the first try, reality gives me an unambiguous answer to whether my strategies helped or not. The first couple times I did such an exercise, I spent a ton of time doing all sorts of clever-feeling thinking and then got the brutal answer "none of that mattered, actually."
Later, I learned the skill of asking during such an exercise "is this strategy actually going to pay off?", and sometimes realize "oh, this can't possibly work because there's a constraint to the puzzle that my current plan doesn't address at all."
The skill of cruxfinding is finding the equivalent thing for murkier, long-time-horizon plans.
Step by Step
Step 1: Get alternatives
You first need to have some actual alternative plans to whatever you were planning to do anyway. This is a whole other complex skill & blogpost.
But, a prompt for now is:
"Assume you looked back and found out your current decision definitely wasn't going to work out. What would you do instead?" The alternatives don't have to make sense or be justifiable; they just have to be true facts about you, things you'd actually go do.
(I try to aim for having at least 3 alternative plans, one pretty different from the others, so that I'm not locked into the initial frame. If I'm currently rabbitholing on a particular project at work, two alternative plans might be "use some other strategy to accomplish the project" and "switch to some other project, or go talk to my boss about what I should actually be doing, or take a break.")
Step 2: Ask yourself random-ass prompts.
Okay, how do you get yourself face-punched as fast as possible?
I don't have an elegant way to search for such cruxes; they are very context-dependent.
My solution so far is to ask myself some random-ass questions and feel around for ones that either feel juicy/informative, or that I feel scared to think about. I don't yet have a shortcut beyond "try a bunch of them out, and develop a feel for what questions are relevant in the moment."
Some example gut-punchy questions are:
if I knew for a fact nobody would use this project I'm working on, would I still do it?
if I knew this was going to turn out to be 10x as expensive as I'm vaguely imagining, would it be worth it? What about 100x?
what's the biggest payoff my current plan can possibly get?
Some juicy/informative types of questions tend to be:
what would have to go surprisingly/magically well, in order for this alternative plan to actually be better than my current one? (is there a less-magical version of it I can do?)
what's the biggest payoff the alternative plans could possibly get?
what are some facts about the world that might totally change my approach to this problem, if they turned out to be true (or false)?
if there was information out there somewhere that could change my mind, where would I be most likely to find it? (talking to people? reading up on similar projects?)
Some less intense but often useful prompts are:
are either of these plans more time sensitive?
do either of these plans have noticeably more downside risk?
which plan has the shortest feedback cycle?
which is more of a bottleneck for future plans?
which would you pay more money to have done right now?
Step 3: Try to find scalar units.
Last week I was building some infrastructure for a project, to deal with something annoying. "Will it succeed at making the thing less annoying?" is both vague and absolute in a way that's hard to reason about. How often would I be annoyed? How much of a timesink is that annoyance?
If I drove the annoyance cost down to 0, would that enable me to do some tasks more often that I'd otherwise avoid because they're too ughy? How many times would I actually do that previously-annoying task? What's the highest the answer could be? What's the lowest?
...
Sometimes I'm doing something to make me happy, or make someone else happy. How happy will I/they be? How much would they pay for it? How many days of work would they have been willing to spend getting it?
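As a toy illustration (not from the original post; every number here is invented), the "scalar units" move is basically a back-of-envelope calculation. The function name and parameters are mine, chosen for the "annoying infrastructure" example:

```python
# Toy back-of-envelope: turn "will this make the thing less annoying?"
# into scalar units you can actually compare. All numbers are made up.

def payoff_in_hours(annoyances_per_week: float,
                    minutes_per_annoyance: float,
                    weeks_of_use: float,
                    reduction: float) -> float:
    """Hours saved if the fix removes `reduction` (0..1) of the annoyance."""
    saved_minutes = (annoyances_per_week * minutes_per_annoyance
                     * weeks_of_use * reduction)
    return saved_minutes / 60

build_cost_hours = 8  # guess: roughly a day of infrastructure work
payoff = payoff_in_hours(annoyances_per_week=5,
                         minutes_per_annoyance=10,
                         weeks_of_use=26,
                         reduction=0.8)
print(f"payoff ~{payoff:.0f}h vs cost {build_cost_hours}h")  # → payoff ~17h vs cost 8h
```

The point isn't the arithmetic; it's that "less annoying" becomes "roughly N hours saved against M hours of build cost", which is something you can actually compare against alternative plans.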
Step 4: Interpolate between extremes and find the threshold of cruxiness
Say I think Plan A will take a month and "roughly work." If I knew Plan A would take a year, would I still do it? Probably not. What about 6 months? What about 3 months? (at 3 months, maybe it starts to depend on how good Plan B is)
...
If I knew that literally 0 people were going to use this project, would it be worth it? If I knew 20,000 people were going to use it, maybe it'd totally be worth it. 100 people? Maybe it depends on who they are. I work on LessWrong.com. It's sometimes worth building something for 10 powerusers, but they'd better be really good powerusers.
...
I'd clearly pay more than $10 for the result of Plan A. Would I pay $10,000? No? How about $1,000? How about $100?
(You don't have to exactly nail down the cruxy answer – you can find a range. "Well, it's clearly worth more than $50, and clearly less than $500. Between there idk.")
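The interpolation above is essentially a bisection search over your own gut answers. A toy sketch of that idea (the function names and dollar figures are invented for illustration):

```python
# Toy sketch: narrow the cruxy price range by bisecting over gut answers.
# `gut_says_worth_it` stands in for actually asking yourself "would I pay $x?"

def find_threshold(gut_says_worth_it, lo=10.0, hi=10_000.0, tolerance=50.0):
    """Shrink the [lo, hi] range until it's tight enough to act on."""
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if gut_says_worth_it(mid):
            lo = mid  # still clearly worth it at this price
        else:
            hi = mid  # clearly not worth it at this price
    return lo, hi     # "worth more than lo, less than hi"

# Pretend my true (hidden) valuation is $300:
lo, hi = find_threshold(lambda price: price < 300)
print(f"worth more than ${lo:.0f}, less than ${hi:.0f}")
```

In real life the "oracle" is you imagining the price and checking your gut, and you can stop as soon as the range is tight enough to pick between your plans; you never need the exact threshold.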
...
Step 5: Check your intuitive predictions
Having now thought about it... what numbers do you actually expect to get in the end here? (Using your intuitions is another whole-other-post).
...
Step 6: Gain more information
You don't have to go with whatever information you currently know. You can... just look stuff up. Or talk to more people. Or ask an LLM.
...
Step 7: Make a call, but keep an eye out
At the end of the day, you need to go decide what to do, even though you have imperfect information. But, it's good to keep an eye out for early warning signs that you were wrong.
The Fluent, Cruxy Predictions approach formalizes this, by making explicit predictions and following up on them later. But if you don't have time to make explicit predictions, or you're confused about how to operationalize them, you can still get a lot of mileage out of thinking through "what would change my decision here?" and keeping a vague lookout, either for information from the external world or for vague feelings of unease as you carry out your plans.
Thinking through all of this helps grease the gears of your mind, so you more readily notice that maybe reality is getting ready to punch you in the face in a slow, subtle way. See if you can look directly at that, get facepunched, say "whoops", and re-evaluate.
...
Appendix: "Doublecrux" is not step 1.
This post was motivated by realizing "Fluent Cruxy Predictions" was bringing in 2.5 complex skills at once. You know what else brought in multiple skills at once? The original Double Crux technique, where the concept of cruxes was introduced to LessWrong.
Double Crux is a technique for you and a conversation partner to work together to find each of your cruxes and ideally find a single crux you both agree on. But multiplayer cruxfinding is vastly more complicated, and you're constantly veering into "have a regular debate, or argument" because that's a more natural frame for most people.
It's much more obvious what cruxes are for, if you're using them to change your own plans without worrying about justifying yourself to anyone else.
The "yourself" and the "plans" part are both important. It's a bit confusing to know what would change your mind about "do I believe in God?" or "are my political beliefs correct?" because those don't really push back on your day to day life. It's much more obvious for beliefs that directly plug into your actions, and you can see that a belief was not just mistaken, but that it mattered.
Often people find Fatebook fun for a couple weeks, and I think it's worth periodically spending a sprint heavily focused on it, but don't stress too much about losing the Fatebook habit. You can come back to it periodically when it feels fun again. DO stress about losing the "identify your cruxes" habit.