I stumbled upon a Twitter thread where Eliezer describes what seems to be a cognitive algorithm of his that is equivalent to Tune Your Cognitive Strategies, and I have decided to archive / repost it here.

Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of scheduling dates and later canceling them: malcolmocean.com/2021/08/int…

Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.

SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then you are radically smarter/more able/etc than me and all the other people who do slower introspective practices.

SC: I don't know whether to be impressed or to roll to disbelieve.

EY: I mean I suspect that this actually requires something like a fast perceptual view of minds as engines and thoughts as doing work and like actually draws on my mind design knowledge, but, even so, I ask: Do you constantly look back and ask "How could I have thought that faster?"

SC: No, I've never asked that.

EY: Okay, well, every time I'm surprised by reality I look back and think "What about my model and my way of thinking could I change that would have predicted that better, without predicting a bunch of other things worse?"

EY: When somebody at a MIRI workshop comes up with a math proof, I look over it and ask if there's a way to simplify it. Usually, somebody else does beat me to inventing a proof first; but if my intuition says it was too complicated, I often am first to successfully simplify it.

EY: And every time I complete a chain of thought that took what my intuition says was a lot of time, I look back and review and ask myself "How could I have arrived at the same destination by a shorter route?"

EY: It's not impossible that you have to be Eliezer Yudkowsky for this to actually work - I am never sure about that sort of thing, and have become even less so as time goes on - but if AI timelines were longer I'd tell somebody, like, try that for 30 years and see what happens.

EY: Man, now I'm remembering when I first started doing this consciously as a kid. I called it Shortening the Way, because a rogue rabbi had recently told me that "Kwisatz Haderach" was actually a reference to a Kabbalistic concept about teleportation, so that term was on my mind.


Bonus conversation from the root of the tree that is this Twitter thread:

Eliezer Yudkowsky: Your annual reminder that you don't need to resolve your issues, you don't need to deal with your emotional baggage, you don't need to process your trauma, you don't need to confront your past, you don't need to figure yourself out, you can just go ahead and do the thing.

Benquo: By revealed preferences almost no one wants to just go ahead and do the thing, even if they expect that things would go better for them if they did. Seems reasonable to try to figure out why that's the case and how to change it, starting with oneself.

Benquo: Most of this trying will be fake or counterproductive, for the same reasons people aren't doing the sensible object-level thing, but we don't get to assume or pretend our way out of a problem, we just get to investigate and think about it and try out various promising solutions.

Given my experiences with both TYCS-like methods and parts-work methods (which is what Benquo is likely proposing one invest in, instead), I'd recommend people invest more in learning and using parts-work techniques first, before they learn and try to use TYCS-like techniques.

The way I do this with my clients is that we train cognitive tools first, then find the resistance to those habits and work on it using parts work.

Say more?

I usually explain my process these days to clients with the acronym LIFE

  1. Learn New Tools
  2. Integrate Resistance
  3. Forge an Identity
  4. Express Yourself

Learn New Tools is cognitive-emotional strategies, of which TYCS is an example. Fwiw, some of TYCS is actually deliberate practice to discover cognitive strategies (as compared to something like CFAR, which extracts and teaches them directly), but the result is the same.

The important thing is to give people a clear tool: something they know they can use in certain situations, and that works immediately to solve their problems.

But the thing is, people don't use them, because they have resistance. That's where parts work and other resistance integration tools come into play.

Even when that's done, there's still the issue that you don't automatically use the techniques. This is where Forge an Identity comes in: you use identity-change techniques to bring the way you see yourself into alignment with the way of being that the technique brings out. (This is one thing TYCS gets wrong, in my opinion: trying to directly reinforce the cognitive strategies instead of creating an identity and reinforcing the strategies as affirming that identity.)

Finally, that identity needs to propagate to every area of your life, so there are no situations where you fail to use the technique and way of being. This is just a process of looking at each area, seeing where it's not in alignment with the identity, then deliberately taking an action to bring the identity to that area.

IME all of these pieces are needed to make a life change from a technique, although it's rarely as linear as I describe it.

In addition to "How could I have thought that faster?", there's also the closely related "How could I have thought that with less information?"

It is possible to unknowingly make a mistake and later acquire new information to realize it, only to make the further meta mistake of going "well I couldn't have known that!"

Of which it is said, "what is true was already so". There's a timeless perspective from which the action just is poor, in an intemporal sense, even if subjectively it was determined to be a mistake only at a specific point in time. And from this perspective one may ask: "Why now and not earlier? How could I have noticed this with less information?" 

One can dig oneself further into a hole by citing outcome or hindsight bias, denying that there is a generalizable lesson to be drawn. But given that humans are not remotely efficient in aggregating and wielding the information they possess, and that humans are computationally limited and can come to new conclusions given more time to think, I'm suspicious of such lack of updating disguised as humility.

All that said, it is true that one may overfit to a particular example and indeed succumb to hindsight bias. What I claim is that "there is not much I could have done better" is a conclusion one may arrive at after deliberate thought, not a premise one uses to reject any changes to one's behavior.

Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.

I wouldn't mind seeing an annotated narrative or description of what that process of distilling a habit down into the parts which do the cognitive heavy lifting looks like

This seems critical. The description given is very vague relative to the actual cognitive steps that could happen for specific conclusions. "Retraining" oneself in 30 seconds would be something quite different from what we usually mean by training.

Eliezer wrote The 5-Second Level. 

If you identify a bad 5-second step, 30 seconds gives you six training runs in which you can go through the step to train it.


I interpreted "retrain myself to perform only those steps over the course of 30 seconds" to mean that after training for n seconds/minutes/hours, he could solve an equivalent problem in 30 seconds (via the distilled steps). You seem to interpret it to mean that the training takes 30 seconds, and the length of time to solve the problem after training is unspecified. 

I don't know which it is, the wording seems ambiguous.

I would also like to see this. As it is, I’m not sure what the OP is even describing. (As noted in a sibling comment, description is very vague.)

If I take this claimed strategy as a hypothesis (that radical introspective speedup is possible and trainable), how might I falsify it? I ask because I can already feel myself wanting to believe it's true and personally useful, which is an epistemic red flag. Bonus points if the falsification test isn't high cost (e.g. I don't have to try it for years).


Eh, I feel like this is a weird way of talking about the issue.

If I didn't understand something and, after a bunch of effort, I managed to finally get it, I will definitely try to summarize the key lesson to myself. If I prove a theorem or solve a contest math problem, I will definitely pause to think "OK, what was the key trick here, what's the essence of this, how can I simplify the proof".

Having said that, I would NOT describe this as asking "how could I have arrived at the same destination by a shorter route". I would just describe it as asking "what did I learn here, really". Counterfactually, if I had to solve the math problem again without knowing the solution, I'd still have to try a bunch of different things! I don't have any improvement on this process, not even in hindsight; what I have is a lesson learned, but it doesn't feel like a shortened path.

Anyway, for the dates thing, what is going on is not that EY is super good at introspecting (lol), but rather that he is bad at empathizing with the situation. Like, go ask EY if he never slacks on a project; he has in the past said he is often incapable of getting himself to work even when he believes the work is urgently necessary to save the world. He is not a person with a 100% solved, harmonic internal thought process; far from it. He just doesn't get the dates thing, so assumes it is trivial.

Having said that, I would NOT describe this as asking "how could I have arrived at the same destination by a shorter route". I would just describe it as asking "what did I learn here, really".

I mean, yeah, they're different things.  If you can figure out how to get to the correct destination faster next time you're trying to figure something out, that seems obviously useful.

"Lesson overall" can contain idiosyncratic facts that you can learn iff you run into the problem and try to solve it; you can't know them (assuming you are human and not AIXI) in advance. But you can ask yourself "how would someone with a better decision-making algorithm solve this problem, having the same information I had before I tried to solve it?" and update your decision-making algorithm accordingly.


I've spent the past few weeks independently interested in this concept (before mesaoptimizer posted it, actually). I reread the Eliezer tweet while investigating "deliberate practice for solving Confusing Problems™".

I still have a lot of open questions on "how do you actually do this effectively?" and "how long does it take to pay off in 'you actually think faster?'". But I've at least transitioned from "I feel like there's no way I could have 'thought it faster'" to "I observe specific earlier moments where I failed to notice clues that could have pointed me at the right solution" and "I've identified skills I could have had that would have made it possible to identify and act on those clues."

I've personally gotten mileage from writing out in detail what my thought process was, and then writing out in detail "what's the shortest way I could imagine a superintelligence or someone 40 IQ points higher than me would have reliably done it?". The process currently takes me ~30 minutes.

A thing I haven't attempted yet is:

Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.

I'm interested in other people trying this and seeing if useful stuff falls out.

I’d like to see some of your answers to "what's the shortest way I could imagine a superintelligence or someone 40 IQ points higher than me would have reliably done it?". Including notes about the process you used before asking yourself that.

When I ask myself the question, it’s not generative at all.


I thought everybody did this. It seems like the only way to get better at certain things, like computer programming. Every time you do something and it takes a while (including realizing something), you try to figure out how you could've done the cognitive labor a little quicker.

I’ve gotten better at computer programming (as demonstrated by the fact that I used to not know how to code and now I can code pretty well), and not only have I never done anything that sounds like this, I am not sure I even understand what it would mean to do this. (Is it just “optimize your workflow on a task”? If so, then it seems very mis-described. Or is it something else?)

  1. Do a task that feels like it should have taken 3 hours in 6 hours
  2. Think about what mistakes you made (maybe I should have tested this functionality, before attempting to build that entire system out)
  3. Turn it into a larger lesson (if cheap, create small test programs instead of writing 2500 new lines and debugging all of them in one pass)
  4. Apply the larger lesson going forward

I am not sure what you mean by step #1 (when something “feels like it should” take some amount of time, but ends up taking more time, it’s generally not because I made some mistake, but rather because my initial “feeling” turned out to be mistaken about how much time the task “should” take—which is not shocking, as such “feelings” are necessarily probabilistic).

The rest of it seems like… learning from mistakes and optimizing your practices/workflows/etc. based on experience. Is that what you’re talking about?

I confess that I’m still confused about how any of this could be described as “how could I have thought that faster”. Eliezer writes about “retrain[ing] [himself] to perform only those steps over the course of 30 seconds”, and… that just does not seem like it has anything to do with what you’re describing? Am I missing some analogy here, or what?

I also thought that it was very common. I would say it's necessary for competition math.

How often do you do this per week?

Not everybody does this. Another way to get better is just to do it a lot. It might not be as efficient, but it does work.

Self-plug, but I think this is similar to the kind of reflection process I tried to describe in "Kolb's: an approach to consciously get better at anything".

Isn't this a normal thing all humans do? "What did I intend, what actually happened, where can I improve?" along with a quick cost-benefit analysis.

I think the difference between what you are describing and what is meant here is captured in this comment:

There's a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say "I'm such a fool; I should have bet 23".

More useful would be to say "I'm such a fool; I should have noticed that the EV of this gamble is negative." Now at least you aren't asking for magic lottery powers.

Even more useful would be to say "I'm such a fool; I had three chances to notice that this bet was bad: when my partner was trying to explain EV to me; when I snuck out of the house and ignored a sense of guilt; and when I suppressed a qualm right before placing the bet. I should have paid attention in at least one of those cases and internalized the arguments about negative EV, before gambling my money." Now at least you aren't asking for magic cognitive powers.
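The negative-EV claim in the parable is easy to make concrete. A minimal sketch, assuming an American-style wheel (38 pockets) and the standard 35:1 single-number payout; the function name is my own, not from the source:

```python
def single_number_roulette_ev(payout: int = 35, pockets: int = 38) -> float:
    """Expected profit per unit staked on a single-number roulette bet."""
    p_win = 1 / pockets
    # Win: gain `payout` units; lose: forfeit the 1-unit stake.
    return p_win * payout - (1 - p_win) * 1

ev = single_number_roulette_ev()
print(f"EV per $1 bet: {ev:.4f}")  # roughly -0.0526, i.e. a ~5.3% house edge
```

No knowledge of the winning number is needed to see the bet is bad; the EV is negative before the wheel ever spins, which is exactly the "less information" point.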


Thanks for giving a useful example. 

For most people I guess it would be better to delete the phrase "I'm such a fool" from the evaluation, in order to avoid self-blame that becomes a self-image.

This sure does update me towards "Yudkowsky still wasn't good enough at pedagogy to have made 'teach people rationality techniques' an 'adequately-covered thing by the community'".

Do you mean "If EY were good enough, we would have known this trick many years ago"?

That's technically a different update from the one I'm making. However, I also update in favor of that, as a propagation of the initial update. (Assuming you mean "good enough" as "good enough at pedagogy".)
