A large chunk - plausibly the majority - of real-world expertise seems to be in the form of illegible skills: skills/knowledge which are hard to transmit by direct explanation. They’re not necessarily things which a teacher would even notice enough to consider important - just background skills or knowledge so ingrained that they become invisible.

I’ve recently noticed a certain common type of illegible skill which I think might account for the majority of illegible-skill-value across a wide variety of domains.

Here are a few examples of the type of skill I have in mind:

  • While operating a machine, track an estimate of its internal state.
  • While talking to a person, track an estimate of their internal mental state - emotions, engagement, thoughts/worries, true motivations, etc.
  • While writing an algorithm, track a Fermi estimate of runtime.
  • While reading or writing math, track a prototypical example of what the math is talking about.
  • While playing a competitive game, track an estimate of the other players’ plans, intentions, and private information.
  • While writing, track an estimate of the mental state of a future reader - confusion, excitement, eyes glossing over, etc.
  • While reasoning through a difficult search/optimization problem, track an estimate of which constraints are most taut.
  • While working on math, physics, or a program, track types/units.
  • While working on math, physics, or a program, track asymptotic behavior.
  • While in conversation, track ambiguous tokenization for potential jokes.
  • While presenting to a crowd, track engagement level.
  • While absorbing claims/information, track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality.

The common pattern among all these is that, while performing a task, the expert tracks some extra information/estimate in their head. Usually the extra information is an estimate of some not-directly-observed aspect of the system of interest. From outside, watching the expert work, that extra tracking is largely invisible; the expert may not even be aware of it themselves. Rarely are these mental tracking skills explicitly taught. And yet, based on personal experience, each of these is a central piece of performing the task well - arguably the central piece, in most cases.

Let’s assume that this sort of extra-information-tracking is, indeed, the main component of illegible-skill-value across a wide variety of domains. (I won’t defend that claim much; this post is about highlighting and exploring the hypothesis, not proving it.) What strategies does this suggest for learning, teaching, and self-improvement? What else does it suggest about the world?

Pay Attention To Extra Information Tracking

I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) – disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, ‘False!’

- Feynman

A lot of people have heard Feynman’s “hairy green ball thing” quote. It probably sounds like a maybe-useful technique to practice, but not obviously more valuable than any of a dozen other things.

The hypothesis that extra-information-tracking is the main component of illegible-skill-value shines a giant spotlight on things like Feynman’s examples technique. It suggests that a good comparison point for the value of tracking a prototypical example while reading/writing math is, for instance, the value of tracking the probable contents of opponents’ hands while playing poker.

More generally: my guess is that most people reading this post looked at the list of examples, noticed a few familiar cases, and thought “Oh yeah, I do that! And it is indeed super important!”. On the other hand, I’d also guess that most people saw some unfamiliar cases, and thought “Yeah, I’ve heard people suggest that before, and it sounds vaguely useful, but I don’t know if it’s that huge a value-add.”.

The first and most important takeaway from this post is the hypothesis that the unfamiliar examples are about as important to their use-cases as the familiar examples. Take a look at those unfamiliar examples, and imagine that they’re as important to their use-cases as the examples you already use.

Ask “What Are You Tracking In Your Head?”

Imagine that I’m a student studying under Feynman. I know that he’s one of the great minds of his generation, but it’s hard to tell which things I need to pick up. His internal thoughts are not very visible. In conversation with mathematicians, I see him easily catch errors in their claims, but I don’t know how he does it. I could just ask him how he does it, but he might not know; a young Richard Feynman probably just implicitly assumes that everyone pictures examples in their head, and has no idea why most people are unable to easily catch errors in the claims of mathematicians!

But if I ask him “what were you tracking in your head, while talking to those mathematicians?” then he’s immediately prompted to tell me about his hairy green ball thing.

More generally: for purposes of learning/teaching, the key question to ask of a mentor is “what are you tracking in your head?”; the key question for a mentor to ask of themselves is “what am I tracking in my head?”. These extra-information-tracking skills are illegible mainly because people don’t usually know to pay attention to them. They’re not externally-visible. But they’re not actually that hard to figure out, once you look for them. People do have quite a bit of introspective access into what extra information they’re tracking. We just have to ask.

Returns to Excess Cognitive Capacity

Mentally tracking extra information is exactly the sort of technique you’d expect to benefit a lot from excess cognitive capacity, i.e. high g-factor. Someone who can barely follow what’s going on already isn’t going to have the capacity to track a bunch of other stuff in parallel.

… which suggests that extra-information-tracking techniques are particularly useful investments for people with unusually high g. (Hint: this post is on LW, so “unusually high g” probably describes you!) They’re a way to get good returns out of excess cognitive capacity.

The same argument also suggests a reason that teaching methods aren’t already more focused on mentally tracking extra information: such techniques are probably more limited for the median person. On the other hand, if your goal is to train the great minds of the next generation, then figuring out the right places to invest excess cognitive capacity is likely to have high returns.

Other Examples?

Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well? If the hypothesis is right, there are probably high-value mental-tracking techniques which some, but not all, people reading this already use. Please share!


Comments

This post is probably right that illegible skills rely on tracking non-obvious bits of information. But I don't think that discovering that info is as simple as asking "What Are You Tracking in Your Head?". Remember that there's a lot of inferential distance between you and an expert, and they've likely forgotten all that you don't know.

Thankfully the problem of getting tacit knowledge out of someone has a growing literature on it that is quite useful. The field of Naturalistic Decision Making developed some techniques to do this, one of which is fairly simple. It is called Applied Cognitive Task Analysis. Here's a summary of it from CommonCog [1]:

There are four techniques in ACTA, and all of them are pretty straightforward to put to practice:

  1. You start by creating a task diagram. A task diagram gives you a broad overview of the task in question and identifies the difficult cognitive elements. You'll want to do this at the beginning, because you'll want to know which parts of the task are worth focusing on.
  2. You do a knowledge audit. A knowledge audit is an interview that identifies all the ways in which expertise is used in a domain, and provides examples based on
... (read more)
On first glance, CommonCog looked kinda MBA-flavored bullshitty (especially alongside the ACTA thing, which also sounds MBA-flavored bullshitty). But after reading a bit, it is indeed pretty great! Thanks for the link.
I'd be very sceptical of applying something like this on experts in a rich-domain/somewhat-pre-paradigmatic field like, say, conceptual alignment. Their expertise is their particular set of tools. And in a rich domain like this, there are likely to be many other tools that lets you work on the problems productively. Even if you concluded that the paradigmatic tools seem most suited for the problems, you may still wish to maximise the chance that you'll end up with a productively different set of tools, just because they allow you to pursue a neglected angle of attack. If you look overmuch to how experts are doing it, you'll Einstellung yourself into their paradigm and end up hacking at an area of the wall that's proven to be very sturdy indeed.
For pre-paradigmatic fields, I agree that the insights you extract have a good chance of not being useful. But if you have some people who are talking past each other because they can't understand each other's viewpoints, then I would expect this sort of thing to help make both groups legible to one another. Which is certainly true of the AI safety field. And communicating each other's models is precisely what is being advocated now, and by the looks of it, not much progress has been made. To me, it is pretty plausible that Yudkowsky's purported knowledge is tacit, given his failures to communicate it so far. Hence, I think it would be valuable if someone tried ACTA on Yudkowsky. He seems to be focusing on communicating his views and giving his brain a break, so now would be a good time to try.

I was thinking you had all of mine already, since they're mostly about explaining and coding. But there's a big one: when using tools, I'm tracking something like "what if the knife slips?". When I introspect, it's represented internally as a kind of cloud-like spatial 3D (4D?) probability distribution over knife locations, roughly co-extensional with "if the material suddenly gave or the knife suddenly slipped at this exact moment, what's the space of locations the blade could get to before my body noticed and brought it to a stop?". As I apply more force this cloud extends out, and I notice when it intersects with something I don't want to get cut. (Mutatis mutandis for other tools, of course. I bet people experienced with firearms are always tracking a kind of "if this gun goes off at this moment, where does the bullet go" spatial mental object.)

I notice I'm tracking this mostly because I also track it for other people and I sometimes notice them not tracking it. But that doesn't feel like "Hey you're using bad technique", it feels like "Whoah your knife probability cloud is clean through your hand and out the other side!"

I was explicitly taught to model this physical thing in a wood carving survivalist course.

Seems at least partially related to cognitive apprenticeship, a type of teaching that aims at explicitly making the teacher's "thinking visible", so that the pupils can find out what it is that the teacher is tracking at any moment when solving the problem. For instance, they might carry out an assignment in front of pupils and try to explicitly speak out loud their thoughts while doing it.

For instance, when writing an essay:


(Suggested by students)

Write an essay on the topic “Today’s Rock Stars Are More Talented than Musicians of Long Ago.”


I don’t know a thing about modern rock stars. I can’t think of the name of even one rock star. How about, David Bowie or Mick Jagger… But many readers won’t agree that they are modern rock stars. I think they’re both as old as I am. Let’s see, my own feelings about this are… that I doubt if today’s rock stars are more talented than ever. Anyhow, how would I know? I can’t argue this… I need a new idea… An important point I haven’t considered yet is… ah… well… what do we mean by talent? Am I talking about musical talent or ability to entertain—to do acrobatics? Hey, I may have a way into this topic. I could develop t

... (read more)

I tried to follow my own thoughts on the polynomial example. They were pretty brief; the whole problem took only a few seconds. Basically:

  • Whelp, roots are a PITA
  • Can I transform Q(x) in a way which swaps the order of the coefficients?
    • Pattern match: yup! I've done that before. Divide by x^n.
  • Oh I see, roots of one will be reciprocals of roots of the other.

... so I guess +1 point for the "bag of tricks" model of expertise.
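The divide-by-x^n trick generalizes: reversing a polynomial's coefficients corresponds to forming x^n Q(1/x), which sends each nonzero root to its reciprocal. A quick numerical sanity check of this, with a made-up quadratic standing in since the original Q(x) isn't quoted in this thread:

```python
import numpy as np

# Made-up example: Q(x) = 2x^2 - 5x + 1 (nonzero constant term).
q = [2.0, -5.0, 1.0]

# Reversing the coefficients corresponds to x^n * Q(1/x).
q_reversed = q[::-1]  # x^2 - 5x + 2

roots = np.roots(q)
reversed_roots = np.roots(q_reversed)

# Each root of the reversed polynomial is the reciprocal of a root of Q.
assert np.allclose(sorted(1.0 / r for r in roots), sorted(reversed_roots))
```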

There are a lot of 1/4's instead of 1/r in the formulas (I guess you wrote some of them with 1/4 initially, then replaced it but overlooked some).

Kinda surprised you didn't mention purpose-tracking, for while you're trying to do a thing--any thing. Arguably the most important skill I acquired from the Sequences, and that's a high bar.

"Your sword has no blade. It has only your intention. When that goes astray you have no weapon."

In resource management games, I typically have a set of coefficients in my head for the current relative marginal values of different resources, and my primary heuristic is usually maximizing the weighted sum of my resources according to these coefficients.

In combat strategy games, I usually try to maximize (my rate of damage x maximum damage I can sustain before I lose) / (enemy rate of damage x damage I need to cause before I win).

These don't seem especially profound to me.  But I've noticed a surprising number of video games that make it distressingly hard to track these things; for instance, by making it so that the data you need to calculate them is split across three different UI screens, or by failing to disclose the key mathematical relationships between the public variables and the heuristics I'm trying to track.  ("You can choose +5 armor or +10 accuracy.  No, we're not planning to tell you the mathematical relationship between armor or accuracy and observable game outcomes, why do you ask?")

It's always felt odd to me that there isn't widespread griping about such games.
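Under one reading of its grouping, the combat heuristic above can be sketched as code; the function name and numbers below are made up for illustration:

```python
# A minimal sketch of the combat-race heuristic (one reading of its
# grouping). A score above 1 roughly means "I win the damage race":
# I finish dealing the required damage before the enemy finishes me.
def fight_score(my_dps, my_effective_hp, enemy_dps, damage_needed):
    # (my rate of damage x damage I can sustain before I lose)
    # / (enemy rate of damage x damage I need to cause before I win)
    return (my_dps * my_effective_hp) / (enemy_dps * damage_needed)

# Made-up numbers: I deal 12 dps and can absorb 100 damage; the enemy
# deals 8 dps and falls after 150 damage.
print(fight_score(12, 100, 8, 150))  # 1.0 -> a dead-even race
```

This matches a time-to-kill comparison: I win when my_effective_hp / enemy_dps exceeds damage_needed / my_dps, i.e. exactly when the score exceeds 1.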

As a result of reading this post, I have started explicitly tracking two hypotheses ... (read more)

Linda Linsefors (9mo):
I'm very certain that your hypotheses are correct. Most people play to have fun, not to win. Winning is instrumental to fun, but for most people it is not worth the cost of doing some math, which is anti-fun. I like math in general, but I still would not make this explicit calculation, because it is the wrong type of math for me to enjoy. (Not saying it is wrong for you to enjoy it, just that it's unusual.) I think that making the game design such that it is hard or impossible to do the explicit math is a feature. Most people don't want to do the math. The math is not supposed to be part of the game. Most people don't want the math nerds to have that advantage, because then they'll have to do the math too, or lose.
That seems like it could only potentially be a feature in competitive games; yet I see it all the time in single-player games with no obvious nods to competition (e.g. no leaderboards). In fact, I have the vague impression games that emphasize competition tend to be more legible--although it's possible I only have this impression from player-created resources like wikis rather than actual differences in developer behavior. (I'll have to think about this some.)

Also, many of these games show an awful lot of numbers that they don't, strictly speaking, need to show at all. (I've also played some games that don't show those numbers at all, and I generally conclude that those games aren't for me.) Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.

Also, in several cases the developers have been willing to explain more of the math on Internet forums when people ask them. Which makes it seem less like a conscious strategy to withhold those details and more that it just didn't occur to them that players would want them.

There certainly could be some games where the developers are consciously pursuing an anti-legible-math policy, but it seems to me that the examples I have in mind do not fit this picture very well.
Causal Chain (9mo):
 > Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player. When I imagine a game which offers "+armor" or "+accuracy" vs a game which offers "+5 armor" or "+10 accuracy", the latter feels far more comfortable even if I do not intend to do the maths. I suspect it gives something for my intuition to latch onto, to give me a sense of scale.
Do you mean that it's more comfortable because you feel it provides some noticeable boost to your ability to predict game outcomes (even without consciously doing math), or is it more of an aesthetic preference where you like seeing numbers even if they don't provide any actual information? (Or something else?)

If you're applying a heuristic anything like "+10 accuracy is probably bigger than +5 armor, because 10 is bigger than 5", then I suspect your heuristic is little better than chance. It's quite common for marginal-utility-per-point to vary greatly between stats, or even within the same stat at different points along the curve.

If you're strictly using the numbers to compare differently-sized boosts to the same stat (e.g. +10 accuracy vs +5 accuracy) then that's reasonably safe.
Causal Chain (9mo):
The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting; I don't know what fraction of it is aesthetics - I'd say a poorly calibrated 30%. Like maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I am in that situation as long as I don't test that belief. I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense, which has accelerating returns - and knowing the size of each step helps inform this.
That seems opposed to what Linda Linsefors said above: you like the idea that you could calculate an answer if you chose to, while Linda thinks the inability to calculate an answer is a feature. (Nothing wrong with the two of you wanting different things. I am just explicitly de-bucketing you in my head.)

My model says that the trend in modern games is towards defense having diminishing returns (or at least non-escalating returns), as more developers become aware of that as a thing to track. I think of armor in WarCraft 3 as being an early trendsetter in this regard (though I haven't gone looking for examples, so it could be that's just the game I happened to play rather than an actual trendsetter).

I am now explicitly noticing this explanation implies that my model contains some sort of baseline competence level of strategic mathematics in the general population that is very low by my standards but slowly rising, and that this competence is enough of a bottleneck on game design that this rise is having noticeable effects. This seems to be in tension with the "players just don't want to multiply" explanation.
You wouldn't have that in reality either, and in reality, the relationship would be even more complicated. I think a fair compromise would be to give you a simplified relationship like "+1 armor increases the damage it can absorb by 20%" when it is more complicated than that (min/max damage, non-linearity).
Many years ago, I used to think it would be great if a game gave you just the information that you would have had "in reality" and asked you to make decisions based on what "would realistically work". After trying to play a bunch of games this way, I no longer think this is a sensible approach. Game rules necessarily ignore vast swathes of reality, and there's no a priori way to know what they're going to model and what they're going to cut. I end up making a bunch of decisions optimized around presumed mechanics that turn out not to exist, while ignoring the ones that actually do exist, because the designer didn't happen to model exactly the same things that I guessed they'd model.

Fundamentally, losing a game because you made incorrect guesses about the rules is Not Fun. (For me.)

My current philosophy is that rules should usually be fully transparent, and I've found that any unrealism resulting from this really doesn't bother me. My primary exception to this philosophy is if the game is specifically designed so that figuring out the rules is part of the game, which I think can be pretty neat if done well, but requires a lot of work to do well. In most of the games I've played where the rules were not transparent, it didn't look (to me) like they were trying to build gameplay around rules-discovery, or carefully calculating the optimum amount of opacity; it looked (to me) like they just ignored the issue, and the game (in my opinion) suffered for it.

Also, "in reality", if there were important stakes, and you didn't know the rules, you'd probably do a lot of experimentation to learn the rules. You can do this in games, too, but in most games this is boring and I'd rather just skip to the results.

Another example from Feynman: Besides the object level of what the math or physics described symbolically, he was tracking what that meant in real life. Not as obvious as you'd think. See, e.g., the anecdote about Brewster's angle. The most common form is Guessing the Teacher's Password - which happens if there is no spare capacity to track what all these symbols mean in real life. Tracking the symbols is difficult enough if you are at the limits of your ability (though it might also result from investing as little effort as possible to pass).   

One other thing I could never get them to do was to ask questions. Finally, a student explained it to me: "If I ask you a question during the lecture, afterwards everybody will be telling me, `What are you wasting our time for in the class? We're trying to learn something. And you're stopping him by asking a question'." It was a kind of one-upmanship, where nobody knows what's going on, and they'd put the other one down as if they did know. They all fake that they know, and if one student admits for a moment that something is confusing by asking a question, the others take a high-handed attitude, acting as if it's not confusing at all, telling him that he's wasting their time.
I explained how useful it was to work together, to discuss the questions, to talk it over, 
but they wouldn't do that either, because they would be losing face if they had to ask someone else. It was pitiful! All the work they did, intelligent people, but they got themselves into this funny state of mind, this strange kind of self-propagating "education" which is meaningless, utterly meaningless!

- Feynman

An all too common folly.

Alex Vermillion (8mo):
Damn, I just used up half a cup of sugar and the only result I got was learning sugar packs into the grooves of my pliers INCREDIBLY WELL. I will have to try again later, maybe after making some larger crystals (so that the pliers are capable of breaking them apart). Edit: Dissolving the sugar (in coldish water, just by stirring) and then letting that dry worked! Little greenish flashes. Fun
Should this reply have gone somewhere else? I don't get it. UPDATE: Ah, now I remember it. +1 for going out and actually doing the experiment!
Alex Vermillion (8mo):
The link on "anecdote about Brewster's angle" goes to a story about Richard Feynman that contains the paragraphs:

Nice post!

One of my fears is that the True List is super long, because most things-being-tracked are products of expertise in a particular field and there are just so many different fields.


  • In product/ux design, tracking the way things will seem to a naive user who has never seen the product before.
  • In navigation, tracking which way north is.
  • I have a ton of "tracking" habits when writing code:
    • types of variables (and simulated-in-my-head values for such)
    • refactors that want to be done but don't quite have enough impetus for yet
    • loose ends, such as allocated-but-not-freed resources, or false symmetry (something that looks like it should be symmetric but isn't in some critical way), or other potentially-misleading things that need to be explained
    • [there are probably a lot more of these that I am not going to write down now]

I could imagine a website full of such lists, categorized by task or field. Could imagine getting lost in there for hours...

While writing, track an estimate of the mental state of a future reader - confusion, excitement, eyes glossing over, etc.

This may be true if you write a scientific paper, an essay or a non-fiction book. As a professional writer, when I write a novel, I usually don't think about the reader at all (maybe because, in a way, I am the reader). Instead, I track a mental state of the character I'm writing about. This leads to interesting situations when a character "decides" to do something completely different from what I intended her to do, as if she had her own will. I have heard other writers describe the same thing, so it seems to be a common phenomenon. In this situation, I have two options: I can follow the lead of the character (my tracking of her mental state) and change my outline or even ditch it completely, or I can force her to do what the outline says she's supposed to do. The second choice inevitably leads to a bad story, so tracking the mental state of your characters indeed seems to be essential to writing good fiction. 

I assume that readers do a similar thing, so if a character in a book does something that doesn't fit the mental model they have in mind, they often find it "unbelievable" or "unrealistic", which is one of the reasons why "listen to your characters" seems to be good advice while writing.

Not tracking types seems to be the thing that makes people susceptible to bad philosophical intuition pumps - in particular, to the referent of a token shifting as a proposition is used at different points in the problem.

Alex Vermillion (8mo):
I'm actually amazed how little it seems that most people track the definition of words in a conversation to see if they're changing. Something like the points made in Arguing "By Definition" [https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb/p/cFzC996D7Jjds3vS9] or Scott Alexander's popularization of the term "Motte and Bailey" [https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/] should be obvious. When someone makes one of these arguments to me, I am confused what is literally going on in their head. Unless the speaker does not care if their argument is sound, I have no map of what it is like to expect the switcheroo to work. In my brain, I resolve words into concepts, but it seems that 2 concepts that share the same symbols when written down are genuinely confusing to many people, suggesting this is a separate skill?
Alex Vermillion (8mo):
Goofus: "Let us, for this argument, define 'horse' to mean 'human'."
Gallant: "Alright."
Goofus: "So you accept then, that humans should wear horseshoes?"
Gallant: "What??!"

In Advanced Driving courses a key component was (and may still be - it's been a while) commentary driving. You sit next to an instructor and listen to them give a commentary on everything they are tracking - for instance other road users, pedestrians, road signs, bends, obstacles, occluders of vision, etc. - and how these observations affect their decision making as they drive. Then you practice doing the same, out loud, and, ideally, develop the discipline to continue practising this after the course. I found this was a very effective way of learning from an expert, and I'm sure my driving became safer because of it.

I have a couple frameworks that seem to fit into this:

One is the Greek word "kairos," which means... something kinda like "right now," but in the context of rhetoric means something more like "the present environment and mood." A public speaker, when giving a speech, should consider the kairos and tailor their speech accordingly. This cashes out in stuff like bands yelling out, "How are you tonight, Houston?!" or a comedian riffing off of a heckler. It's the thing that makes a good public speaker feel like they're not just delivering a canned speech they'v... (read more)

My comment here is a bit narrow, but re

  • While working on math, physics, or a program, track types/units

A lot of people get surprised at how quickly and easily I intuit linear algebra things, and I think a major factor is this. For linear algebra, you can think of vector spaces as being the equivalent of types/units. E.g. the rotation map for a diagonalization maps between the original vector space and the direct sum of the eigenvectors. Sort of.

It's always the first question I ask when I see a matrix or a tensor - what spaces does it map between?
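That question can be made concrete by treating shapes as types: an (m x n) matrix maps R^n to R^m, and composition only "type-checks" when the inner spaces agree. A minimal sketch (the matrices below are arbitrary placeholders):

```python
# Treating vector spaces as types: an (m x n) matrix maps R^n -> R^m.
import numpy as np

A = np.ones((3, 2))  # A : R^2 -> R^3
B = np.ones((2, 5))  # B : R^5 -> R^2

# A @ B : R^5 -> R^3 -- the "types" compose like function types.
assert (A @ B).shape == (3, 5)

# B @ A is ill-typed: A produces vectors in R^3, but B consumes R^5.
# numpy raises the runtime analogue of a type error.
try:
    B @ A
except ValueError:
    print("dimension mismatch, as the units-tracking view predicts")
```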

Oliver Sourbut (9mo):
Nice callout. I definitely think that 'typeful thinking' (units and dimensions) is a massive boost in mathematics, computer science and philosophy. One hypothetical reason is that knowing 'types' means knowing 'what you can do with it' (in particular, the manipulations you've done or witnessed on like-typed things before become generators of new insight). I think this is at least one piece of a description of how we humans do concept abstraction and recomposition in mundane and intellectual situations alike.
It has always bewildered me how you can represent multi-dimensional concepts in a two dimensional array of numbers. The position of a vector/matrix by itself can be arbitrary, but when you apply an operation on it with another vector/matrix, that's when the positions of the variables become interlocked and fixed. Then I realized the 2D array is just a form of organization rather than having intrinsic meaning regarding the vector/matrix itself.

Tracking an estimate of how warm food is while being cooked and how its consistency changes (ex. what the bottom of a slice of eggplant looks like without flipping it over).

Precisely estimating the note I will sing before singing it. I'm never totally accurate, but I find it extremely helpful in order to become more accurate.

Tracking an estimate of how my/another's body will feel after I massage it in an area/move it in a way.

Tracking an estimate of the risk of germs on my hands.

Tracking an estimate of the line that is at the center of my body's mass and r... (read more)

Curated. I think figuring out how to transfer hidden/illegible skills is a major bottleneck, and I like this post for digging into that.

I do agree with Algon's comment that simply asking an expert what they're tracking may not be good enough - some experts' brains seem nicely legible and accessible to themselves; sometimes they're tracking so much stuff it's hard to articulate.

There is an interview technique called Experiential Array which is designed to pull out this sort of information (and some other stuff too). Matt Goldenberg conducted this type of interview on me on the topic of designing and running events. This experience gave me the ability to communicate the invisible parts of event design.

Read here for more details 

Great post. Would add as an example: "While thinking about something and trying to figure out your viewpoint on it, track internal feelings of cognitive dissonance and confusion"

As a complement or intro to this technique, I find it helpful to create checklists. This helps me identify the most important items to track. I can either do it in my head, or externally via the checklist. It's often easy to come up with a reasonable checklist if you can define the topic specifically enough. Once I've created a checklist, and worked with it enough to commit it to memory, I find that new relevant information is easy to synthesize. If I encounter new information with no checklist, on the other hand, it's very hard for me to remember or make sense of it.

The world is full of scale-free regularities that pop up across topics not unlike 2+2=4 does. Ever since I learned how common and useful this is, I've been in the habit of tracking cross-domain generalizations. That bit you read about biology, or psychology, or economics, just to name a few, is likely to apply to the others in some fashion.

ETA: I think I'm also tracking the meta of which domains seem to cross-generalize well. Translation is not always obvious but it's a learnable skill.

I'm a pretty good poet. I usually don't share my poetry except with close friends, but take my word for it, my poetry is good enough that I think the majority of people who heard it would like it. What am I tracking when I write a poem?

Well, it happens almost automatically - if I have inspiration, the poem just comes out; if I don't have inspiration, I can sort of try to write something but it doesn't work. So it's a partly subconscious process already and not necessarily something that can be analyzed like this; but I know at the very least I am tracking ... (read more)

Hmm, I don't think this kind of tacit knowledge and skill is at all obvious to the holder. In most cases it's like asking a centipede how exactly it walks. Feynman was unusually introspective about this; not an easy example to follow for mere mortals.

A lot of items in your list are about modeling other agents and yourself. In an embedded-agency abstraction hierarchy this would sit near the top (model the general environment, model other agents, model self), so it's probably a recent evolutionary development, not very well entrenched in the genome; that's why we have trouble "just doing it" and need introspection to make sense of it.

Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well?

I track the age of data.

Here are a couple examples of how this is helpful:

  1. Wikipedia has case counts by country for the 2022 monkeypox outbreak. Portugal was one of the leading countries in number of confirmed cases for a while, but it has since been surpassed by others. On closer inspection, however, the numbers for Portugal haven't been updated since the 7th of July, 5 days ago. In the context of exponential growth, that matters a lot.
... (read more)
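To make that quantitative: under exponential growth with doubling time T, a count that is d days stale understates the present by a factor of 2^(d/T). A rough sketch, where the doubling time and case count are made-up illustrative numbers, not the actual monkeypox figures:

```python
# Why data age matters under exponential growth: project a stale count
# forward by the growth accumulated since it was last updated.
# All numbers here are illustrative assumptions.
import math

doubling_time_days = 10.0                     # assumed doubling time
g = math.log(2) / doubling_time_days          # continuous growth rate

def age_adjusted(count, days_stale):
    """Rough projection of a stale count to today, assuming steady growth."""
    return count * math.exp(g * days_stale)

reported = 500   # hypothetical confirmed cases, last updated 5 days ago
print(round(age_adjusted(reported, 5)))       # 707: ~41% above the stale figure
```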

While absorbing claims/information, track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality.

Can you give an example?

When I read a technical paper about an experiment/study, I track in the back of my head a best-guess of what was actually going on during the experiment/study, separate from the authors' claims and analysis. So e.g. "ok, the authors sure do seem to think Y happened, so maybe Y happened, but what else would make the authors think Y happened?". Usually this includes things like "obviously X, Y, Z would be confounders", and then checking whether the authors controlled for those things. Or "maybe the person doing this part was new to the technique and they just fucked up the experiment?". Or "they say in the abstract that they controlled for X, but the obvious way of controlling for X would not actually fully control for it". Or "this is one of those fields where the things-people-say-happened are mostly determined by political flavor, and basically not coupled to observation". Etc. More generally, when applied reflectively, "track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality" is just the fundamental technique of epistemic rationality: what do you think you know and how do you think you know it? [https://www.lesswrong.com/posts/NhQju3htS9W6p6wE6/stuff-that-makes-stuff-happen]

An "isthmus" and a "bottleneck" are opposites. An isthmus provides a narrow but essential connection between two things (landmass, associations, causal chains). A bottleneck is the same except the connection is held back by its limited bandwidth. In the case of a bottleneck, increasing its bandwidth is top priority. In the case of an isthmus, keeping it open or discovering it in the first place is top priority.

I have a habit of making up pretty words for myself to remember important concepts, so I'm calling it an "isthmus variable" when it's the thing you ... (read more)

Alexander Gietelink Oldenziel (7mo):
I like the word and I like the idea of an opposite to "bottleneck"

I track my confidence in a given step of a hypothesised chain of mathematical reasoning via a heuristic along the lines of “number of failed attempts at coming up with a counterexample”.
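For what it's worth, that heuristic can be sketched mechanically: confidence in a conjectured step scales with the number of failed random refutation attempts. The function below is a toy illustration, not a real proof-assistant workflow:

```python
# Toy sketch of "confidence via failed counterexample search".
import random

def failed_refutations(conjecture, candidates, attempts=1000, seed=0):
    """Count random candidates that fail to refute the conjecture.

    Returns early if a counterexample is found (confidence collapses).
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(attempts):
        x = rng.choice(candidates)
        if conjecture(x):       # candidate is not a counterexample
            failures += 1
        else:
            return failures     # found a counterexample
    return failures

# True claim: n^2 >= n for all integers n. Every attempt fails to refute it.
print(failed_refutations(lambda n: n * n >= n, range(-100, 101)))   # 1000

# False claim: n^2 > n. A counterexample (n = 0 or 1) usually turns up fast,
# and the failure count stops growing there.
print(failed_refutations(lambda n: n * n > n, range(-100, 101)))
```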

Another immediate question is "for what tasks don't you have to do that?", partly because one can then ask why. For example, I now think that one doesn't have to track extra information when feeding livestock. (A not-too-variable, time-consuming routine.) But I haven't yet really tracked what I do when I do that.

This post reminded me of the exercises in Calibrating with Cards, a post which very nicely advises what to pay attention to during magic practice.

minor feedback...

Mentally tracking extra information is exactly the sort of technique you’d expect to benefit a lot from excess cognitive capacity, i.e. high g-factor. Someone who can barely follow what’s going on already isn’t going to have the capacity to track a bunch of other stuff in parallel.

Note that as one practices tracking something, the effort needed to track it goes down.

I don't think it makes sense to frame this as needing excess cognitive capacity to track things. Our skill improves to the point of needing little to no excess capacity, so we only need excess cognitive capacity for new things we want to track.

Linda Linsefors (9mo):
That only works for tasks where you get to do a similar enough thing enough times. This seems true for driving, but less so for most types of research. My capacity to track information in my head varies from day to day, depending on mood, sleep, etc. I notice a clear difference in what I can and can't do depending on this. When my mental capacity is more limited, I can still absorb facts, but I struggle to follow complex reasoning or draw independent conclusions (e.g. if this fact is true, what does it predict about the world?).
Rami Rustom (9mo):
And then there's the possibility of slowing down the activity we're doing (or even chopping it up into separate phases so you can track things in between the phases), allowing more capacity to be used for tracking new things.

It may be critical to note that tracking estimates of the internal states of other entities often feels like just having a clue about what's going on. If someone asks us how we came to our intuitions, without careful introspection, we might answer with "I just know" or "I pay attention is how!" or similar.

To unpack a mundane example, here's a somewhat rambly account of some of what I'm tracking in my head while I operate a motor vehicle:

When I'm driving, I don't actively scan through all the sounds and smells and tactile events and compare them with past e... (read more)

Driving is the most complex and demanding thing the median US resident does on a regular basis. Some jobs, such as playing sports professionally, surgery, courtroom law, and high-end cooking, are probably also demanding; I've never done any of those, but they seem like good examples of needing to monitor all sorts of different things and adjust to changes. No comment on how cognitively taxing they are compared to driving.
Interesting point, I had never thought of it before.
Belatedly, it occurs to me that all that is for highway driving. Local driving requires a whole different model. Though many of the inputs come from the same places, the processing is often entirely different.

I started writing down things I am tracking.

I actually never realized I was tracking so many things.

The problem is, I rarely remember or know what to do with the tracked information.

Let's say I am trying to be engaging and have a discussion.

There could be a number of things to track, from motives to meanings to the specific reasons something is said.

Another thing to track is filling in the gaps. Let's say someone says something incomplete; when engaged, one should fill in the gaps and ask questions or find a way to follow up.

Another thing is to know yo... (read more)

This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.

When thinking about a physics problem or physical process or device, I track which constraints are most important at each step. This includes generic constraints taught in physics classes like conservation laws, as well as things like "the heat has to go somewhere" or "the thing isn't falling over, so the net torque on it must be small".

Another thing I track is what everything means in real, physical terms. If there's a magnetic field, that usually means there's an electric current or permanent magnet somewhere. If there's a huge magnetic field, that usual... (read more)

The extent to which you can benefit from asking what someone is tracking in their head, and the degree to which they can usefully explain it to you, will depend critically on how much information, basic to the topic at hand, the two of you already share.

You can learn more from a master, using this technique, the more you already know.

If “cognitive capacity” is the amount of information useful to some specific domain of problem solving one has in one’s head, then everyone on Earth has more or less the same cognitive capacity (excepting only people with some... (read more)

When programming, I track a mixed bag of things, top of which is readability: Will me-6-months-from-now be able to efficiently reconstruct the intention of this code, track down the inevitable bugs, etc.?

"Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well?"

When I try to cook something complicated by recipe, I go over each line of the recipe and previsualize all the corresponding physical actions.
I previsualize the state, amount, location and the transitions for each object. Objects = {pots, pans, ingredients, oil, condiments, package, piece of trash, volume of water, stove, task-completion times, hands, free seconds/minutes for cleaning during the cook, towel, tissue paper...}.
Th... (read more)

Alex Vermillion (8mo):
I like the (n+1)³ bit; I took the time to try it out in my head too and it was a fun puzzle. I wonder if I could actually get better at visualization by practicing problems of that difficulty level?
David Gretzschel (8mo):
Of course. Till they become too easy; then you'd need something harder. Or you practice speed, I suppose.

One answer to the question for me:

While writing, something close to "how does this 'sound' in my head naturally, when read, in an aesthetic sense?"

I've thought for a while that "writing quality" largely boils down to whether the writer has an intuitively salient and accurate intuition about how the words they're writing come across when read. 

Now that I read this, I notice that I automatically do this when I'm in school, and that it's much more automatic and frequent in subjects I find easy (I wonder whether the tracking makes them easy, or whether the lower effort frees up brain space to track?).

In history class, I always keep a mental map of when something happened, why it happened, and what resulted from it. I was very surprised when I found out none of my friends do anything similar, because it's such an obvious tool for seeing the bigger picture and remembering how things fit together f... (read more)

The example that immediately popped into my head was the difficulty I had training lawyers to conduct a jury trial.

You spend so much time building up a persuasive story to tell, but at trial most of your mental effort is elsewhere.

You have the jury, individual jurors, the judge, the witnesses, and opposing counsel, your story, the opposing story, and rules of evidence and procedure. All of these things are moving and changing at the same time.

It felt unteachable. I've said things like "it takes extreme observation".

In the main, isn't this post mostly describing the idea of tacit knowledge, albeit under a different name?


This sounds a lot like mindfulness.  :-)

Cool post :)

[anonymous] (9mo):

This is essentially what machine learning, especially ANN, is.

I think you are downvoted because it is not clear what you are referring to. Maybe you can elaborate?
This is metadata derived from something in reality, whether you are working on a computer program or working through some math. The act of working on these tasks is similar to how an ML algorithm runs: it collects data (i.e. we read the equations and form a graph of interacting variables and components; even these mental models can be considered input to the ANN), forms hidden layers, and produces output (i.e. actions: writing down a variable, adding some computation steps, etc.). The metadata the author mentions, which we track while doing these tasks, is essentially the hidden layers of the ANN. Also, everyone runs on a different ANN. I have personally used the kinds of metadata the author mentions as well, so there is some rational process behind forming these useful metrics that help us complete the tasks.