When I'm talking to somebody, sometimes I lose the conversational thread. Or sometimes I feel like there is this thing I want to say that seems relevant now, but I just can't remember it.

Or maybe I'm trying to solve a particular problem. I throw myself at it again and again, but it just won't budge. Then, after a long time, possibly hours, I realize that the solution was extremely simple, and that I simply failed to execute the kind of solution-finding algorithm that would have found this very simple solution quickly.

I would expect people with more intelligence to perform better in these domains. They probably have an easier time remembering and retaining the right things. That alone might be sufficient to explain a large chunk of what makes a more intelligent person perform better.

If you quickly remember the right things that are relevant in the moment, and if you can keep track of more things in your head at the same time without losing track of what they were, then that might account for a large chunk of how an intelligent person is better at performing any particular task.

The core point here is that I think everybody, even somebody much smarter than me, can see various failure modes in their own cognition and realize that these might be so fundamental that there is no direct way of changing them.

I'm pretty sure that at some level what sorts of things your brain spits out into your consciousness and how useful that information is in the given situation, is something that you can't fundamentally change. I expect this to be a hard-coded algorithm, and I expect there to be many such hard-coded cognitive processes that can't be changed (at least not in major ways).

The cognitive improvements that you can apply will be at a higher level. To me, it seems that this is what much of the Sequences are about. You can understand that there is something like the sunk cost fallacy, and understanding what it is allows you to train yourself to recognize when you fall prey to it (though that is a separate step from understanding what you actually need to do to get most of the benefit).

And the way you would do this is, for example, by using TAPs (trigger-action plans). In a sense, TAPs are a way to install a very small hook into your brain, in the programming sense. My current model is that you install a little watcher program that watches your sensory input streams and your internal model of the world. When it detects a specific pattern, it triggers the execution of another algorithm. The interesting thing is that if you do this well, all of this becomes subconscious. So it's not that you can't change your subconscious algorithms at all: TAPs are a way to install tiny new subconsciously executed algorithms into your brain.
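The hook metaphor above can be sketched in code. This is only a toy illustration of the trigger-action structure; the names (`Watcher`, `observe`, the sunk-cost trigger phrase) are all invented for the example, not part of any real TAP framework.

```python
# Toy sketch of a TAP ("trigger-action plan") as a watcher hook:
# a predicate scans incoming percepts, and a planned action fires on a match.
from typing import Callable, Optional


class Watcher:
    """A tiny 'hook' installed over a percept stream."""

    def __init__(self, trigger: Callable[[str], bool], action: Callable[[str], str]):
        self.trigger = trigger  # the pattern to detect
        self.action = action    # the algorithm to execute on detection

    def observe(self, percept: str) -> Optional[str]:
        # Fire the installed action only when the trigger pattern matches.
        if self.trigger(percept):
            return self.action(percept)
        return None


# Example TAP: "When I notice I'm defending a past investment,
# ask whether I would still choose this if starting fresh."
sunk_cost_tap = Watcher(
    trigger=lambda p: "already invested" in p,
    action=lambda p: "Ask: would I still choose this if starting from scratch?",
)

for percept in ["the weather is nice", "but we already invested so much!"]:
    result = sunk_cost_tap.observe(percept)
    if result:
        print(result)
```

With enough repetition, the analogous mental check is supposed to run without conscious attention, which is what the post means by installing a small subconscious algorithm.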

So let me give an example of an algorithm that I think is largely unchangeable. Let's do an experiment. Please follow the bolded instructions:

Imagine a car in your mind's eye.

Now, your brain will probably have brought some specific car to mind. Maybe it's a Porsche 911, a Cybertruck, or some other Tesla. The point is that based on reading the word car, your brain pulled a lot more information out of its depths, such as an image or a "feeling of carness". How is your mind doing this? You read the word car, and your mind produces some qualia associated with the concept of a car.

Now think of a car part.

Now what did you imagine? A steering wheel, a door, an engine, a cylinder, a wheel, a windshield, an antenna? Notice that one thing came to mind first, and then maybe another. But how did your brain generate that specific thing? Why did you think of, for example, a tire instead of a door? When I do this experiment, I have no introspective access to what is going on at the low level. How does this retrieval algorithm work, the one that retrieves information stored in the brain that is associated with being a car part?

The general version of this exercise goes as follows.

1) Imagine anything. 2) Then imagine something related to that thing. 3) Repeat from step 2.
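One way to caricature this exercise is as a random walk over an association graph. This is a toy model only: the graph, its entries, and the restart rule are all made up for illustration, and nothing here is claimed about how the brain actually stores associations.

```python
# Toy model of the free-association exercise: repeatedly retrieve
# something related to the current concept. All data is invented.
import random

associations = {
    "car": ["car part", "road", "driver"],
    "car part": ["wheel", "engine", "door"],
    "wheel": ["tire", "bicycle"],
    "engine": ["fuel", "piston"],
}


def associate(concept: str, rng: random.Random) -> str:
    # Step 2: retrieve something related; restart at "car" if the
    # current concept has no stored associations.
    neighbors = associations.get(concept)
    return rng.choice(neighbors) if neighbors else "car"


rng = random.Random(0)
thought = "car"            # Step 1: imagine anything.
chain = [thought]
for _ in range(5):         # Step 3: repeat from step 2.
    thought = associate(thought, rng)
    chain.append(thought)
print(" -> ".join(chain))
```

In this toy, "learning new things" corresponds to adding edges to the dictionary: the lookup procedure itself never changes, only the data it walks over, which mirrors the point made below about changing the algorithm's input rather than the algorithm.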

The opaqueness of this algorithm makes me think that you can't change it, at least not in a major, direct way. You can of course change its input: you can learn new things and think in specific ways to see new connections between things. In the future, the semantic lookup algorithm will use that newly created knowledge, and you might be better at thinking in particular contexts. But the retrieval algorithm itself has not fundamentally changed.

I'm very unsure how different this specific algorithm is for somebody who is much smarter. Is there actually some important difference between their retrieval algorithm and mine? Possibly not.

But whatever makes somebody smarter than me, I expect it to largely depend on various "low-level hardware" configurations that can't be changed. There are people who are better than me at not losing the thread in complicated technical discussions. And this seems clearly not very improvable, because it doesn't depend on anything you have learned. It is not as if acquiring some piece of knowledge would make me better at not losing a conversational thread. Of course, there are techniques you might apply at a higher cognitive level that would let you better keep track of what you are talking about. But I am imagining two people who use the same techniques (which by default would be no technique at all to manage a conversation), where the only difference is their intelligence.

To be clear, here I mean something very specific with intelligence. The kind of intelligence that doesn't depend on what you know. I think there could be somebody in this sense, who is much smarter but behaves much more irrationally. They might do so because they lack various pieces of knowledge, including knowledge of algorithms that can be used to evaluate and generate new knowledge.

Monkeys and Bananas

I'm not quite sure why I have been writing this article. I just started and then didn't stop. I think I have thought about this issue in the past because I had not met somebody who was obviously smarter than me until I was 25. It was sort of a shock to realize that there are other people who could be smarter than me.

One piece of advice I can give for handling this realization well: recognize that even if you are not playing with the best character stats, you can still play the game. And you can play it well.

Just because there is somebody who is smarter than you, who works on some specific topic, doesn't mean that you shouldn't work on it. You should work on the thing where you can make the largest positive difference.

Just imagine a civilization of monkeys. It's extremely important to these monkeys that they breed lots of different varieties of bananas such that they will never get tired of eating bananas. An average monkey researcher in the field of banana breeding can create a new kind of banana in 10 years with an average taste score of 5.

Now, some monkey researchers are not only faster at creating new types of bananas, but they also on average create better-tasting bananas. Imagine that you are literally the worst monkey researcher that exists. On average you will create less tasty bananas, and you will take longer than the slowest monkey researcher already working on creating new bananas.

Does this mean you shouldn't become a banana breeder? Well, if your civilization is in dire turmoil because your monkey brethren constantly get tired of eating bananas, because there are just not enough varieties, then it might be very obvious that this is the most impactful use of your time. You just need to make sure that your counterfactual impact is actually positive. Doing unethical experiments, say, trying to figure out how to turn monkeys into extremely tasty bananas, might actually be worse than doing nothing, if you fundamentally care about all monkeys being happy.
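The parable's "counterfactual impact" point reduces to a one-line subtraction; the numbers below are invented purely to illustrate the sign of the comparison.

```python
# Toy counterfactual-impact check for the banana-breeder parable.
# All numbers are made up for illustration.

def counterfactual_impact(value_if_you_act: float, value_if_you_dont: float) -> float:
    # Your impact is the difference you make, not your absolute output.
    return value_if_you_act - value_if_you_dont


# Even the worst breeder adds varieties nobody else would have created:
print(counterfactual_impact(value_if_you_act=2.0, value_if_you_dont=0.0))

# Unethical shortcuts can make acting worse than doing nothing:
print(counterfactual_impact(value_if_you_act=-3.0, value_if_you_dont=0.0))
```

The comparison against "doing nothing", rather than against the best breeder, is what makes even the worst researcher's work worthwhile in the parable.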


11 comments

Just because there is somebody who is smarter than you, who works on some specific topic, doesn't mean that you shouldn't work on it. You should work on the thing where you can make the largest positive difference. [...]

I think you address an important point. Especially among people who are attracted to LessWrong, there tends to be a lot of identification with one's cognitive abilities. Realizing that there are other people who are just significantly more capable can be emotionally difficult.

For me, one important realization was that my original emotions around this kind of thing assumed a competition where not winning was actually negative. When I grokked that a huge fraction of these super capable people are actually trying to do good things, it helped me shift towards mostly being glad when I encounter such people.

Also, The Point of Trade and Being the (Pareto) Best in the World are good posts which emphasize that "contributing value" needs far fewer assumptions/abilities than one might think.

For me, one important realization was that my original emotions around this kind of thing assumed a competition where not winning was actually negative.

Isn't that what we were trained for? I mean, school is a competition, capitalism is a competition; even many forms of cooperation include a hidden threat that you might be replaced by someone who is better to cooperate with.

(Uh, I am not really satisfied with what I wrote here, but I cannot improve it at the moment, and it feels to me that it's pointing towards something important, so I am leaving it here.)

You are right. Somehow I had failed to attribute this to culture. There clearly are lots of systems with a zero-sum competitive mentality.

Compared to the US, the German school and social system seems significantly less competitive to me (from what I can tell, living only in the latter). There still is a lot of competition, but my impression is that there are more niches which provide people with slack.

Despite a little tear in the ego, it has some advantages not to be the smartest in the room. Having read LW a couple of years ago, and now bit by bit picking up some of the stuff that happened back then and in the meantime, the dominant feeling for me isn't envy of the capability of others, but relief that there are enough minds in the world entertaining thoughts that at least partially reflect my own shallow ones on topics that I deem important but have no one to talk to about.

Unless it involves some mathematics that I am not accustomed to, it feels as if I can follow and understand the majority of thoughts shared on this platform. That provides me with enough encouragement to delve deeper into the topics I am interested in, maybe even enough to work more actively on myself. In any case, it is more than I could hope for hanging out with the guys in the village, where most talk is about cars and gossip.

Yet, when asking myself what I possibly could contribute, the only things that I can come up with are sharing highly subjective experiences of my mind falling apart during manic psychoses, stitching myself together in the aftermath, and making a decent living job-wise and in terms of family life in spite of receiving very pessimistic prognoses for my future when I was in the process of rehabilitation.

I wish I could give more, but I spent much of my youth playing video games and hanging out with people for senseless fun. Now, regardless of the degrees I've received, I feel much less educated and capable than I could be.

No one is responsible for his genetic predispositions or his upbringing, but one is responsible for how one acts in relation to the values one holds. I fell short on focused self-actualization, so maybe I'm now the dumbest among the smart.

Now, what is the advantage that I have been talking about? I can probably learn from most of you in a mostly passive, consuming style, without being obliged (by my morals) to contribute more than this mostly anecdotal comment.

Regarding the issue of hard-coded algorithms determining one's cognitive abilities: I'm not sure about the malleability of deep cognitive processes in general, but I find that I have undergone significant changes in meta-cognitive abilities as a result of recovery from psychosis. It has the character of some pre-conscious feeling about different trains of subconscious processes, involving both emotions and thoughts. Sometimes it extends to over-reflection that loses hold of some concrete issue and becomes circular; sometimes it helps to steer thinking and general behaviour in a desired direction. I cannot remember having said impressions before falling ill and recovering. Thinking about these abilities gives me the impression that highly automated and reinforced sub-conscious routines might not be easily changeable to the more effective or efficient by themselves, but they might be integrated into some higher-order routines, changing their eventual output. These could be more and more automated themselves, thereby achieving an increase in fluid intelligence.

Thinking about these abilities gives me the impression that highly automated and reinforced sub-conscious routines might not be easily changeable to the more effective or efficient by themselves, but they might be integrated into some higher-order routines, changing their eventual output. These could be more and more automated themselves, thereby achieving an increase in fluid intelligence.

I definitely think that one can become better at understanding and steering the world by improving one's cognitive algorithms. I am just saying that there are some low-level ones that can't be changed, so improvement needs to happen at a higher level. This puts some hard limits on how much smarter you can get, and on how much effort it takes to gain one unit of smartness.

On the point that you are not sure what you could even do, I just want to ask: did you try? The most common failure mode seems to be not even trying. Another common failure mode to avoid is having the wrong expectations about how hard something is, and then giving up because it is so much harder than expected. The hardness is, I guess, some indication of intelligence: some people will find doing math much easier than others, just because they are smarter.

But if you are trying to do something very hard, it might make sense to consider how somebody smart would feel doing it. Would they also struggle and find it difficult, because the problem before you is just intrinsically difficult? If you don't think this thought explicitly, the default implicit assumption is that what you are doing is easy for everybody else who does it. "Writing a book is a struggle" is what I once heard a professional author say. Authors are authors not necessarily because writing is a cakewalk for them. More often than not, I would think, it is because they have managed to tune their expectations to reality, so that they no longer feel bad for taking the actually required time to complete whatever task they are working on.

I found what you said about the pre-conscious feeling interesting. It made me slightly improve my model of how to avoid procrastination and depression. Normally I only procrastinate when I feel pretty down (at least the kind of "hardcore" procrastination where you do something that is definitely not productive at all, such as watching a movie or playing a video game). The problem is that once I am in the state of feeling down, it is hard to execute a strategy that will actually make me feel better. For example, doing regular sports and meditation seems to help enormously with changing my general mood for the better, but once I feel really down, these things are really hard to do. So what you need to do is develop the habit of carefully paying attention to your experience, and notice when you are on a downward spiral before you have fallen so low that interventions become difficult to implement. You then still need to actually implement the intervention, of course, but becoming sensitive to subtle emotional trends (which I am still not as good at as I would like) seems to be >25% of the battle.

I think I've read something about the value of seemingly procrastinating behaviour a while ago. Right now, I have plenty of work to do, yet I am reading your reply and answering. Is this lost time, or procrastination as commonly understood? I don't think so. It seems like meaningful exchange to me. And maybe updating my own self-model with the help of others is exactly what I need right now to do better work later.

As for the feeling that something is going wrong with me: increased awareness of the downward spiral does not easily translate into being able to stop or transform the process. It's part of my daily struggles.

Writing a message in which you reflect on whether writing it is procrastination is probably an indicator that it is at least not the worst form of procrastination. The worst form would be entering a mental state where you don't think, and in some sense really don't want to think, about whether you are procrastinating, because whatever you're doing makes you feel good, provides escapism, and some part of your brain doesn't want that to go away.

At least that's my experience.

The longer and harder you think about whether something is procrastination, and still conclude that it isn't, the stronger the evidence that it isn't procrastination (especially if you're trying to correct for biases).

I'm pretty sure that at some level what sorts of things your brain spits out into your consciousness and how useful that information is in the given situation, is something that you can't fundamentally change. I expect this to be a hard-coded algorithm

Tune Your Cognitive Strategies purports to offer a technique which can improve that class of algorithm significantly.

Edit: Oh no, you meant a different thing, and this probably goes into the "inputs to the algorithm" category?

I do think that tuning cognitive strategies (and practice in general) is relevant to improving the algorithm.

Practically hard-coded vs. Literally hard-coded

My introspective impression is less that there are "hard-coded algorithms" in the hardware-vs.-software sense, and more that it is practically impossible for humans to make major changes.

Our access to unconscious decision-making is limited and there is a huge amount of decisions which one would need to focus on. I think this is a large reason why the realistic options for people are mostly i) only ever scratching the surface for a large number of directions for cognitive improvement, or ii) focussing really strongly on a narrow topic and becoming impressive in that topic alone[1].

Then, our motivational system is not really optimizing for this process and might well push in different directions. Our motivational system is part of the algorithm itself, which means there is a bootstrapping problem: people with unsuited motivations will never be motivated to change their way of thinking.

Why this matters

Probably we mostly agree on what this means for everyday decisions.

But with coming technology, some things might change.

  • Longevity/health might make more long-term improvement worthwhile (probably not enough by itself unless we reach astronomical lifespans)
  • Technology might become more integrated into the brain. It does not seem impossible that "Your memory is unreliable. Let us use some tech, put it under a significant training regimen, and make it reliable" will become possible at some point.
  • Technologies like IVF could give "average people" a higher starting point with regard to self-reflection and reliable cognition.

Also, this topic is relevant to AI takeoff. We do perceive this in-principle possibility of significant improvement in our cognition, but notice that in practice current humans are not capable of pulling it off. This suggests that beings somewhat beyond our cognitive abilities might hit this threshold and then execute the full cycle of reflective self-improvement.

  1. I think this is the pragmatic argument for thinking in separate magisteria ↩︎

I have a personal anecdote you might find interesting.

All through elementary school I seemed to be the smartest kid in every situation. Not surprising: my IQ scores came in around 145 which puts me at +3 standard deviations or in the 99.7th percentile, or put another way I should only expect to encounter 3/1000 people who are as smart or smarter than me. The entire population of the school was < 800 kids across all grades with about 120 in my grade, so not unexpected that I never met anyone as smart.

It wasn't until middle school, in 8th grade, that I met someone definitely smarter than me. To make matters worse, he was 2 years younger than me, in 6th grade. But thankfully that was just in math (he was the only person to solidly outperform me on my school's Mathcounts team). So I was able to keep up the charade that I was the smartest kid in school, and there was just this dweeb who was some sort of math savant. So much cope.

I was able to keep this up through high school. There'd be kids who were smarter than me in some narrow domain but I was able to hold onto the idea that I might well be the smartest of them all in general.

Then I met @Eliezer Yudkowsky and was humbled. I mean, not at first. It took a few years of seeing him operate up close (can you be up close online?), but I eventually had to accept that I was outclassed. And of course I should be: there are ~24 million people in the world who should have the same or higher IQ than me, and that's a helluva lot of people. I'm just a +3 scrub living in the +4's world.

Only, not quite. As I eventually learned, being smarter, at least for humans, is not always correlated with better life outcomes. I saw people I was smarter than doing better than me: getting promotions ahead of me, making more money, etc. Since I was young I'd put all my eggs in the IQ basket, and then sometime in my mid-twenties I found out that was a mistake for all but a tiny minority of people.

As you note, I had to learn how to make the most of my comparative advantage. And this has only become more important as I've aged because my fluid intelligence has definitely started to fall off despite trying my best to prevent it. Without the help of something like ChatGPT, I may well never write better code than I did in the past or come up with more clever proofs of mathematical propositions. So I've really leaned into finding other ways to excel, because there's always going to be someone younger, smarter, and faster than me. And at least for now, that's enough.

Indeed. I think about this type of thing often when I consider the concept of superhuman AI - when I spend hours stuck on a problem with a simple solution or forget something important, it’s not hard to imagine an algorithm much smarter than me that just doesn’t make those mistakes. I think the bar really isn’t that high for improving substantially on human cognition. Our brains have to operate under very strict energy constraints, but I can easily imagine a machine which performs a lot better than me by applying more costly but effective algorithms and using more precise memory. A pocket calculator is the trivial case, but I expect most of the other algorithms my brain uses can also be improved a lot given a larger energy and compute budget.