Trying to understand my own cognitive edge

by Wei Dai
3rd Nov 2025
5 min read
17 comments
TsviBT (8d)

A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year/decade delays in important approaches and conclusions, and can't be overcome by others even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically enhancing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.

This is non-researched speculation, but my guesses would be:

  • There are many cognitive dimensions that importantly affect performance in one or another important domain.
  • Most of these effects are substantively, though far from completely, fungible with more IQ. In other words, to make up a totally fictional example, you could have someone with IQ 130 and a lot of calm-spacious-attuned-nimble-empathy, who is able to follow along with another person as they struggle through conflicting mental elements, and to help that person untangle themselves by inserting relevant tricks, tools, perceptions, etc., while being sensitive to things that might be upsetting, etc. etc. On the other hand, you could have someone with IQ 155, and only somewhat of this calm-spacious-attuned-nimble-empathy; and they basically perform as well as the first therapist at the overall task of helping a client come out of the therapy session with more thriving, on a better cognitive trajectory by their own lights, etc. Even though Therapist 2 has somewhat less of this intuitive following-along with the client than Therapist 1, Therapist 2 is able to make up for that by generating more varied hypotheses more quickly, "manually" updating faster, thinking of better tools, and communicating more clearly. (See the toy sketch just after this list.)
  • If you get a lot more people with really high IQs, you also get a bunch more people who are [high on other important cognitive traits, and also high IQ]. (How relevant this argument is, depends on what the numbers look like--how quick is the uptake of, say, reprogenetics technology, how high is the threshold of IQ and of other cognitive dimensions for a given performance, etc.)
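
Here is that toy sketch: a made-up score function and made-up numbers, purely illustrative of "substantively but far from completely fungible", not anything established. Two quite different profiles land at roughly the same overall level:

```python
# Toy fungibility model (made-up functional form and numbers, purely
# illustrative): either factor can partly, but not fully, substitute
# for the other.
import math

def performance(iq, other_trait):
    """Log-linear toy score: diminishing returns in both factors."""
    return 0.6 * math.log(iq) + 0.4 * math.log(other_trait)

therapist_1 = performance(iq=130, other_trait=9)   # lower IQ, more of the other trait
therapist_2 = performance(iq=155, other_trait=6)   # higher IQ, somewhat less of it
print(round(therapist_1, 2), round(therapist_2, 2))  # ~3.80 vs ~3.74: roughly comparable
```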

Anyway, I definitely would want to genomically vector for these other traits, e.g. wisdom, but it's harder. I do think that argues in favor of working on psychometrics for personality traits as a higher marginal priority than IQ; I think that argument goes through pretty strongly. (Though some people have expressed special worry about personality traits--some of which, e.g. obedience/agreeability, might be targets for oppressive regimes. IDK what to think of that; it feels "far" / unlikely / outlandish, but I don't want to be dismissive and haven't thought about it enough.) But, I think

  • The hardest part of any of this is the biotech, not the psychometrics. A crash course to get strong reprogenetics would be really hard and expensive and might not work; a crash course on psychometrics would probably somewhat work, well enough to get significant chunks of benefit. (But, not confident of any of that.)
  • Even if you can just vector for IQ, that's still very positive in EV (though my belief here has substantial "instability", i.e. EV>0 has cruxes with high volatility on their probabilities, or something).
Darklight (8d)

I've read a lot of your posts in the past and find you to be reliably insightful. As such, I find it really interesting that with an IQ in the 99th percentile (or higher), you still initially thought you weren't good enough to do important AI safety work. While I haven't had my IQ properly tested, I did take the LSAT and got a 160 (80th percentile), which is probably around an IQ of merely 120ish. I remember reading a long time ago that the average self-reported IQ of Less Wrongers was 137, which, combined with the extremely rigorous posting style of most people on here, was quite intellectually intimidating (and still is to an extent).

This makes me wonder if there's much point in my own efforts to push the needle on AI safety. I've interviewed in the past with orgs and not gotten in, and occasionally done some simple experiments with local LLMs and activation vectors (I used to work in industry with word vectors so I know a bunch about that space), but actually getting any sort of position or publishable result seems to be very hard. I've had the thought a lot of times that the resources that could go to me would be better off given to a counterfactual more talented researcher/engineer, as it seems like AI safety is more funding constrained than talent constrained.

Morpheus (8d)

A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year/decade delays in important approaches and conclusions, and can't be overcome by others even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically enhancing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.

Since I have been working on germline engineering, I have been thinking about the same thing. My intuition is that if I could magically increase everyone's IQ by 5 points, that would result in a marginally saner world. But creating a few babies with ~160+ IQs doesn't seem obviously beneficial. Even if 3 out of 4 run into coordination problems etc., what if the fourth one decides to work at OpenAI or Anthropic on capabilities, because working on capabilities is just so much more exciting? If GWAS for personality were working better, I'd be more optimistic about selecting for something like the capacity to gather wisdom? With editing, you could also go for cognitively enhanced clones of people we consider wise (the main bottleneck with this option would be PR). The problem is that I am going to disagree with people about whom we consider wise. Although perhaps we can all agree that we would not like to clone the CEOs of the AGI labs. Perhaps really good education would also help; I'm not an expert on education.

interstice (9d)

Just spitballing here, but one thing that strikes me about a lot of your ideas is that they seem correct but impractical. So for example, yes it seems to be the case that a rational civilization would implement a long pause on AI, in a sense that's even "obvious", but in practice, it's going to be very hard to convince people to do that. Or yes, in theory it might be optimal to calculate the effect of all your decisions on all possible Turing machines according to your mathematical intuition modules, but in practice that's going to be very difficult to implement. Or yes, in theory we can see that money/the state are merely an arbitrary fixed-point in what things people have agreed to consider valuable, but it's gonna be hard to get people to adopt a new such fixed-point.

So the question arises, why are there so few people with a similar bent towards such topics? Well, because such speculations are not in practice rewarded, because they are impractical! Of course, you can sometimes get large rewards from being right about one of these, e.g. Bitcoin. But it seems like you captured a lot less of the value from that than you could have, such that the amount of resources controlled by people with your cognitive style remains small. Perhaps that's because getting the rewards from one of those large sparse payoffs still depends on a lot of practical details and luck.

Yet another way of formulating this idea might be that the theoretically optimal inference algorithm is a simplicity prior, but in practice that's impossible to implement, so people instead use approximations. In reality most problems we encounter have a lot of hard-to-compress detail, but there is a correspondingly large amount of "data" available (learned through other people/culture perhaps), so the optimal approximation ends up being something like interpolation from a large database of examples. But that ends up performing poorly on problems where the amount of data is relatively sparse (but for which there may be large payoffs).
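
A toy contrast along these lines (my own illustration, not a formalization of the above): one learner scores hypotheses with a crude complexity penalty, the other memorizes examples and predicts from the nearest one; the gap between them shows up exactly on queries far from the stored data.

```python
# Toy contrast (illustration only): a learner with a crude simplicity bias
# over hypotheses vs. a learner that memorizes examples and predicts from
# the nearest one.
import numpy as np

rng = np.random.default_rng(0)

def true_rule(x):
    return 3 * x - 1                               # simple underlying regularity

train_x = np.array([-2.0, -1.0, 1.0, 2.0])         # sparse "experience"
train_y = true_rule(train_x) + rng.normal(0, 0.1, size=train_x.shape)

def simplicity_learner(xq):
    """Fit polynomials of increasing degree; penalize complexity, pick the best."""
    best_score, best_coeffs = None, None
    for degree in range(4):
        coeffs = np.polyfit(train_x, train_y, degree)
        err = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
        score = err + 0.5 * degree                 # crude stand-in for a simplicity prior
        if best_score is None or score < best_score:
            best_score, best_coeffs = score, coeffs
    return np.polyval(best_coeffs, xq)

def interpolation_learner(xq):
    """Predict the y-value of the nearest stored example (1-nearest-neighbor)."""
    return train_y[np.argmin(np.abs(train_x - xq))]

xq = 5.0                                           # far from all stored examples
print(true_rule(xq), simplicity_learner(xq), interpolation_learner(xq))
# ~14 vs ~14 vs ~5: the memorizer can only return answers it has already seen.
```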

So this then raises the question of how cognitive styles that depend on large but sparse rewards can defend/justify themselves to styles that benefit from many small consistent rewards.

Wei Dai (9d)

In terms of financial rewards, those also way exceeded my early expectations. Independent of Bitcoin, I was able to retire (stop working at a paying job) comfortably in my late 20s from a software product I developed earlier, and during the last 5 years I multiplied my NW by 8x: first by putting 10% of NW into S&P 500 puts shortly before the COVID market crash, which doubled my NW, with the rest coming from trading full time for 2-3 years (starting as a complete beginner who had only invested in index funds before, and putting my intellectual interests almost completely aside for that time) during the post-COVID market craziness, and from the general subsequent market rise.

So this then raises the question of how cognitive styles that depend on large but sparse rewards can defend/justify themselves to styles that benefit from many small consistent rewards.

I think the results (both intellectual and financial) speak for themselves?

interstice (9d)

I think the results (both intellectual and financial) speak for themselves?

I mean, it still seems to be the case that people with a less philosophical style control vastly more resources/influence, and are currently using them to take what are from your perspective insanely reckless gambles on AGI, no? I'm saying that from an ecological perspective this is due to those cognitive styles being more useful/selected-for [well, or maybe they're just "easier" to come up with and not strongly selected against] on more common "mundane" problems where less philosophical reflection is needed (abstractly, because those problems have more relevant "training data" available).

Wei Dai (9d)

I think in terms of wealth, it's just because there's a lot more of them to start with (so you end up with a much larger number of outliers with high wealth who could invest in AGI), but on a per-person basis, it seems hard to argue that my cognitive style isn't financially very rewarding.

But in terms of gaining influence, it does seem that my style is terrible (i.e., being largely ignored for up to a decade or two on any given topic). Seems like an important point to bring up and consider, so thanks.

interstice (8d)

I think in terms of wealth, it's just because there's a lot more of them to start with

Ah yes, but why is that the case in the first place? Surely it's due to the evolutionary processes that make some cognitive styles more widespread than others. But yeah, I think it's also plausible that there is net selection pressure for this and there just hasn't been enough time (probably the selection processes are changing a lot due to technological progress as well...).

interstice (9d)

Another thing which I wasn't sure how to fit in with the above. I framed the neglect of your "philosophizing" cognitive style as being an error on the world's part, but in some cases I think this style might ultimately be worse at getting things done, even on its own terms.

Like with UDT or metaphilosophy my reaction is "yes, we have now reached a logical terminus of the philosophizing process, it's not clear how to make further progress, so we should go back and engage with the details of the world in the hope that some of them illuminate our philosophical questions". As a historical example, consider that probability theory and computability theory arose from practical engagement with games of chance and calculations, but they seem to be pretty relevant to philosophical questions (well, to certain schools of thought anyway). More progress was perhaps made in this way than could've been made by people just trying to do philosophy on its own.

Mateusz Bagiński (9d)

the main thing that appears to have happened is that I had exceptional intuitions about what problems/fields/approaches were important and promising

I'd like to double-click on your exceptional intuitions, though I don't know what questions would be most revealing if answered. Maybe: could you elaborate on what you saw that others didn't see and that made you propose b-money, UDT, the need for an AI pause/slowdown, etc?

E.g., what's your guess re what Eliezer was missing (in his intuitions?) such that he came up with TDT but not UDT? Follow-up: Do you remember what the trace was that led you from TDT to UDT? (If you don't, what's your best guess as to what it was?)

Wei Dai (9d)

b-money: I guess most people working on crypto-based payments were trying to integrate with the traditional banking system, and didn't have the insight/intuition that money is just a way for everyone to "keep tabs" on how much society as a whole owes to each person (e.g. for previous services rendered), and that therefore a new form of money (i.e. not fiat or commodity) could be created and implemented as a public/distributed database or ledger.
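
To make the "money as a shared ledger of what society owes each person" framing concrete, here is a minimal sketch (an illustration only, not the b-money protocol, which also specifies money creation, contract enforcement, and how the ledger is replicated): a public mapping from accounts to balances, updated only by transfers authorized by the payer.

```python
# Minimal "money as a shared ledger" sketch (illustration only; not the actual
# b-money design). HMAC is used as a stand-in for real public-key signatures.
import hashlib
import hmac

class Ledger:
    def __init__(self):
        self.balances = {}   # account name -> units society "owes" that account
        self.keys = {}       # account name -> verification key

    def open_account(self, name, key, initial=0):
        self.balances[name] = initial
        self.keys[name] = key

    def transfer(self, payer, payee, amount, signature):
        """Apply a transfer only if it is authorized by the payer and covered."""
        message = f"{payer}->{payee}:{amount}".encode()
        expected = hmac.new(self.keys[payer], message, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise ValueError("bad signature")
        if self.balances[payer] < amount:
            raise ValueError("insufficient balance")
        self.balances[payer] -= amount
        self.balances[payee] += amount

ledger = Ledger()
ledger.open_account("alice", b"alice-key", initial=10)
ledger.open_account("bob", b"bob-key")
sig = hmac.new(b"alice-key", b"alice->bob:4", hashlib.sha256).hexdigest()
ledger.transfer("alice", "bob", 4, sig)
print(ledger.balances)   # {'alice': 6, 'bob': 4}
```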

UDT: I initially became interested in decision theory for a very different reason than Eliezer. I was trying to solve anthropic reasoning, and tried a lot of different ideas but couldn't find one that was satisfactory. Eventually I decided to look into decision theory (as the "source" of probability theory) and had the insight/intuition that if the decision theory didn't do any updating then we could sidestep the entire problem of anthropic reasoning. Hal Finney was the only one to seriously try to understand this idea, but couldn't or didn't appreciate it (in fairness my proto-UDT was way more complicated than EDT, CDT, or the later UDT, because I noticed that it would cooperate with its twin in one-shot PD, and added complications to make it defect instead, not questioning the conventional wisdom that that's what's rational).

Eventually I got the idea/hint from Eliezer that it can be rational to cooperate in one-shot PD, and also realized my old idea seemed to fit well with what Nesov was discussing (counterfactual mugging), and this caused me to search for a formulation that was simple/elegant and could solve all of the problems known at the time, which became known as UDT.
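
As a toy rendering of the one-shot twin PD point (an illustration only, not a formal statement of UDT): a policy-selection agent evaluates whole policies under the assumption that its exact copy runs the same policy, while an agent that treats the twin's move as a fixed background fact defects.

```python
# Toy twin Prisoner's Dilemma (illustration only, not a formal decision theory).
# Payoff to the row player: (my_move, twin_move) -> utility.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def policy_selection_choice():
    """Pick the policy that scores best given the twin runs the *same* policy."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, move)])

def causal_style_choice(assumed_twin_move="D"):
    """Treat the twin's move as a fixed background fact and best-respond to it."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, assumed_twin_move)])

print(policy_selection_choice())   # C  (both copies cooperate, payoff 3 each)
print(causal_style_choice())       # D  (defection dominates against any fixed move)
```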

I think Eliezer was also interested in anthropic reasoning, so I think he was missing my move to look into decision theory for inspiration/understanding and then making the radical call that maybe anthropic reasoning is unsolvable as posed, and should be side-stepped via a change to decision theory.

need for an AI pause/slowdown: I think I found Eliezer convincing when he started talking about the difficulty of making AI Friendly and why others likely wouldn't try hard enough to succeed, and just found it implausible that he, with a small team, could win a race against the entire world, which was spending much less effort/resources on trying to make their AIs Friendly. Plus I had my own worries early on that we needed to either solve all the important philosophical problems before building AGI/ASI, or figure out how to make sure the AI itself is philosophically competent, and both are unlikely to happen without a pause/slowdown (partly because nobody else seemed to share this concern or talked about it).

Mateusz Bagiński (9d)

Thanks!

The entire thing seems to have very https://www.lesswrong.com/posts/bhLxWTkRc8GXunFcB/what-are-you-tracking-in-your-head vibes, though that's admittedly not very specific.

What stands out to me in the b-money case is that you kept tabs on "what the thing is for"/"the actual function of the thing"/"what role it is serving in the economy", which helped you figure out how to make a significant improvement.

Very speculatively, maybe something similar was going on in the UDT case? If the ideal platonic theory of decision-making "should" tell you and your alt-timeline-selves how to act in a way that coheres (~adds up to something coherent?) across the multiverse or whatever, then it's possible that having anthropics as the initial motivation helped.

foodforthought (3d)

It seems impractical to recommend that someone spend a few years in cryptography

That's too literal. How about: always try to play chess/tennis with someone better than you, when you can. Get your early training -- when it's your full-time job to study -- in the field you struggle to keep up in, not the one you clearly dominate in. Be the small fish in a big pond. You learn most from people who are better than you. You will learn epistemic humility and self-skepticism. You hopefully learn not to stake your ego on your genius. You learn how to make yourself useful to people you want to learn from.

(But don't pick the field you can't keep up in at all; you won't learn anything when you are utterly lost; you'll likely be demoralized by the attempt, even if everyone is kind; and you risk being a true burden, toward whom kindness won't always be extended).

foodforthought (3d)

Are there others who could make a similar claim of having exceptionally good and hard to explain intuitions

Yes. PM me.

foodforthought (3d)

Not only is it rare but there seems to be a surprisingly large gap between my intuitions and the next closest person's.

This is a natural consequence of having many interests and switching fields many times; you bring a unique set of factual knowledge, concepts, heuristics, theoretical frameworks, or insights to the problem at hand. You pay the cost of likely not being the most expert or most competent in most or all of the fields you care about, but you reap the benefit of having uncommon insights, which tend to be orthogonal to the rest of the field (and perhaps for this reason incomprehensible to them).

Weak analogy: If reality could be mapped onto one orthogonal basis set, it's as if each specialization only has a few basis functions; the good fields have at least one basis function that explains a large fraction of the power in at least the signal set [dimension of reality] they study. But other parts of reality that don't happen to project onto their set are entirely invisible to them. Interdisciplinary field-switchers get to accumulate basis functions as they go, so they always have some at their disposal that are orthogonal to the set in use within the current discipline. The more disparate two disciplines are, the less overlap in their endogenous basis sets (but perhaps that is a tautological statement, a definition of 'disparate').
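
A small numerical toy of this analogy (my own construction, just to make the geometry concrete): a "reality" vector, two disciplines that each see only the power falling in their own few basis functions, and a field-switcher who has accumulated both sets.

```python
# Toy version of the "each field sees only its own basis functions" analogy
# (illustration of the comment above, not a claim about real fields).
import numpy as np

rng = np.random.default_rng(1)
dim = 8
reality = rng.normal(size=dim)                 # the underlying signal

basis = np.eye(dim)                            # one shared orthogonal basis
field_a = basis[:, [0, 1, 2]]                  # discipline A's basis functions
field_b = basis[:, [3, 4]]                     # discipline B's basis functions
switcher = np.hstack([field_a, field_b])       # field-switcher has both sets

def explained_power(subspace, signal):
    """Fraction of the signal's power captured by projecting onto the subspace."""
    proj = subspace @ (subspace.T @ signal)    # columns are orthonormal here
    return float(proj @ proj) / float(signal @ signal)

print("field A sees:  ", explained_power(field_a, reality))
print("field B sees:  ", explained_power(field_b, reality))
print("switcher sees: ", explained_power(switcher, reality))
```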

foodforthought (3d)

It seems hard to explain using anything we know from cognitive science. Standard explanations for good intuitions include that they're distilled from extensive prior experience or reasoning, but I moved from field to field and as a result was often a newcomer.

This could be exactly why you were able to have exceptionally good intuitions; see https://www.lesswrong.com/posts/Afdohjyt6gESu4ANf/most-people-start-with-the-same-few-bad-ideas?commentId=JBHcLYgk77c7vxeGk

marcohmuniz (9d)

I think that, beyond grit and intelligence, you also display high openness to experience, which has been shown to increase, or at least highly correlate with, research productivity.

Trying to understand my own cognitive edge

I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat", i.e., the cognitive traits that are responsible for whatever success I've had, in the hope of finding a way to reproduce it in others, but I'm having trouble understanding a part of it, so I try to describe my puzzle here. For context, here's an earlier EAF comment explaining my history/background and what I do understand about how my cognition differs from others.[1]

More Background

In terms of raw intelligence, I think I'm smart but not world-class. My SAT was only 1440, 99th percentile at the time, or equivalent to about 135 IQ. (Intuitively this may be an underestimate and I'm probably closer to 99.9th percentile in IQ.) I remember struggling to learn the GNFS factoring algorithm, and then meeting another intern at a conference who had not only mastered it in the same 3 months that I had, but was presenting an improvement on the SOTA. (It generally seemed like cryptography research was full of people much smarter than myself.) I also considered myself lazy or not particularly hardworking compared to many of my peers, so didn't have especially high expectations for myself.

(An illustration of this is that when I, as a freshman CS major, became worried about eventual AI takeover after reading Vernor Vinge's A Fire Upon the Deep, I thought I wasn't smart or conscientious enough to contribute to a core field like AI safety, i.e., that there would eventually be plenty of people much smarter and harder working than me contributing to it. As a result I didn't even take any AI courses, but instead decided to focus my education and career on applied cryptography, as a way to contribute to reducing AI x-risk from the periphery, by increasing overall network security.)

The Puzzle

It seems safe to say that I exceeded[2] my own expectations, and looking back, the main thing that appears to have happened is that I had exceptional intuitions about what problems/fields/approaches were important and promising, and then used my high but not world-class intelligence to pick off some low-hanging fruit or stake out some positions destined to become popular later. Others ignored them for a long time, even after I published my ideas. In several cases they were ignored for so long that I had given up hope of getting significant validation or positive feedback for them, until they were eventually rediscovered and/or made popular by others.

The questions that currently puzzle me:

  1. Do I (or did I) have a real cognitive ability, or is there a non-cognitive explanation, or just luck? (One hypothesis that's hard to rule out but not very productive is that I'm in a game or simulation.)
  2. If I do, how does it work and why is it so rare? It seems hard to explain using anything we know from cognitive science. Standard explanations for good intuitions include that they're distilled from extensive prior experience or reasoning, but I moved from field to field and as a result was often a newcomer.
  3. Not only is it rare, but there seems to be a surprisingly large gap between my intuitions and the next closest person's. For example, I've been talking about how philosophical problems are likely to be a bottleneck for AI alignment/x-safety for more than 2 decades, while others until very recently have either ignored this line of thought, or think they have some ready solution for metaphilosophy or AI philosophical competence (which they either don't write down in enough detail for me to evaluate, or which just don't seem very good to me). Similarly, with b-money, my pre-LW proto-UDT ideas, and my early position that stopping AI development and increasing human intelligence should be plan A, I was intellectually almost completely alone for many years.[3]
  4. Are there others who could make a similar claim of having exceptionally good and hard to explain intuitions, but have/had different interests from me, so I've never heard of them?

A Plausible Answer?

It occurs to me as I'm writing this, that maybe what I have (or had) is not exceptionally good intuitions, but good judgment that comes from a relatively high baseline reasoning ability and knowledge base, buffed by a lack of the usual cognitive distortions, specifically overconfidence (which leads to a tendency to latch onto the first seemingly good idea that one thinks of, instead of being self-skeptical and trying hard to find flaws in one's own ideas) and institutional pressures/incentives that result from one's employment. 

My self-skepticism probably came from my early career in cryptography, where often the only way to minimize the risk of public humiliation is to scrupulously examine one's own proposals for potential flaws, and overconfidence is quickly punished. Security proofs are often not possible, or are themselves potentially flawed, e.g. due to the use of wrong assumptions or models. Also, the flaws are often extremely subtle and difficult to find, but hard to deny once pointed out, further incentivizing self-skepticism and scrutiny.

My laziness may have paradoxically helped, by causing me to avoid joining the usual institutions that someone with my interests might have joined (e.g. academia and other research institutes) and to instead pursue a "pressure-free" life of thinking about whatever I want to think about and saying whatever I want to say.

(This life probably has its own cognitive distortions, e.g., related to status games that people play in online discussion forums, but perhaps they're different enough from the usual cognitive distortions that I was able to see a bunch of blind spots that other people couldn't see.)

Re-reading my 2-year-old EAF comment (copied as footnote [1] below), I see that I had already mentioned my self-skepticism and financial/organizational independence as factors in my intellectual success, but apparently I still felt like there was a puzzle to be explained. Perhaps the main realization/insight of this post is that the effect size from a combination of these 2 factors could be large enough to explain/constitute all or most of my "edge", and there may not be a further mystery of "exceptionally good intuitions" that needs to be explained.

I'll probably keep thinking about this topic, and welcome any thoughts or perspectives from others. It's also not quite clear what practical advice to draw from this, assuming my "plausible answer" is true. It seems impractical to recommend that someone spend a few years in cryptography, but I'm not sure if anything less onerous than that would have a similar effect, nor can I say with any confidence that even such experience will produce the same kind of general and deep-seated self-skepticism that it apparently did in me. Being financially/organizationally independent also seems impractical or too costly for most people to seriously pursue. I would welcome any suggestions on this front (of practical advice) as well.

One implication that occurs to me is that if the advantages of these cognitive traits accumulate multiplicatively (as they seem to), then the cost of gaining the last piece of the puzzle might be well worth paying for someone who already has the others. E.g., if someone already has a >99th percentile IQ, wide-ranging intellectual background and interests, and one of self-skepticism and independence, then the marginal value of gaining the other trait might be very high and hence worth its cost.
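
A toy calculation with made-up numbers (only to illustrate the multiplicative point, not estimates of real effect sizes):

```python
# Made-up factors, only to show why the last missing trait can have the
# highest marginal value if advantages multiply rather than add.
traits = {"iq": 2.0, "breadth": 1.5, "self_skepticism": 2.0, "independence": 1.5}

def edge(have):
    """Multiplicative model: total edge is the product of the traits you have."""
    total = 1.0
    for name, factor in traits.items():
        if name in have:
            total *= factor
    return total

baseline = edge({"iq", "breadth", "self_skepticism"})              # missing independence
with_all = edge({"iq", "breadth", "self_skepticism", "independence"})
print(baseline, with_all, with_all - baseline)                     # 6.0 9.0 3.0
# Adding the same factor to someone with only high IQ gains far less in absolute terms:
print(edge({"iq", "independence"}) - edge({"iq"}))                 # 1.0
```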

A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year/decade delays in important approaches and conclusions, and can't be overcome by others even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically enhancing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.

  1. ^

    Copying here for completeness/archival purposes:

    I thought about this and wrote down some life events/decisions that probably contributed to becoming who I am today.

    • Immigrating to the US at age 10 knowing no English. My social skills deteriorated while learning the language, which, along with a lack of cultural knowledge, made it hard to make friends during my teenage and college years, which gave me a lot of free time that I filled by reading fiction and non-fiction, programming, and developing intellectual interests.
    • Was heavily indoctrinated with Communist propaganda while in China, but leaving meant I then had no viable moral/philosophical/political foundations. Parents were too busy building careers as new immigrants and didn't try to teach me values/traditions. So I had a lot of questions that I didn't have ready answers to, which perhaps contributed to my intense interest in philosophy (ETA: and economics and game theory).
    • Had an initial career in cryptography, but found it a struggle to compete with other researchers on purely math/technical skills. Realized that my comparative advantage was in more conceptual work. Crypto also taught me to be skeptical of my own and other people's ideas.
    • Had a bad initial experience with academic research (received nonsensical peer review when submitting a paper to a conference) so avoided going that route. Tried various ways to become financially independent, and managed to "retire" in my late 20s to do independent research as a hobby.

    A lot of these can't really be imitated by others (e.g., I can't recommend people avoid making friends in order to have more free time for intellectual interests). But here is some practical advice I can think of:

    1. Try to rethink what your comparative advantage really is.
    2. I think humanity really needs to make faster philosophical progress, so try your hand at that even if you think of yourself as more of a technical person. Same may be true for solving social/coordination problems. (But see next item.)
    3. Somehow develop a healthy dose of self-skepticism so that you don't end up wasting people's time and attention arguing for ideas that aren't actually very good.
    4. It may be worth keeping an eye out for opportunities to "get rich quick" so you can do self-supported independent research. (Which allows you to research topics that don't have legible justifications or are otherwise hard to get funding for, and pivot quickly as the landscape and your comparative advantage both change over time.)

    ETA: Oh, here's a recent LW post where I talked about how I arrived at my current set of research interests, which may also be of interest to you.

  2. ^

    Copying my main accomplishments here:

    • Created the first general purpose open source cryptography programming library (Crypto++, 1995), motivated by AI risk and what's now called "defensive acceleration".
    • Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
    • Proposed UDT, combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals.
    • First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
    • Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.
  3. ^

    With the notable exceptions of Nick Szabo, who invented his BitGold at nearly the same time as b-money; Cypherpunks, who thought b-money was interesting/promising but didn't spend much effort developing it further; and Hal Finney, who perhaps paid the most attention to my ideas pre-LW, including by developing RPOW, trying to understand my early decision theory ideas, and writing up UDASSA in a publicly presentable form.