Internalizing Internal Double Crux


In sciences such as psychology and sociology, internalization involves the integration of attitudes, values, standards and the opinions of others into one's own identity or sense of self.

Internal Double Crux is one of the most important skills I've ever learned. In the last two weeks, I've solved some serious, long-standing problems with IDC (permanently, as far as I can tell, and often in less than 5 minutes), a small sample of which includes:

  • Belief that I have intrinsically less worth than others
  • Belief that others are intrinsically less likely to want to talk to me
  • Belief that attendance at events I host is directly tied to my worth
  • Disproportionately negative reaction to being stood up
  • Long-standing phobia of bees and flies

I feel great, and I love it. Actually, most of the time I don't feel amazingly confident - I just feel not bad in lots of situations. Apparently this level of success with IDC across such a wide range of problems is unusual. Some advice, and then an example.

  • The emotional texture of the dialogue is of paramount importance. There should be a warm feeling between the two sides, as if they were two best friends who are upset with each other, but also secretly appreciate each other and want to make things right.
    • Each response should start with a sincere and emotional validation of some aspect of the other side's concern. In my experience, this feels like emotional ping pong.
    • For me, resolution of the issue is accompanied by a warm feeling that rises to my throat in a bubble-ish way. My heart also feels full. This is similar to (but distinct from) the 'aww' feeling you may experience when you see cute animals.
  • Focusing is an important (and probably necessary) sub-skill.
  • Don't interrupt or otherwise obstruct one of your voices because it's "stupid" or has "talked long enough" - be respectful. The outcome should not feel pre-ordained - two of your sub-agents / identities should be sharing their emotional and mental models in order to reach a fixed point of harmonious agreement.
  • Some beliefs aren't explicitly advocated by any part of you, and are instead propped up by certain memories. You can use Focusing to home in on the memories, and then employ IDC to resolve your ongoing reaction to them.
  • Most importantly, the arguments being made should be emotionally salient and not just detached, "empty" words. In my experience, if I'm totally "in my head", any modification of my System 1 feelings is impossible.

Note: this entire exchange took place internally over the course of 2 minutes, via a 50-50 mix of words and emotions. Unpacking it took significantly longer.

I may write more of these if this is helpful for people.


If I don't get this CHAI internship, I'm going to feel terrible, because that means I don't have much promise as an AI safety researcher.

Realist: Not getting the internship is moderate Bayesian evidence that you're miscalibrated on your potential. Someone promising enough to eventually become a MIRI researcher would be able to snag this, no problem. I feel worried that we're poorly calibrated and setting ourselves up for disappointment when we fall short.

Fire: I agree that not getting the internship would be fairly direct Bayesian evidence that there are others who are more promising right now. I think, however, that you're missing a few key points here:

  • We've made important connections at CHAI / MIRI.
  • Your main point is a total buckets error. There is no ontologically-basic and immutable "promising-individual" property. Granted, there are biological and environmental factors outside our control here, but I think we score high enough on these metrics to be able to succeed through effort, passion, and increased mastery of instrumental rationality.
  • We've been studying AI safety for just a few months (in our free time, no less); most of that study has been dedicated to building up foundational skills (and not reviewing the literature itself). The applicants who are chosen may have a year or more of familiarity with the literature / relevant math on us (or perhaps not), and this should be included in the model.
  • One of the main sticking points raised during my final interview has since been fixed, but I couldn't signal that afterwards without seeming overbearing.

I guess the main thrust here is that although that would be a data point against our being able to have a tectonic impact right now, we simply don't have enough evidence to responsibly generalize. I'm worried that you're overly pessimistic, and it's pulling down our chances of actually being able to do something.

Realist: I definitely hear you that we've made lots of great progress, but is it enough? I'm so nervous about timelines, and the universe isn't magically calibrated to what we can do now.* We either succeed, or we don't - and pay the price. Do we really have time to tolerate almost being extraordinary? How is that going to do the impossible? I'm scared.

Fire: Yup. I'm definitely scared too (in a sense), but also excited. This is a great chance to learn, grow, have fun, and work with people we really admire and appreciate! Let's detach the grim-o-meter, since that strategy seems to strictly dominate being worried and insecure about whether we're doing enough.

Realist: I agree that detaching the grim-o-meter is the right thing to do, but... it makes me feel guilty.* I guess there's a part of me that believes that feeling bad when things could go really wrong is important.

Concern: Hey, that's me! Yeah, I'm really worried that if we detach that grim-o-meter, we'll become callous and flippant and carefree. I don't know if that's a reasonable concern, but the prospect makes me feel really queasy. Shouldn't we be really worried?

Realist: Actually, I don't know. Fire made a good point - the world will probably end up slightly better if we don't care about the grim-o-meter...

Fire: Hell yeah it will! What are we optimizing for here - an arbitrary deontological rule about feeling bad, or the actual world-state? Furthermore, we aren't discarding morality - we're discarding the idea that we should worry when the world is in a probably-precarious position. We'll still fight just as hard.

* Notice how related cruxes can (and should) be resolved in the same session. Resolution cannot happen if any part of you isn't fully on board with whatever agreement you've come to - this feels like a small emptiness in the pit of my stomach, in my experience.