Richard_Ngo

Formerly alignment and governance researcher at DeepMind and OpenAI. Now independent.

Sequences

Twitter threads
Understanding systematization
Stories
Meta-rationality
Replacing fear
Shaping safer goals
AGI safety from first principles
Richard Ngo's Shortform (6 karma, 5y, 457 comments)

Comments (sorted by newest)
"Pessimization" is Just Ordinary Failure
Richard_Ngo (15h)

Thanks for engaging! There's a lot here I agree with—in particular, the concept of pessimization does seem like a dangerous one which could be used to demoralize people. I also think psychoanalyzing me is fair game here, and that it would be a big strike against the concept if I were using it badly.

I'm trying to figure out if there's some underlying crux here, and the part that gets closest to it is maybe:

"I think Richard makes an important error when he complains about existing activist-ish groups: he compares these groups to an imaginary version of the activist group which doesn't make any mistakes. Richard seems to see all mistakes made by activist groups as unforced and indicative of deep problems or malice."

I don't know how you feel about the concept of Moloch, but I think you could probably have written a pretty similar essay about that concept. In each individual case you could characterize a coordination failure as just an "ordinary failure", rather than a manifestation of the larger pattern that constitutes Moloch. And indeed your paragraph above is strikingly similar to my own critique of the concept of Moloch, which basically argues that Scott is comparing existing coordination failures to an imaginary world which has perfect coordination. I've also made similar critiques of Eliezer's concept of "civilizational inadequacy" as measuring down from perfection.

I think that the synthesis here is that neither pessimization nor Moloch nor "civilizational inadequacy" should be treated as sufficiently load-bearing that they should tell you what to do directly. In some sense all of these create awayness motivations: don't pessimize, don't be inadequate, don't let Moloch win. But as Malcolm Ocean points out, awayness motivations are very bad for steering. If your guiding principle is not to be inadequate, then you will probably not dream very big. If your guiding principle is not to pessimize, then people will probably just throw accusations of pessimization at each other until everything collapses into a big mess.

That's why I ended the post by talking about virtue ethics, and how it can be construed as a technology for avoiding pessimization. I want to end up in a place where people almost never say to each other "stop pessimizing", they instead say "be virtuous". But in order to argue for virtues as the solution to pessimization/the way to build the "imaginary version" of groups which don't make such unforced errors, I need to first point at one of the big problems they're trying to solve. It's also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.

I'm not 100% sure that this is the right synthesis, and will need to muse on it more, but I appreciate your push to get this clearer in my head (and on LessWrong).

Lastly, at risk of turning this political, the one thing I'll say about the "support Hamas" stuff is that there's a spectrum of what counts as "support", from literally signing up to fight for them to cheering them on to dogwhistling in support of them to just pushing for some of the same goals that they do to failing to condemn them. My contention is that there are important ways in which Hamas' lack of alignment with western values leads to more western support for them—e.g. the wave of pro-Palestine rallies immediately after they killed many civilians—which is what makes this an example of pessimization. Of course this is a dangerous kind of accusation because there's a lot of wiggle room in exactly what we mean by "lack of alignment", and distinctions between supporting Hamas itself vs supporting associated causes. I personally still think the effect is stark enough that my core point was correct, but I should have phrased it more carefully. (Note: I edited this paragraph a few mins after writing it, because the original version wasn't very thoughtful.)

Generalized Coming Out Of The Closet
Richard_Ngo (1d)

Can you give some examples of people with vibrant will-to-Goodness?

My guess is that the unusually disembodied people you're thinking of probably suppress a kind of contempt and/or anger at other people who don't have so much will-to-Goodness.

Generalized Coming Out Of The Closet
Richard_Ngo (2d)

I think that properly understanding the psychology of BDSM might provide the key to understanding psychology in general (in ways that are pretty continuous with the insights of early pioneers of psychology, e.g. Freud and particularly Jung).

My current model is:

  • The process of learning to be "good" typically involves renouncing and suppressing your "antisocial" desires, some of which are biologically ingrained (e.g. many aspects of male aggression) and some of which are learned idiosyncratically (e.g. having a traumatic childhood which teaches you that the world is zero-sum and you can only gain by hurting others). It also involves renouncing and suppressing parts of yourself which are "pathetic" or "weak" (e.g. the desire to not have to make any choices, the belief that you are bad and unworthy of existing).
  • These desires/beliefs aren't removed from your psyche (since internal subagents have strong survival instincts, making it difficult to fully destroy them) but rather coagulate into a "shadow": a coalition of drives and desires which mostly remains hidden from your conscious thinking, but still influences your behavior in various ways. The influence of your shadow on your behavior is typically hard for you to detect yourself, but often easy for (emotionally intelligent) others to detect in you.
  • People who have a very strong "will-to-Goodness" don't necessarily have very strong/extreme shadows, but often do, because they created the very strong will-to-Goodness by strongly suppressing their antisocial desires, which then strongly polarized those desires.
  • Many types of BDSM are a fairly straightforward manifestation of the desires in your shadow. Participating in BDSM can be good for one's psyche in the sense that it represents a partial reconciliation with one's shadow, reducing internal conflict. I.e. rather than having a shadow that's fully repressed, you can have a "bargain" between your ego and your shadow that's something like "the ego is (mostly) in charge almost all the time, while the shadow is (mostly) in charge during kinky sex". It feels really somatically nice for parts of your psyche which are almost always repressed and shamed to be allowed to act for once.
  • However, BDSM can also be bad for one's psyche in the sense that positive reinforcement during BDSM causes your shadow to grow, thereby increasing internal conflict longer-term. Also, doing BDSM with others can cause their shadows to grow too. "Healthy" BDSM probably looks more like an outlet which gradually helps you to accept and integrate your shadow and then move on, rather than a lifestyle or a part of your long-term identity. My guess is that BDSM communities end up instantiating "crab in a bucket" dynamics similar to those in incel communities—i.e. holding people back from developing healthier psychologies.
  • Young children are rightly horrified by BDSM when they stumble upon it, because it's an indication that there's something twisted/perverse going on in the world. However, I suspect that almost all adults who feel horrified by BDSM are in part reacting to their own shadow. My guess is that the few people who have actually integrated their shadows in a healthy way are neither very interested in nor very horrified by BDSM, but rather mostly sad about it (like they're sad about suffering more generally). When I say that they've "integrated" their shadows, I mean that their BDSM-like desires are cooperating strongly enough with their other desires that they're a little bit present most of the time, rather than driving them to create simulacra of highly transgressive behavior. This might sound scary, but I expect that fully experiencing the ways in which we all have power over each other in normal life provides enough fodder to satisfy the BDSM-like desires in almost all of us. (For example, if you really allowed yourself to internalize how much power being a westerner gives you over people in developing countries, or the power dynamics in friendships where one person is more successful than the other, I expect that thought process to feel kinda like BDSM.)
  • Trying to evoke and deal with your shadow is a difficult and fraught process, since (by definition) it involves grappling with the parts of yourself that you're most ashamed about and most scared of giving control to. I recommend doing so gradually and carefully. My most direct engagement with shadow work was regrettably intense (analogous to a bad psychedelic trip) and came very close to having very bad effects on my life (though I've now wrestled those effects into a positive direction, and find shadow work very valuable on an ongoing basis).
  • As you can probably infer, most of the points above are informed by my own past and ongoing experiences.
Agent foundations: not really math, not really science
Richard_Ngo (3d)

This is the closest thing I have.

This is also relevant, about how the "alignment/capabilities" distinction is better understood as a "science/engineering" distinction.

Ethical Design Patterns
Richard_Ngo (17d)

In my ontology "virtues" are ethical design patterns about how to make decisions.

I'm a virtue ethicist because I think that this kind of ethical design pattern is more important than ethical design patterns about what decisions to make (albeit with some complications that I'll explore in some upcoming posts).

(Having said that, I feel some sense that I'm not going to use "ethical design patterns" very much going forward—it's a little unwieldy as a phrase. I think I will just use "ethics", by contrast with things like "altruism" which IMO are less well-understood as design patterns.)

Agent foundations: not really math, not really science
Richard_Ngo (21d)

Note that I've changed my position dramatically over the last few years, and now basically endorse something very close to what I was calling "rationality realism" (though I'd need to spend some time rereading the post to figure out exactly how close my current position is).

In particular, I think that we should be treating sociology, ethics and various related domains much more like we treat physics.

I also endorse this quote from a comment above, except that I wouldn't call it "thinking studies" but maybe something more like "the study of intelligent agency" (and would add game theory as a central example):

"there is a rich field of thinking-studies. it's like philosophy, math, or engineering. it includes eg Chomsky's work on syntax, Turing's work on computation, Gödel's work on logic, Wittgenstein's work on language, Darwin's work on evolution, Hegel's work on development, Pascal's work on probability, and very many more past things and very many more still mostly hard-to-imagine future things"

Safety researchers should take a public stance
Richard_Ngo (1mo)

FWIW I used to agree with you but now agree with Nate. A big part of the update was developing a model of how "PR risks" work via a kind of herd mentality, where very few people are actually acting on their object-level beliefs, and almost everyone is just tracking what everyone else is tracking.

In such a setting, "internal influence" strategies tend to do very little long-term, and maybe even reinforce the taboo against talking honestly. This is roughly what seems to have happened in DC, where the internal influence approach was swept away by a big Overton window shift after ChatGPT. Conversely, a few principled individuals can have a big influence by speaking honestly (here's a post about the game theory behind this).

In my own case, I felt a vague miasma of fear around talking publicly while at OpenAI (and to a lesser extent at DeepMind), even though in hindsight there were often no concrete things that I endorsed being afraid of—for example, there was a period where I was roughly indifferent about leaving OpenAI, but still scared of doing things that might make people mad enough to fire me.

I expect that there's a significant inferential gap between us, so this is a hard point to convey, but one way that I might have been able to bootstrap my current perspective from inside my "internal influence" frame is to try to identify possible actions X such that, if I got fired for doing X, this would be a clear example of the company leaders behaving unjustly. Then even the possible "punishment" for doing X is actually a win.

Obligated to Respond
Richard_Ngo (1mo)

"consistent with my position above I'd bet that in the longer term we'd do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead."

Would you have pressed this button at every other point throughout history too? If not, when's the earliest you would have pressed it?

Richard Ngo's Shortform
Richard_Ngo (1mo)

Good question. One answer is that my reset mechanisms involve cultivating empathy, and replacing fear with positive motivation. If I notice myself being too unempathetic or too fear-driven, that's worrying.

But another answer is just that, unfortunately, the reality distortion fields are everywhere—and in many ways more prevalent in "mainstream" positions (as discussed in my post). Being more mainstream does get you "safety in numbers"—i.e. it's harder for you to catalyze big things, for better or worse. But the cost is that you end up in groupthink.

Richard Ngo's Shortform
Richard_Ngo (1mo)

I like this comment.

For the sake of transparency, while in this post I'm mostly trying to identify a diagnosis, in the longer term I expect to try to do political advocacy as well. And it's reasonable to expect that people like me who are willing to break the taboo for the purposes of diagnosis will be more sympathetic to ethnonationalism in their advocacy than people who aren't. For example, I've previously argued on twitter that South Africa should have split into two roughly-ethnonationalist states in the 90s, instead of doing what they actually did.

However, I expect that the best ways of fixing western countries won't involve very much ethnonationalism by historical standards, because it's a very blunt tool. Also, I suspect that breaking the taboo now will actually lead to less ethnonationalism in the long term. For example, even a little bit more ethnonationalism would plausibly have made European immigration policies much less insane over the last few decades, which would then have prevented a lot of the political polarization we're seeing today.

Posts

Underdog bias rules everything around me (163 karma, 2mo, 53 comments)
On Pessimization (61 karma, 2mo, 3 comments)
Applying right-wing frames to AGI (geo)politics (64 karma, 3mo, 25 comments)
Well-foundedness as an organizing principle of healthy minds and societies (35 karma, 6mo, 7 comments)
Third-wave AI safety needs sociopolitical thinking (99 karma, 7mo, 23 comments)
Towards a scale-free theory of intelligent agency (96 karma, 7mo, 45 comments)
Elite Coordination via the Consensus of Power (92 karma, 7mo, 15 comments)
Trojan Sky (253 karma, 7mo, 39 comments)
Power Lies Trembling: a three-book review (214 karma, 6mo, 29 comments)
The Gentle Romance (245 karma, 9mo, 46 comments)