Can you give some examples of people with vibrant will-to-Goodness?
My guess is that the unusually disembodied people you're thinking of probably suppress a kind of contempt and/or anger at other people who don't have so much will-to-Goodness.
I think that properly understanding the psychology of BDSM might provide the key to understanding psychology in general (in ways that are pretty continuous with the insights of early pioneers of psychology, e.g. Freud and particularly Jung).
My current model is:
This is the closest thing I have.
This is also relevant: it's about how the "alignment/capabilities" distinction is better understood as a "science/engineering" distinction.
In my ontology "virtues" are ethical design patterns about how to make decisions.
I'm a virtue ethicist because I think that this kind of ethical design pattern is more important than ethical design patterns about what decisions to make (albeit with some complications that I'll explore in some upcoming posts).
(Having said that, I feel some sense that I'm not going to use "ethical design patterns" very much going forward—it's a little unwieldy as a phrase. I think I will just use "ethics", by contrast with things like "altruism" which IMO are less well-understood as design patterns.)
Note that I've changed my position dramatically over the last few years, and now basically endorse something very close to what I was calling "rationality realism" (though I'd need to spend some time rereading the post to figure out exactly how close my current position is).
In particular, I think that we should be treating sociology, ethics and various related domains much more like we treat physics.
I also endorse this quote from a comment above, except that I wouldn't call it "thinking studies" but maybe something more like "the study of intelligent agency" (and would add game theory as a central example):
there is a rich field of thinking-studies. it’s like philosophy, math, or engineering. it includes eg Chomsky's work on syntax, Turing’s work on computation, Gödel’s work on logic, Wittgenstein’s work on language, Darwin's work on evolution, Hegel’s work on development, Pascal’s work on probability, and very many more past things and very many more still mostly hard-to-imagine future things
FWIW I used to agree with you but now agree with Nate. A big part of the update was developing a model of how "PR risks" work via a kind of herd mentality, where very few people are actually acting on their object-level beliefs, and almost everyone is just tracking what everyone else is tracking.
In such a setting, "internal influence" strategies tend to do very little long-term, and maybe even reinforce the taboo against talking honestly. This is roughly what seems to have happened in DC, where the internal influence approach was swept away by a big Overton window shift after ChatGPT. Conversely, a few principled individuals can have a big influence by speaking honestly (here's a post about the game theory behind this).
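As a toy illustration of this herding dynamic (a minimal sketch of my own, not something from the post linked above; all names and parameters here are invented), consider an information-cascade simulation in which most agents copy the majority of prior public statements, and only a few "principled" agents always report their private signal:

```python
import random

def run_cascade(n_agents=200, n_principled=0, p_correct=0.7, seed=None):
    """Toy information cascade: the true state is 1, and each agent
    receives a private signal that is correct with probability p_correct.
    The first n_principled agents always state their own signal;
    everyone else copies the majority of prior public statements,
    falling back on their own signal only when there's a tie.
    Returns the fraction of agents who publicly stated the truth."""
    rng = random.Random(seed)
    statements = []
    for i in range(n_agents):
        signal = 1 if rng.random() < p_correct else 0
        if i < n_principled:
            statements.append(signal)        # honest: ignore the herd
        else:
            ones = sum(statements)
            zeros = len(statements) - ones
            if ones > zeros:
                statements.append(1)         # conform to the majority
            elif zeros > ones:
                statements.append(0)
            else:
                statements.append(signal)    # tie: use own signal
    return sum(statements) / n_agents

# With no principled agents the herd locks onto whatever the first
# speaker happens to say; a few honest early speakers make the
# cascade land on the truth noticeably more often.
for k in (0, 5):
    avg = sum(run_cascade(n_principled=k, seed=s) for s in range(1000)) / 1000
    print(f"{k} principled agents -> avg fraction stating truth: {avg:.2f}")
```

Averaged over many runs, the herd with no principled agents simply inherits the first speaker's accuracy, while even five honest early speakers make the group substantially more likely to converge on the truth: the "few principled individuals" effect in miniature.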
In my own case, I felt a vague miasma of fear around talking publicly while at OpenAI (and to a lesser extent at DeepMind), even though in hindsight there was often nothing concrete that I endorsed being afraid of. For example, there was a period where I was roughly indifferent about leaving OpenAI, but still scared of doing things that might make people mad enough to fire me.
I expect that there's a significant inferential gap between us, so this is a hard point to convey. But one way that I might have been able to bootstrap my current perspective from inside my "internal influence" frame would have been to try to identify possible actions X such that, if I got fired for doing X, this would be a clear example of the company leaders behaving unjustly. Then even the possible "punishment" for doing X is actually a win.
"consistent with my position above I'd bet that in the longer term we'd do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead."
Would you have pressed this button at every other point throughout history too? If not, when's the earliest you would have pressed it?
Good question. One answer is that my reset mechanisms involve cultivating empathy, and replacing fear with positive motivation. If I notice myself being too unempathetic or too fear-driven, that's worrying.
But another answer is just that, unfortunately, the reality distortion fields are everywhere—and in many ways more prevalent in "mainstream" positions (as discussed in my post). Being more mainstream does get you "safety in numbers"—i.e. it's harder for you to catalyze big things, for better or worse. But the cost is that you end up in groupthink.
I like this comment.
For the sake of transparency, while in this post I'm mostly trying to identify a diagnosis, in the longer term I expect to try to do political advocacy as well. And it's reasonable to expect that people like me who are willing to break the taboo for the purposes of diagnosis will be more sympathetic to ethnonationalism in their advocacy than people who aren't. For example, I've previously argued on twitter that South Africa should have split into two roughly-ethnonationalist states in the 90s, instead of doing what they actually did.
However, I expect that the best ways of fixing western countries won't involve very much ethnonationalism by historical standards, because it's a very blunt tool. Also, I suspect that breaking the taboo now will actually lead to less ethnonationalism in the long term. For example, even a little bit more ethnonationalism would plausibly have made European immigration policies much less insane over the last few decades, which would then have prevented a lot of the political polarization we're seeing today.
Thanks for engaging! There's a lot here I agree with—in particular, the concept of pessimization does seem like a dangerous one which could be used to demoralize people. I also think psychoanalyzing me is fair game here, and that it would be a big strike against the concept if I were using it badly.
I'm trying to figure out if there's some underlying crux here, and the part that gets closest to it is maybe:
I don't know how you feel about the concept of Moloch, but I think you could probably have written a pretty similar essay about that concept. In each individual case you could characterize a coordination failure as just an "ordinary failure", rather than a manifestation of the larger pattern that constitutes Moloch. And indeed your paragraph above is strikingly similar to my own critique of the concept of Moloch, which basically argues that Scott is comparing existing coordination failures to an imaginary world which has perfect coordination. I've also made similar critiques of Eliezer's concept of "civilizational inadequacy" as measuring down from perfection.
I think that the synthesis here is that neither pessimization nor Moloch nor "civilizational inadequacy" should be treated as sufficiently load-bearing that they should tell you what to do directly. In some sense all of these create awayness motivations: don't pessimize, don't be inadequate, don't let Moloch win. But as Malcolm Ocean points out, awayness motivations are very bad for steering. If your guiding principle is not to be inadequate, then you will probably not dream very big. If your guiding principle is not to pessimize, then people will probably just throw accusations of pessimization at each other until everything collapses into a big mess.
That's why I ended the post by talking about virtue ethics, and how it can be construed as a technology for avoiding pessimization. I want to end up in a place where people almost never say to each other "stop pessimizing", they instead say "be virtuous". But in order to argue for virtues as the solution to pessimization/the way to build the "imaginary version" of groups which don't make such unforced errors, I need to first point at one of the big problems they're trying to solve. It's also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.
I'm not 100% sure that this is the right synthesis, and will need to muse on it more, but I appreciate your push to get this clearer in my head (and on LessWrong).
Lastly, at risk of turning this political, the one thing I'll say about the "support Hamas" stuff is that there's a spectrum of what counts as "support": from literally signing up to fight for them, to cheering them on, to dogwhistling in support of them, to just pushing for some of the same goals that they do, to failing to condemn them. My contention is that there are important ways in which Hamas' lack of alignment with western values leads to more western support for them (e.g. the wave of pro-Palestine rallies immediately after they killed many civilians), which is what makes this an example of pessimization. Of course this is a dangerous kind of accusation, because there's a lot of wiggle room in exactly what we mean by "lack of alignment", and in the distinction between supporting Hamas itself vs supporting associated causes. I personally still think the effect is stark enough that my core point was correct, but I should have phrased it more carefully. (Note: I edited this paragraph a few minutes after writing it, because the original version wasn't very thoughtful.)