Inasmuch as you are actually trying to have a conversation with Neel or address Neel's argument on its merits, it would be good to be clear that this is the crux.
The first two paragraphs of my original comment were trying to do this. The rest wasn't. I flagged this in the sentence "The rest of my comment isn't directly about this post, but close enough that this seems like a reasonable place to put it." However, I should have been clearer about the distinction. I've now added the following:
EDIT: to be more clear: the rest of this comment is not primarily about Neel or "pragmatic interpretability", it's about parts of the field that I consider to be significantly less relevant to "solving alignment" than that (though work that's nominally on pragmatic interpretability could also fall into the same failure modes). I clarify my position further in this comment; thanks Rohin for the pushback.
Reflecting further, I think there are two parts of our earlier exchange that are a bit suspicious. The first is when I say that everyone seems to have "given up" (rather than something more nuanced like "given up on tackling the most fundamental aspects of the problem"). The second is where you summarize my position as being that we need deep scientific understanding or else everyone dies (which I think you can predict is a pretty unlikely position for me in particular to hold).
So what's going on here? It feels like we're both being "anchored" by extreme positions. You were rounding me off to doomerism, and I was rounding the marginalists off to "giving up". I'd guess both are artifacts of writing quickly and a bit frustratedly. Probably I should write a full post or shortform that characterizes more precisely what "giving up" is trying to point to.
(Incidentally, I feel like you still aren't quite pinning down your position -- depending on what you mean by "reliably" I would probably agree with "marginalist approaches don't reliably improve things". I'd also agree with "X doesn't reliably improve things" for almost any interesting value of X.)
My instinctive reaction is that this depends a lot on whether by "marginalist approaches" we mean something closer to "a single marginalist approach" or "the set of all people pursuing marginalist approaches". I think we both agree that no single marginalist approach (e.g. investigating a given technique) makes reliable progress. However, I'd guess that I'm more willing than you to point to a broad swathe of people pursuing marginalist approaches and claim that they won't reliably improve things.
I expect it's not worth our time to dig too deep into whose position is more common here. But I think that a lot of people on LW have high P(doom) in significant part because they share my intuition that marginalist approaches don't reliably work. I do agree that my combination of "marginalist approaches don't reliably improve things" and "P(doom) is <50%" is a rare one, but I was only making the former point above (and people upvoted it accordingly), so it feels a bit misleading to focus on the rareness of the overall position.
(Interestingly, while the combination I describe above is a rare one, the reverse combination is also rare: Daniel Kokotajlo is the only person who comes to mind who disagrees with me on both of these propositions simultaneously. Note that he doesn't characterize his current work as marginalist, but even aside from that question I think this characterization of him is accurate; e.g. he has talked to me about how changing the CEO of a given AI lab could swing his P(doom) by double-digit percentage points.)
I agree with this statement denotatively, and my own interests/work have generally been "driven by open-ended curiosity and a drive to uncover deep truths", but isn't this kind of motivation also what got humanity into its current mess? In other words, wasn't the main driver of AI progress this kind of curiosity (until perhaps the recent few years when it has been driven more by commercial/monetary/power incentives)?
Interestingly, I was just having a conversation with Critch about this. My contention was that, in the first few decades of the field, AI researchers were actually trying to understand cognition. The rise of deep learning (and especially the kind of deep learning driven by massive scaling) can be seen as the field putting that quest on hold in order to optimize for more legible metrics.
I don't think you should find this a fully satisfactory answer, because it's easy to "retrodict" ways that my theory was correct. But that's true of all explanations of what makes the world good at a very abstract level, including your own answer of metaphilosophical competence. (Also, we can perhaps cash my claim out in predictions, like: was the criticism that deep learning didn't actually provide good explanations of or insight into cognition a significant barrier to more researchers working on it? Without having looked it up, I suspect so.)
consistently good strategy requires a high amount of consequentialist reasoning
I don't think that's true. However, I do think it requires deep curiosity about what good strategy is and how it works. It's not a coincidence that my own research on a theory of coalitional agency was in significant part inspired by strategic failures of EA and AI safety (with this post being one of the earliest building blocks I laid down). I also suspect that the full theory of coalitional agency will in fact explain how to do metaphilosophy correctly, because doing good metaphilosophy is ultimately a cognitive process and can therefore be characterized by a sufficiently good theory of cognition.
Again, I don't expect you to fully believe me. But what I most want to read from you right now is an in-depth account of which things in the world have gone or are going most right, and the ways in which you think metaphilosophical competence or consequentialist reasoning contributed to them. Without that, it's hard to trust metaphilosophy or even know what it is (though I think you've given a sketch of this in a previous reply to me at some point).
I should also try to write up the same thing, but about how virtues contributed to good things. And maybe also science, insofar as I'm trying to defend doing more science (of cognition and intelligence) in order to help fix risks caused by previous scientific progress.
In trying to reply to this comment I identified four "waves" of AI safety, and made lists of the central people in each wave. Since this is socially complicated I'll only share the full list of the first wave here, and please note that this is all based on fuzzy intuitions gained via gossip and other unreliable sources.
The first wave I’ll call the “founders”; I think of them as the people who set up the early institutions and memeplexes of AI safety before around 2015. My list:
The second wave I’ll call the “old guard”; those were the people who joined or supported the founders before around 2015. A few central examples include Paul Christiano, Chris Olah, Andrew Critch and Oliver Habryka.
Around 2014/2015 AI safety became significantly more professionalized and growth-oriented. Bostrom published Superintelligence, the Puerto Rico conference happened, OpenAI was founded, DeepMind started a safety team (though I don't recall exactly when), and EA started seriously pushing people towards AI safety. I’ll call the people who entered the field from then until around 2020 "safety scalers" (though I'm open to better names). A few central examples include Miles Brundage, Beth Barnes, John Wentworth, Rohin Shah, Dan Hendrycks and myself.
And then there’s the “newcomers” who joined in the last 5-ish years. I have a worse mental map of these people, but some who I respect are Leo Gao, Sahil, Marius Hobbhahn and Jesse Hoogland.
In this comment I expressed concern that my generation (by which I mean the "safety scalers") has kinda given up on solving alignment. But another higher-level concern is: are people from these last two waves the kinds of people who would have been capable of founding AI safety in the first place? And if not, where are those people now? Of course there's some difference in the skills required for founding a field vs pushing the field forward, but to a surprising extent I keep finding that the people who I have the most insightful conversations with are the ones who were around from the very beginning. E.g. I think Vassar is the single person doing the best thinking about the lessons we can learn about failures of AI safety over the last decade (though he's hard to interface with), Yudkowsky is still the single person who's most able to push the Overton window towards taking alignment seriously (even though in principle many other people could have written (less doomy versions of) his Time op-ed or his recent book), Scott is still the single best blogger in the space, and so on.
Relatedly, when I talk to someone who's exceptionally thoughtful about politics (and particularly the psychological aspects of politics), a disturbingly large proportion of the time it turns out that they worked at (or were somehow associated with) Leverage. This is really weird to me. Maybe I just have Leverage-aligned tastes/networks, but even so, it's a very striking effect. (Also, how come there's no young Moldbug?)
Assuming that I'm gesturing at something real, what are some possible explanations?
This is all only a rough gesture at the phenomenon, and you should be wary that I'm just being pessimistic rather than identifying something important. Also it's a hard topic to talk about clearly because it's loaded with a bunch of social baggage. But I do feel pretty confused and want to figure this stuff out.
Yepp, makes sense, and it's a good reminder for me to be careful about how I use these terms.
One clarification I'd make to your original comment, though, is that I don't endorse "you have to deeply understand intelligence from first principles else everyone dies". My position is closer to "you have to be trying to do something principled in order for your contribution to be robustly positive". Relatedly, agent foundations and mech-interp are approximately the only two parts of AI safety that seem robustly good to me; with a bunch of other stuff, like RLHF, or evals, or (almost all) governance work, I feel pretty confused about whether they're good or bad or basically just wash out, even in expectation.
This is still consistent with risk potentially being reduced by what I call engineering-type work, it's just that IMO that involves us "getting lucky" in an important way which I prefer we not rely on. (And trying to get lucky isn't a neutral action—engineering-type work can also easily have harmful effects.)
I agree that there are some ways in which my comment did not meet the standard that I was holding your post to. I think this is defensible because I hold things to higher standards when they're more prominent (e.g. posts versus shortforms or comments), and also because I hold things to higher standards when they're making stronger headline claims. In my case, my headline claim was "I feel confused". If I had instead made the headline claim "Mikhail is untrustworthy", then I think it would have been very reasonable for you to be angry at this.
I think that my criticism contains some moves that I wish your criticism had more of. In particular, I set a standard for what I wanted from your criticism:
I think of good critiques as trying to identify standards of behavior that should be met, and comparing people or organizations to those standards, rather than just throwing accusations at them.
and provide a central example of you not meeting this standard:
"Anthropic is untrustworthy" is an extremely low-resolution claim
I also primarily focused on drawing conclusions about the post itself (e.g. "My overall sense is that people should think of the post roughly the way they think of a compilation of links") and relegated the psychologizing to the end. I accept that you would have preferred that I skip it entirely, but it's a part of "figuring out what's up with Mikhail", which is an epistemic move that I endorse people doing after they've laid out a disagreement (but not as a primary approach to that disagreement).
Some examples of statements where it's pretty hard for me to know how much they straightforwardly follow from the evidence you have, vs being things that you've inferred because they seem plausible to you:
If we zoom in on #3, for instance: there's a sense in which it's superficially plausible because both OpenAI and Anthropic have products. But maybe Anthropic and OpenAI differ greatly on, say, the ratio of headcount, or the ratio of executives' time, or the amount of compute, or the internal prestige allocated to commercialization vs other things (like alignment research). If so, then it's not really accurate to say that they're just as focused on commercialization. But I don't know if knowledge of these kinds of considerations informed your claim, or if you're only making the superficially plausible version of the claim.
To be clear, in general I don't expect people to apply this level of care for most LW posts. But when it comes to accusations of untrustworthiness (and similar kinds of accountability mechanisms) I think it's really valuable to be able to create common knowledge of the specific details of misbehavior. Hence I would have much preferred this post to focus on a smaller set of claims that you can solidly substantiate, and then only secondarily try to discuss what inferences we should draw from those. Whereas I think that the kinds of criticism you make here mostly create a miasma of distrust between Anthropic and LessWrong, without adding much common knowledge of the form "Anthropic violated clear and desirable standard X" for the set of good-faith AI safety actors.
I also realize that by holding this standard I'm making criticism more costly, because now you have the stress of trying to justify yourself to me. I would have tried harder to mitigate that cost if I hadn't noticed this pattern of not-very-careful criticism from you. I do sympathize with your frustration that people seem to be naively trusting Anthropic and ignoring various examples of shady behavior. However I also think people outside labs really underestimate how many balls lab leaders have up in the air at once, and how easy it is to screw up a few of them even if you're broadly trustworthy. I don't know how to balance these considerations, especially because the community as a whole has historically erred on the side of the former mistake. I'd appreciate people helping me think through this, e.g. by working through models of how applying pressure to bureaucratic organizations goes successfully, in light of the ways that such organizations become untrustworthy (building on Zvi's moral mazes sequence for instance).
I regret using the word "marginalist"; it's a bit too confusing. But I do have a pretty high bar for what counts as "ambitious" in the political domain: it involves not just getting the system to do something, but rather trying to change the system itself. Cummings and Thiel are central examples (Geoff Anders was maybe also aiming in that direction at one point).
I think my using the word "marginalist" was probably a mistake, because it conflates two distinct things that I'm skeptical about:
The list I gave above was of things that fall into category 1, whereas (almost?) all of the things you named fall into category 2. What I want more of is category 3: science-type approaches. One indicator that something is a science-type approach is that it could potentially help us understand something fundamental about intelligence; another is that, if it works, we'll know in advance (I used to not care about this, but have changed my mind).
I think there are versions of most of the things you named that could be in category 3, but people mostly seem to be doing category-2 versions of them, in significant part because of the sort of EA-style reasoning that I was criticizing in Neel's original post.
When I wrote "pragmatic interpretability feels like another step in that direction" I meant something like: ambitious interpretability was trying to do 3, and pragmatic interpretability seems like it's nominally trying to do 2, and may in practice end up being mostly 1. For example, "Stop models acting differently when tested" could be a part of an engineering-type pipeline for fixing misalignments in models, but could also end up drifting towards "help us get better evidence to convince politicians and lab leaders of things". However, I'm not claiming that pragmatic interpretability is a central example of "not even aspiring to be the type of thing that could solve alignment". Apologies for the bad phrasings.
If this is true, then it should significantly update us away from the strategy "solve our current problems by becoming more philosophically competent and doing good consequentialist reasoning", right? If you are very bad at X, then all else equal you should try to solve problems using strategies that don't require you to do much X.
You might respond that there are no viable strategies for solving our current problems without applying a lot of philosophical competence and consequentialist reasoning. I think scientific competence and virtue ethics are plausibly viable alternative strategies (though the line between scientific and philosophical competence seems blurry to me, as I discuss below). But even given that we disagree on that, humanity solved many big problems in the past without using much philosophical competence and consequentialist reasoning, so it seems hard to be confident that we won't solve our current problems in other ways.
Out of your examples, the influence of economics seems most solid to me. I feel confused about whether game theory itself made nuclear war more or less likely: e.g. von Neumann was very aggressive, perhaps related to his game theory work, and maybe MAD provided an excuse to stockpile weapons? Also the Soviets didn't really have game theory, IIRC.
On the analytical philosophy front, the clearest wins seem to be cases where they transitioned from doing philosophy to doing science or math—e.g. the formalization of probability (and economics to some extent too). If this is the kind of thing you're pointing at, then I'm very much on board—that's what I think we should be doing for ethics and intelligence. Is it?
Re the AI safety stuff: it all feels a bit too early to say what its effects on the world have been (though on net I'm probably happy it has happened).
Because I have various objections to this list (some of which are detailed above), and with such a succinct list it's hard to know which aspects of them you're defending, which arguments for their positive effects you find most compelling, etc.