LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Curated. This concept seems like an important building block for designing incentive structures / societies, and this seems like a good comprehensive reference post for the concept.
Note: it looks like you probably want this to be a markdown file. You can go to https://www.lesswrong.com/account, find the "site customizations" section, and click "activate Markdown" to enable the markdown editor.
Fyi I think it’s time to do minor formatting adjustments to make papers/abstracts easier to read on LW
I think there might be a bit of a (presumably unintentional) motte and bailey here where the motte is "careful conceptual thinking might be required rather than pure naive empiricism (because we won't be given good enough test beds by default) and it seems like Anthropic (leadership) might fail heavily at this" and the bailey is "extreme philosophical competence (e.g. 10-30 years of tricky work) is pretty likely to be needed".
Yeah, I agree that was happening somewhat. The dots I'm connecting here are: "in worlds where it turns out we need a long Philosophical Pause, I think you and Buck would probably be above some threshold where you notice and navigate it reasonably."
I think my actual belief is "the Motte is high likelihood true, the Bailey is... medium-ish likelihood true, but, like, it's a distribution, there's not a clear dividing line between them"
I also think the pause can look like: "well, we're running untrusted AGIs and ~trusted pseudogeneral LLM-agents that help with the philosophical progress, but we can't run them that long or fast. They speed things up and turn what'd normally be a 10-30 year pause into a 3-10 year pause. But the world would still be going crazy left to its own devices, the sort of global institutional changes necessary are still about as far outside the Overton window as a 20-year global moratorium, and the 'race with China' rhetoric is still bad."
Thanks for laying this out thus far. I'mma reply, but understand if you wanna leave the convo here. I would be interested in more effortposts/dialogues about your thoughts here.
Yes, my reasoning is definitely part, but not all, of the argument. Like, the thing I said is a sufficient crux for me. (If I thought we had to directly use human labor to align AIs which were qualitatively wildly superhuman in general, I would put much more weight on "extreme philosophical competence".)
This makes sense as a crux for the claim "we need philosophical competence to align unboundedly intelligent superintelligences." But it doesn't make sense for the claim "we need philosophical competence to align general, open-ended intelligence." I suppose my OP didn't really distinguish these claims, and there were a few interpretations of how the arguments fit together. I was more saying the second (although, to be fair, I'm not sure I was actually distinguishing them well in my head until now).
It doesn't make sense for "we 'just' need to be able to hand off to an AI which is seriously aligned" to be a crux for the second. A thing can't be a crux for itself.
I notice my "other-guy-feels-like-they're-missing-the-point" -> "check if I'm not listening well, or if something is structurally wrong with the convo" alarm is firing, so maybe I do want to ask for one last clarification: "Did you feel like you understood this the first time? Does it feel like I'm missing the point of what you said? Do you think you understand why it feels to me like you were missing the point (even if you think it's because I'm being dense about something)?"
Meanwhile, here are some takes based on my current understanding of your proposal.
These bits:
We need to ensure that our countermeasures aren't just shifting from a type of misalignment we can detect to a type we can't. Qualitatively analyzing the countermeasures and our tests should help here.
...is a bit I think is philosophical-competence bottlenecked. And this bit:
"Actually, we didn't have any methods available to try which could end up with a model that (always) isn't egregiously misaligned. So, even if you can iterate a bunch, you'll just either find that nothing works or you'll just fool yourself."
...is a mix of "philosophically bottlenecked" and "rationality bottlenecked." (i.e. you both have to be capable of reasoning about whether you've found things that really worked, and, because there are a lot of degrees of freedom, capable of noticing if you're deploying that reasoning accurately)
I might buy that you and Buck are competent enough here to think clearly about it (not sure; I think you benefit from having a number of people around who seem likely to help), but I would bet against Anthropic decisionmakers being philosophically competent enough.
(I think at least some people on the alignment science or interpretability teams might be. I'd bet against the median such team member being able to navigate it. And ultimately, what matters is "does Anthropic leadership go forward with the next training run?", so it matters whether Anthropic leadership buys arguments from hypothetically-competent-enough alignment/interpretability people. And Anthropic leadership already seems to basically be ignoring arguments of this type, and I don't actually expect to get the sort of empirical clarity that (it seems like) they'd need to update before it's too late.)
Second, we can study how generalization on this sort of thing works in general
I think this counts as the sort of empiricism I'm somewhat optimistic about in my post. I.e., if you are able to find experiments that actually give you evidence about deeper laws, that let you then make predictions about new Actually Uncertain questions of generalization that you then run more experiments on... that's the sort of thing I feel optimistic about. (Depending on the details, of course.)
But, you still need technical philosophical competence to know if you're asking the right questions about generalization, and to know when the results actually imply that the next scale-up is safe.
FYI I found this intro fairly hard to read – partly due to generally large blocks of text (see: Abstracts should be either Actually Short™, or broken into paragraphs) and also because it just... doesn't actually really say what the main point is, AFAICT. (It describes a bunch of stuff you do, but I had trouble finding the actual main takeaway, or primary sorts of new information I might get by reading it)
I don't really see why this is a crux. I'm currently at like ~5% on this claim (given my understanding of what you mean), but moving to 15% or even 50% (while keeping the rest of the distribution the same) wouldn't really change my strategic orientation. Maybe you're focused on getting to a world with a more acceptable level of risk (e.g., <5%), but I think going from 40% risk to 20% risk is better to focus on.
I think you kinda convinced me here that this reasoning isn't (as stated) very persuasive.
I think my reasoning had some additional steps like:
I'm pretty skeptical of the "extreme philosophical competence" perspective. This is basically because we "just" need to be able to hand off to an AI which is seriously aligned (e.g., it faithfully pursues our interests on long, open-ended, and conceptually loaded tasks that are impossible for us to check).
The "extreme philosophical competence" hypothesis is that you need such competence to achieve "seriously aligned" in this sense. It sounds like you disagree, but I don't know why since your reasoning just sidesteps the problem.
Looking over the comments on the first joshc post, it seems like that post also basically asserted by fiat that it wasn't necessary. And the people who actively believe in "alignment is philosophically loaded" showed up to complain that this ignored the heart of the problem.
My current summary of the arguments (which I put ~60% on, and which I think Eliezer/Oli/Wentworth treat much more confidently and maybe believe a stronger version of) is something like:
I think my current model of you (Ryan) is like:
"Training models to do specific things, cleverly, actually just makes it pretty hard for them to develop scheming or other motivated misalignments – they have to jump all the way from "don't think about scheming ever" to "secretly think about scheming" to avoid getting caught, and that probably just won't work?"
(or, in the example of the second joshC post, they need to learn to be really reliably good at truth-tracking patterns and articulating their reasoning; after internalizing that for thousands of reps, an AI is just gonna have a hard time jumping to reasoning that isn't truth-tracking).
I don't have a clear model of how you respond to point #4 – that we'll just reliably find them to be scheming if we succeed at the interpretability steps, and not have a good way of dealing with it. (Maybe you just don't think this is as overwhelmingly likely?)
Interested in whatever Real You's cruxes are, 1-2 steps removed.
Thanks. I'll probably reply to different parts in different threads.
For the first bit:
My guess is that the parts of the core leadership of Anthropic which are thinking actively about misalignment risks (in particular, Dario and Jared) think that misalignment risk is like ~5x smaller than I think it is while also thinking that risks from totalitarian regimes are like 2x worse than I think they are. I think the typical views of opinionated employees on the alignment science team are closer to my views than to the views of leadership. I think this explains a lot about how Anthropic operates.
The rough numbers you give are helpful. I'm not 100% sure I see the dots you're intending to connect with "leadership thinks 1/5-ryan-misalignment and 2x-ryan-totalitarianism" / "rest of alignment science team closer to ryan" -> "this explains a lot."
Is this just the obvious "whelp, leadership isn't bought into this risk model and calls most of the shots, but is in conversation with several employees who engage more with misalignment"? Or was there a more specific dynamic you thought it explained?
It’s unclear to me what the current evidence is for this happening ‘a lot’ and for ‘them being called Nova specifically’. I don’t particularly doubt it, but it seemed sort of asserted without much background.