My biggest counterargument to the case that AI progress should be slowed down comes from an observation porby made about a fundamental property we theorize AI systems have, but which they actually lack, and which is the one foundational assumption behind AI risk:

Instrumental convergence, and its corollaries like power-seeking.
The important point is that current AI systems, and the most plausible future ones, have no incentive to learn instrumental goals. The type of AI with enough freedom and few enough constraints to learn instrumental goals, like RL with sufficiently unconstrained action spaces, is essentially useless for capabilities today, and the strongest RL agents use non-instrumental world models.
Thus, instrumental convergence for AI systems is fundamentally wrong. Given that it is the foundational assumption for why superhuman AI systems would pose any risk we couldn't handle, many other arguments for why we might want to slow down AI, for why the alignment problem is hard, and much of the other discussion in the AI governance and technical safety spaces, especially on LW, become unsound: at best they reason from an uncertain foundation, and at worst they reason from a false premise to many false conclusions, like the argument that we should slow AI progress.
Fundamentally, instrumental convergence being wrong would demand pretty vast changes to how we approach the AI topic, from alignment to safety and much more.
To be clear, the fact that the only flaw I could find within AI risk arguments is that they were founded on false premises is actually better than many other failure modes, because it at least shows fundamentally strong, locally valid reasoning on LW, rather than motivated reasoning or other biases that transform true statements into false ones.
One particular consequence of the insight is that OpenAI and Anthropic were fundamentally right in their AI alignment plans, because they have managed to avoid incentivizing instrumental convergence, and in particular LLMs can be extremely capable without being arbitrarily capable, and without acquiring instrumental world models, given more resources.
I learned about the observation from this post below:
https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty
Porby talks about why AI isn't incentivized to learn instrumental goals, and given how much this assumption gets used in AI discourse, sometimes implicitly, I think the likely falsity of instrumental convergence is of great importance.
I have other disagreements, but this is my deepest disagreement with your model (and with other models on which AI is especially dangerous).
EDIT: A new post on instrumental convergence came out, and it showed that many of the inferences made from it weren't just unsound but invalid, and in particular that Nick Bostrom's Superintelligence was wildly invalid in applying instrumental convergence to reach strong conclusions about AI risk.
I'd say that we'd have a 70-80% chance of going through the next decade without causing a billion deaths if powerful AI comes.
My other explanation probably has to do with the fact that it's far easier to work with an already almost-executed object than with a specification, because we are constrained to think about only a subset of possibilities in a reasonable time.
In other words, constraints are useful given that you are already severely constrained, to limit the space of possibilities.
My guess is that for now, I'd give around a 10-30% chance to "AI winter happens for a short period/AI progress slows down" by 2027.
Also, what would you consider super surprising new evidence?
This seems like the likely explanation for any claim that constraints breed creativity or other good things in a field, when the expectation is that the opposite outcome would occur.
To answer these questions:
A world where we've reliably "solved" for x-risks well enough to survive thousands of years without also having meaningfully solved "moral philosophy" is probably physically realizable, but this seems like a pretty fine needle to thread from our current position. (I think if you have a plan for solving AI x-risk that looks like "get to ~human-level AI, pump the brakes real hard, and punt on solving ASI alignment" then maybe you disagree.)
I don't think it takes today-humans a thousand years to come up with a version of indirect normativity (or CEV, or whatever) that actually just works correctly. I'd be somewhat surprised if it took a hundred, but maybe it's actually very tricky. A thousand just seems crazy. A million makes it sound like you're doing something very dumb, like figuring out every shard of each human's values and don't know how to automate things.
One possible answer is that something like CEV does not exist, and yet alignment is still solvable anyway for almost arbitrarily capable AI. That could well happen, and for me personally it is honestly the most likely outcome by default.
There are important arguments against the idea that CEV even exists or is well defined, and we shouldn't assume that technological progress equates to progress towards your preferred philosophy:
https://joecarlsmith.com/2021/06/21/on-the-limits-of-idealized-values
And there might not be any real justifiable way to resolve disagreements between the philosophies/moralities, either, if there isn't a way to converge to a single morality.
I definitely agree that a lot of the purported capability for scheming could be an artifact of the prompts talking about AIs/employees in context. A big problem for basically all capability evaluations at this point is that, with a few exceptions, AIs are basically unable to do anything that isn't language-based, and this sort of evaluation is plausibly plagued by that.
Two things to say here:
This is still concerning if it keeps holding, and for more capable models it would look exactly like scheming, because the data are a lot of what makes it meaningful to talk about a simulacrum from a model being aligned.
This does have an expensive fix, but OpenAI might not pay those costs.
One other plausible theory of the results is that, conditional on the results being true, this might not represent deep learning hitting a wall, but rather that the departure of key people like Ilya Sutskever means OpenAI has lost its mojo and its ability to scale well:
https://www.reddit.com/r/mlscaling/comments/1djoqjh/comment/l9uogp9/
That is to say, a prepotent AI system whose prepotence was not recognized by its developers is highly likely to be misaligned as well.
I agree that it would be bad news from an evidential perspective if we misjudged the AI's capabilities such that they're perceived to be lower than they are, but misjudging capabilities and the level of misalignment are in principle independent variables, not correlated ones, so I wonder what's going on here.
Alright, now that I've read this post, I'll try to respond to what I think you got wrong, and importantly illustrate some general principles.
To respond to this first:
I think this is actually wrong, because synthetic data lets us control what the AI learns and what it values. In particular, we can place honeypots that are practically indistinguishable from the real world, such that if we detect an AI trying to deceive or gain power, the AI almost certainly doesn't know whether we are testing it or whether it's in the real world:
It's the same reason we can't break out of the simulation IRL, except we don't have to face adversarial cognition, so the AI's task is even harder than ours.
See also this link:
https://www.beren.io/2024-05-11-Alignment-in-the-Age-of-Synthetic-Data/
For this:
I think this is wrong. A lot of why I disagree with the pivotal-act framing is probably due to disagreeing with the assumption that future technology will be radically biased towards offense, and while I do think biotechnology is probably pretty offense-biased today, I also think it's tractable to reduce bio-risk without attempting pivotal acts.
Also, I think @evhub's point about the homogeneity of AI takeoff bears on this. While I don't agree with all of its implications, like there being no warning shot for deceptive alignment (because of synthetic data), I think there's a real sense in which a lot of AIs are very likely to be highly homogeneous, which breaks your point here:
https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios
I think that AGIs are more robust to things going wrong than nuclear cores, and more generally I think there is much better evidence for AI robustness than fragility.
@jdp's comment provides more evidence on why this is the case:
Link here:
https://www.lesswrong.com/posts/JcLhYQQADzTsAEaXd/?commentId=7iBb7aF4ctfjLH6AC
I think that there will be generalization of alignment, and more generally I think that alignment generalizes further than capabilities by default, contra you and Nate Soares, for these reasons:
See also this link for more, but I think that's the gist of why I expect AI alignment to generalize much further than AI capabilities. I'd further add that I think evolutionary psychology got this very wrong, predicting much more complex and fragile values in humans than is actually the case:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/
This is covered by my points on why alignment generalizes further than capabilities and why we don't need pivotal acts and why we actually have safe testing grounds for deceptive AI.
Re the sharp capability gain breaking alignment properties: one very crucial advantage we have over evolution is that our goals are much more densely defined, constraining the AI far more than evolution did, where very, very sparse reward was the norm. Critically, sparse-reward RL does not work for capabilities right now, and there are reasons to think it will be far less tractable than RL where rewards are more densely specified.
Another advantage we have over evolution (and over chimpanzees/gorillas/orangutans) is far, far more control over the AI's data sources, which strongly influence its goals.
This is also helpful to point towards more explanation of what the differences are between dense and sparse RL rewards:
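As a toy sketch of the distinction (a hypothetical example of mine, not taken from the posts linked here): a sparse reward gives the agent a signal only at the goal, while a dense reward shapes every single step, which is part of why densely specified objectives constrain behavior so much more tightly.

```python
# Hypothetical gridworld illustration: the same task under a sparse
# vs. a dense (shaped) reward signal.

def sparse_reward(state, goal):
    """Reward only at the goal: almost every transition gives no signal."""
    return 1.0 if state == goal else 0.0

def dense_reward(state, goal):
    """Shaped reward: every step tells the agent whether it moved closer."""
    # Negative Manhattan distance to the goal constrains behavior densely.
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

goal = (3, 3)
path = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3)]

sparse = [sparse_reward(s, goal) for s in path]
dense = [dense_reward(s, goal) for s in path]

print(sparse)  # signal appears only on the final step
print(dense)   # informative gradient on every step
```

Under the sparse scheme the agent gets zero feedback on six of the seven states, so it must stumble onto the goal by luck; under the dense scheme every state carries information about which way to move.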
Yeah, I covered this above, but evolution's loss function was not that simple compared to human goals, and it was ridiculously inexact compared to our attempts to optimize AIs' loss functions, for the reasons I gave above.
I've answered that concern above in synthetic data for why we have the ability to get particular inner behaviors into a system.
The points were covered above, but synthetic data early in training + densely defined reward/utility functions = alignment, because at the stage when AIs receive the data corresponding to values, they don't yet know how to fool humans.
The key is that data on values is what constrains the choice of utility functions, and while values aren't in physics, they are in human books, and I've explained why alignment generalizes further than capabilities.
I think there actually is a simple core of alignment to human values. A lot of why I believe this is that I think about 80-90%, if not more, of our values are broadly shaped by the data, not the prior, and that the same algorithms that power our capabilities also shape our values, though the data matters much more than the algorithm for which values you end up with.
More generally, I've become convinced that evopsych was mostly wrong about how humans form values, and how they get their capabilities in ways that are very alignment relevant.
I also disbelieve the claim that humans have a special algorithm that other species lack, and broadly think human success was due to more compute, more data, and cultural evolution.
Alright: while I think your formalizations of corrigibility failed to get any results, I do think there's a property close to corrigibility that is likely to be compatible with consequentialist reasoning, namely instruction following, and there are reasons to think instruction following and consequentialist reasoning go together:
https://www.lesswrong.com/posts/7NvKrqoQgJkZJmcuD/instruction-following-agi-is-easier-and-more-likely-than
https://www.lesswrong.com/posts/ZdBmKvxBKJH2PBg9W/corrigibility-or-dwim-is-an-attractive-primary-goal-for-agi
https://www.lesswrong.com/posts/k48vB92mjE9Z28C3s/implied-utilities-of-simulators-are-broad-dense-and-shallow
https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty
https://www.lesswrong.com/posts/vs49tuFuaMEd4iskA/one-path-to-coherence-conditionalization
I'm very skeptical that a CEV exists for the reasons @Steven Byrnes addresses in the Valence sequence here:
https://www.lesswrong.com/posts/SqgRtCwueovvwxpDQ/valence-series-2-valence-and-normativity#2_7_Moral_reasoning
But it is also unnecessary for value learning, because of the data on human values and because alignment generalizes further than capabilities.
I addressed why we don't need a first try above.
For the point on corrigibility, I disagree that it's like training the AI to say, as a special case, that 222 + 222 = 555, for two reasons:
I disagree with this, but I do think that mechanistic interpretability does have lots of work to do.
The key disagreement is that I believe we don't need to check all the possibilities, and that even for smarter AIs we can almost certainly still verify their work; more generally, I believe verification is way, way easier than generation.
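The verification/generation asymmetry can be made concrete with a classic example of mine (hypothetical, not from the post under discussion): for subset-sum, checking a claimed answer takes linear time, while finding one by brute force takes exponential time in the worst case.

```python
# Hypothetical sketch of the verification vs. generation asymmetry,
# using subset-sum: verifying is cheap, generating is expensive.
from itertools import combinations

def verify(nums, subset, target):
    """Cheap check: does the claimed subset come from nums and hit the target?"""
    return all(x in nums for x in subset) and sum(subset) == target

def generate(nums, target):
    """Expensive search: brute force over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 9, 8, 4, 5, 7]
answer = generate(nums, 15)       # exponential-time search
print(verify(nums, answer, 15))   # linear-time check of the result
```

The point is only illustrative: the claim in the comment is that smart-AI outputs sit on the "cheap to check" side of an asymmetry like this one, even when producing them is hard.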
I basically disagree with this, both with the assumption that language is very weak and, importantly, with the idea that any AGI-complete problems are left, for the following reasons quoted from Near-mode thinking on AI:
https://www.lesswrong.com/posts/ASLHfy92vCwduvBRZ/near-mode-thinking-on-ai
To address an epistemic point:
You cannot actually do this and hope to get any quality of reasoning, for the same reason that you can't update on nothing/no evidence.
The data matters way more than you think; there's no algorithm that can figure things out with zero data, and Eric Drexler didn't figure out nanotechnology using the null string as input.
This should have been a much larger red flag for problems, but people somehow didn't realize how wrong this claim was.
And that's the end of my very long comment on the problems with this post.