And I suspect we probably can, given scaffolds like https://sakana.ai/ai-scientist/ and its likely improvements (especially if done carefully, e.g. integrating something like Redwood's control agenda, etc.). I'd be curious where you'd disagree (since I expect you probably would) - e.g. do you expect the AI scientists to become x-risky before they're (roughly) human-level at safety research, or that they never scale to human-level, etc.?
Jeremy's response looks to me like it mostly addresses the first branch of your disjunction (AI becomes x-risky before reaching human-level capabilities), so let me address the second:
I am unimpressed by the output of the AI scientist. (To be clear, this is not the same thing as being unimpressed by the work put into it by its developers; it looks to me like they did a great job.) Mostly, however, the output looks to me basically like what I would have predicted, on my prior model of how scaffolding interacts with base models, which goes something like this:
A given model has some base distribution on the cognitive quality of its outputs, which is why resampling can sometimes produce better or worse responses to inputs. What scaffolding does is to essentially act as a more sophisticated form of sampling based on redundancy: having the model check its own output, respond to that output, etc. This can be very crudely viewed as an error correction process that drives down the probability that a "mistake" at some early token ends up propagating throughout the entirety of the scaffolding process and unduly influencing the output, which biases the quality distribution of outputs away from the lower tail and towards the upper tail.
The key moving piece on my model, however, is that all of this is still a function of the base distribution—a rough analogy here would be to best-of-n sampling. And the problem with best-of-n sampling, which looks to me like it carries over to more complicated scaffolding, is that as n increases, the mean of the resulting distribution increases as a sublinear (actually, logarithmic) function of n, while the variance decreases at a similar rate (but even this is misleading, since the resulting distribution will have negative skew, meaning variance decreases more rapidly in the upper tail than in the lower tail).
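To make that intuition concrete, here's a minimal toy sketch (my own construction, not anything from the AI Scientist codebase): treat output quality as a scalar drawn from a fixed base distribution and let "scaffolding" simply keep the best of n draws. The mean of the selected outputs climbs only slowly with n while the spread tightens:

```python
# Toy best-of-n selection from a fixed base distribution (standard normal here).
# The mean of the selected outputs grows sublinearly in n (roughly sqrt(2*ln n)
# for Gaussian tails) while the standard deviation shrinks.
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000

for n in (1, 4, 16, 64, 256):
    best = rng.normal(size=(trials, n)).max(axis=1)  # "scaffolding" as best-of-n
    print(f"n={n:3d}  mean={best.mean():.2f}  std={best.std():.2f}")
```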
Anyway, the upshot of all of this is that scaffolding cannot elicit capabilities that were not already present (in some strong sense) in the base model—meaning, if the base models in question are strongly subhuman at something like scientific research (which it presently looks to me like they still are), scaffolding will not bridge that gap for them. The only thing that can close that gap without unreasonably large amounts of scaffolding, where "unreasonable" here means something a complexity theorist would consider unreasonable, is a shifted base distribution. And that corresponds to the kind of "useful [superhuman] capabilities" Jeremy is worried about.
I'm interested! Also curious as to how this is implemented; are you using retrieval-augmented generation, and if so, with what embeddings?
Epistemic status: exploratory, "shower thought", written as part of a conversation with Claude:
For any given entity (broadly construed here to mean, essentially, any physical system), it is possible to analyze that entity as follows:
Define the set of possible future trajectories that entity might follow, according to some suitably uninformative ignorance prior on its state and (generalized) environment. Then ask, of that set, whether there exists some simple, obvious, or otherwise notable prior on the set in question, that assigns probabilities to various member trajectories in such a way as to establish an upper level set of some kind. Then ask, of that upper level set, how large it is relative to the size of the set as a whole, and (relatedly) how large the difference is between the probability of that upper set's least probable member, and its most probable nonmember. (If you want to conceptualize these sets as infinite and open—although it's unclear to me that one needs to conceptualize them this way—then you can speak instead of "infimum" and "supremum".)
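(A toy, finite version of this construction, just to pin down the definitions; the prior and the cutoff below are entirely made up:)

```python
# Finite toy version of the construction: a set of candidate future trajectories,
# a prior over them, an upper level set defined by a probability threshold, and
# the gap between that set's least probable member and its most probable nonmember.
import numpy as np

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.full(100, 0.1))  # made-up prior over 100 toy trajectories

threshold = 1 / len(probs)                # arbitrary cutoff for "plausible"
upper = probs[probs >= threshold]         # the upper level set
lower = probs[probs < threshold]          # everything below it

relative_size = len(upper) / len(probs)   # how tightly constrained the entity is
gap = upper.min() - lower.max()           # the "plausibility gap" at the cutoff

print(f"upper set: {relative_size:.0%} of trajectories, {upper.sum():.0%} of the mass")
print(f"gap at the cutoff: {gap:.4f}")
```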
The claim is that, for some specific kinds of entity, there will be quite a sharp difference between its upper level set and its lower level set, constituting a "plausibility gap": trajectories within the upper set are in some sense "plausible" ways of extrapolating the entity forward in time. And then the relative size of that upper set becomes relevant, because it indicates how tightly constrained the entity's time-evolution is by its present state (and environment). So, the claim is that there are certain systems for which their forwards time-evolution is very tightly constrained indeed, and these systems are "agents"; and there are systems for which barely any upper level set exists, and these are "simplistic" entities whose behavior is essentially entropic. And humans (seem to me to) occupy a median position between these two extremes.
One additional wrinkle, however, is that "agency", as I've defined it here, may additionally play the role of a (dynamical system) attractor: entities already close to having full agency will be more tightly constrained in their future evolution, generally in the direction of becoming ever more agentic; meanwhile, entirely inanimate systems are not at all pulled in the direction of becoming more constrained or agentic; they are outside of the agency attractor's basin of attraction. However, humans, if they indeed exist at some sort of halfway point between fully coherent agency and a complete lack of coherence, are left interestingly placed under this framing: we would exist at the boundary of the agency attractor's basin of attraction. And since many such boundaries are fundamentally fractal or chaotic in nature, that could have troubling implications for the trajectories of points along those boundaries trying to reach reflective equilibrium, as it were.
The rule of thumb test I tend to use to assess proposed definitions of agency (at least from around these parts) is whether they'd class a black hole as an agent. It's not clear to me whether this definition does; I would have said it very likely does based on everything you wrote, except for this one part here:
A cubic meter of rock has a persistent boundary over time, but no interior states in an informational sense, and therefore is not an agent. To see it has no interior, note that anything that puts information into the surface layer of the rock transmits that same information into the very interior (vibrations, motion, etc.).
I think I don't really understand what is meant by "no interior" here, or why the argument given supports the notion that a cubic meter of rock has no interior. You can draw a Markov boundary around the rock's surface, and then the interior state of the rock definitely is independent of the exterior environment conditioned on said boundary, right?
If I try very hard to extract a meaning out of the quoted paragraph, I might guess (with very low confidence) that what it's trying to say is that a rock's internal state has a one-to-one relation with the external forces or stimuli that transmit information through its surface, but in this case a black hole passes the test, in that the black hole's internal state definitely is not one-to-one with the information entering through its event horizon. In other words, if my very low-confidence understanding of the quoted paragraph is correct, then black holes are classified as agents under this definition.
(This test is of interest to me because black holes tend to pass other, potentially related definitions of agency, such as agency as optimization, agency as compression, etc. I'm not sure whether this says that something is off with our intuitive notion of agency, that something is off with our attempts at rigorously defining it, or simply that black holes are a special kind of "physical agent" built in-to the laws of physics.)
How is a Bayesian agent supposed to modify priors except by updating on the basis of evidence?
They're not! But humans aren't ideal Bayesians, and it's entirely possible for them to update in a way that does change their priors (encoded by intuitions) moving forward. In particular, the difference between having updated one's intuitive prior, and keeping the intuitive prior around but also keeping track of a different, consciously held posterior, is that the former is vastly less likely to "de-update", because the evidence that went into the update isn't kept around in a form that subjects it to (potential) refutation.
(IIRC, E.T. Jaynes talks about this distinction in Chapter 18 of Probability Theory: The Logic of Science, and he models it by introducing something he calls an A_p distribution. His exposition of this idea is uncharacteristically unclear, and his A_p distribution looks basically like a beta distribution with specific values for α and β, but it does seem to capture the distinction I see between "intuitive" updating versus "conscious" updating.)
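(If it helps, here's a toy sketch of the contrast I have in mind; the Beta bookkeeping is only loosely in the spirit of Jaynes's A_p idea, and all the numbers are arbitrary:)

```python
# Toy contrast between the two modes of updating: one agent keeps its prior and
# its evidence separate, and so can revert if the evidence is refuted; the other
# folds the evidence directly into its prior and forgets it.
class ConsciousUpdater:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta
        self.evidence = []                      # (successes, failures), kept around

    def observe(self, s, f): self.evidence.append((s, f))
    def retract(self): self.evidence.pop()      # the argument turned out to have a hole

    def belief(self):
        a = self.alpha + sum(s for s, _ in self.evidence)
        b = self.beta + sum(f for _, f in self.evidence)
        return a / (a + b)

class IntuitiveUpdater:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta

    def observe(self, s, f):                    # evidence is absorbed and discarded
        self.alpha += s
        self.beta += f

    def belief(self):
        return self.alpha / (self.alpha + self.beta)

conscious, intuitive = ConsciousUpdater(), IntuitiveUpdater()
conscious.observe(8, 2); intuitive.observe(8, 2)
print(conscious.belief(), intuitive.belief())   # both at 0.75

conscious.retract()                             # only the conscious updater can de-update
print(conscious.belief(), intuitive.belief())   # 0.5 vs. 0.75
```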
There's also a failure mode of focusing on "which arguments are the best" instead of "what is actually true". I don't understand this failure mode very well, except that I've seen myself and others fall into it. Falling into it looks like focusing a lot on specific arguments, and spending a lot of time working out what was meant by the words, rather than feeling comfortable adjusting arguments to fit better into your own ontology and to fit better with your own beliefs.
My sense is that this is because different people have different intuitive priors, and process arguments (mostly) as a kind of Bayesian evidence that updates those priors, rather than modifying the priors (i.e. intuitions) directly.
Eliezer in particular strikes me as having an intuitive prior for AI alignment outcomes that looks very similar to his priors for tasks like writing bug-free software on the first try, assessing the likelihood that a given plan will play out as envisioned, or correctly compensating for optimism bias, which is what gives rise to posts concerning concepts like security mindset.
Other people don't share this intuitive prior, and so have to be argued into it. To such people, the reliability of the arguments in question is actually critical, because if those arguments turn out to have holes, that reverts the downstream updates and restores the original intuitive prior, whatever it looked like—kind of like a souped up version of the burden of proof concept, where the initial placement of that burden is determined entirely via the intuitive judgement of the individual.
This also seems related to why different people seem to naturally gravitate towards either conjunctive or disjunctive models of catastrophic outcomes from AI misalignment: the conjunctive impulse stems from an intuition that AI catastrophe is a priori unlikely, and so a bunch of different claims have to hold simultaneously in order to force a large enough update, whereas the disjunctive impulse stems from the notion that any given low-level claim need not be on particularly firm ground, because the high-level thesis of AI catastrophe robustly manifests via different but converging lines of reasoning.
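(For concreteness, with entirely made-up numbers: take five load-bearing claims, each at 0.8 credence. Treated conjunctively they must all hold; treated disjunctively, each is an independent route to the same conclusion:)

```python
# Toy illustration of the conjunctive vs. disjunctive framings; credences are arbitrary.
claims = [0.8] * 5

conjunctive = 1.0
for p in claims:
    conjunctive *= p              # every claim must hold: ~0.33

disjunctive = 1.0
for p in claims:
    disjunctive *= (1 - p)        # the conclusion fails only if every route fails
disjunctive = 1 - disjunctive     # ~0.9997 (assuming independence, for illustration)

print(conjunctive, disjunctive)
```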
See also: the focus on coherence, where some people place great importance on the question of whether VNM or other coherence theorems show what Eliezer et al. purport they show about superintelligent agents, versus the competing model wherein none of these individual theorems are important in their particulars, so much as the direction they seem to point, hinting at the concept of what idealized behavior with respect to non-gerrymandered physical resources ought to look like.
I think the real question, then, is where these differences in intuition come from, and unfortunately the answer might have to do a lot with people's backgrounds, and the habits and heuristics they picked up from said backgrounds—something quite difficult to get at via specific, concrete argumentation.
Can we not speak of apparent coherence relative to a particular standpoint? If a given system seems to be behaving in such a way that you personally can't see a way to construct for it a Dutch book, a series of interactions with it such that energy/negentropy/resources can be extracted from it and accrue to you, that makes the system inexploitable with respect to you, and therefore at least as coherent as you are. The closer to maximal coherence a given system is, the less it will visibly depart from the appearance of coherent behavior, and hence utility function maximization; the fact that various quibbles can be made about various coherence theorems does not seem to me to negate this conclusion.
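(To make the "construct for it a Dutch book" notion concrete, here's a toy money pump of the kind I have in mind, with arbitrary prices. An agent whose preferences cycle can be walked around that cycle indefinitely, handing over a little of its resources at each step while ending up exactly where it started:)

```python
# A toy money pump: the agent's preferences cycle (A > B > C > A), so it will pay
# a small fee for each "upgrade" around the cycle, and a trader can extract
# resources from it indefinitely.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # read ("X", "Y") as "X preferred to Y"

def pays_to_swap(holding, offered):
    return (offered, holding) in prefers          # pays a fee iff it prefers the offer

holding, extracted = "A", 0.0
for offered in ["C", "B", "A"] * 3:               # walk the cycle three times
    if pays_to_swap(holding, offered):
        holding, extracted = offered, extracted + 1.0   # fee of 1 unit per swap

print(holding, extracted)   # "A", 9.0: back where it started, minus its resources
```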
Humans are more coherent than mice, and there are activities and processes which individual humans occasionally undergo in order to emerge more coherent than they were going in; in some sense this is the way it has to be, in any universe where (1) the initial conditions don't start out giving you fully coherent embodied agents, and (2) physics requires continuity of physical processes, so that fully formed coherent embodied agents can't spring into existence where there previously were none; there must be some pathway from incoherent, inanimate matter from which energy may be freely extracted, to highly organized configurations of matter from which energy may be extracted only with great difficulty, if it can be extracted at all.
If you expect the endpoint of that process to not fully accord with the von Neumann-Morgenstern axioms, because somebody once challenged the completeness axiom, independence axiom, continuity axiom, etc., the question still remains as to whether departures from those axioms will give rise to exploitable holes in the behavior of such systems, from the perspective of much weaker agents such as ourselves. And if the answer is "no", then it seems to me the search for ways to make a weaker, less coherent agent into a stronger, more coherent agent is well-motivated, and necessary—an appeal to consequences in a certain sense, yes, but one that I endorse!
I seem to recall hearing a phrase I liked, which appears to concisely summarize the concern as: "There's no canonical way to scale me up."
Does that sound right to you?
Well, if we're following standard ML best practices, we have a train set, a dev set, and a test set. The purpose of the dev set is to check and ensure that things are generalizing properly. If they aren't generalizing properly, we tweak various hyperparameters of the model and retrain until they do generalize properly on the dev set. Then we do a final check on the test set to ensure we didn't overfit the dev set. If you forgot or never learned this stuff, I highly recommend brushing up on it.
(Just to be clear: yes, I know what training and test sets are, as well as dev sets/validation sets. You might notice I actually used the phrase "validation set" in my earlier reply to you, so it's not a matter of guessing someone's password—I'm quite familiar with these concepts, as someone who's implemented ML models myself.)
Generally speaking, training, validation, and test datasets are all sourced the same way—in fact, sometimes they're literally sourced from the same dataset, and the delineation between train/dev/test is introduced during training itself, by arbitrarily carving up the original dataset into smaller sets of appropriate size. This may capture the idea of "IID" you seem to appeal to elsewhere in your comment—that it's possible to test the model's generalization performance on some held-out subset of data from the same source(s) it was trained on.
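(In code, the standard carving-up looks something like the following sketch, with placeholder data standing in for whatever the real dataset is:)

```python
# One source dataset, arbitrarily split into train/dev/test, so that all three
# splits are IID samples from the same underlying distribution. Placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
# 70% train / 15% dev / 15% test, all drawn from the same source
```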
In ML terms, what the thought experiment points to is a form of underlying distributional shift, one that isn't (and can't be) captured by "IID" validation or test datasets. The QFT model in particular highlights the extent to which your training process, however broad or inclusive from a parochial human standpoint, contains many incidental distributional correlates to your training signal which (1) exist in all of your data, including any you might hope to rely on to validate your model's generalization performance, and (2) cease to correlate off-distribution, during deployment.
This can be caused by what you call "omniscience", but it need not be; there are other, more plausible distributional differences that might be picked up on by other kinds of models. But QFT is (as far as our current understanding of physics goes) very close to the base ontology of our universe, and so what is inferrable using QFT is naturally going to be very different from what is inferrable using some other (less powerful) ontology. QFT is a very powerful ontology!
If you want to call that "omniscience", you can, although note that strictly speaking the model is still just working from inferences from training data. It's just that, if you feed enough data to a model that can hold entire swaths of the physical universe inside of its metaphorical "head", pretty soon hypotheses that involve the actual state of that universe will begin to outperform hypotheses that don't, and which instead use some kind of lossy approximation of that state involving intermediary concepts like "intent", "belief", "agent", "subjective state", etc.
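(Here's a toy, fabricated-data sketch of the kind of incidental correlate I mean, nothing QFT-specific: a "tell" feature that tracks the training signal everywhere in the source data, and therefore in every IID split of it, but decouples at deployment:)

```python
# A spurious "tell" correlates with the label throughout the source data, so it
# survives any IID train/dev/test split, and then decouples at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_data(n, tell_tracks_label):
    signal = rng.normal(size=(n, 1))                       # the "real" feature
    label = (signal[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    tell = (label[:, None] + 0.1 * rng.normal(size=(n, 1)) if tell_tracks_label
            else rng.normal(size=(n, 1)))                  # decoupled off-distribution
    return np.hstack([signal, tell]), label

X, y = make_data(5000, tell_tracks_label=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
X_deploy, y_deploy = make_data(5000, tell_tracks_label=False)

print("held-out (IID) accuracy:", clf.score(X_test, y_test))      # looks excellent
print("deployment accuracy:    ", clf.score(X_deploy, y_deploy))  # much worse
```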
In principle we could construct a test set or dev set either before or after the model has been trained. It shouldn't make a difference under normal circumstances. It sounds like maybe you're discussing a scenario where the model has achieved a level of omniscience, and it does fine on data that was available during its training, because it's able to read off of an omniscient world-model. But then it fails on data generated in the future, because the translation method for its omniscient world-model only works on artifacts that were present during training. Basically, the time at which the data was generated could constitute a hidden and unexpected source of distribution shift. Does that summarize the core concern?
You're close; I'd say the concern is slightly worse than that. It's that the "future data" never actually comes into existence, at any point. So the source of distributional shift isn't just "the data is generated at the wrong time", it's "the data never gets externally generated to begin with, and you (the model) have to work with predictions of what the data counterfactually would have been, had it been generated".
(This would be the case e.g. with any concept of "human approval" that came from a literal physical human or group of humans during training, and not after the system was deployed "in the wild".)
In any case, I would argue that "accidental omniscience" characterizes the problem better than "alien abstractions". As before, you can imagine an accidentally-omniscient model that uses vanilla abstractions, or a non-omniscient model that uses alien ones.
The problem is that "vanilla" abstractions are not the most predictively useful possible abstractions, if you've got access to better ones. And models whose ambient hypothesis space is broad enough to include better abstractions (from the standpoint of predictive accuracy) will gravitate towards those, as is incentivized by the outer form of the training task. QFT is the extreme example of a "better abstraction", but in principle (if the natural abstraction hypothesis fails) there will be all sorts and shapes of abstractions, and some of them will be available to us, and some of them will be available to the model, and these sets will not fully overlap—which is a concern in worlds where different abstractions lead to different generalization properties.
Ah, yeah, I can see how I might've been unclear there. I was implicitly taking CoT into account when I talked about the "base distribution" of the model's outputs, as it's essentially ubiquitous across these kinds of scaffolding projects. I agree that if you take a non-recurrent model's O(1) output and equip it with a form of recurrent state that you permit to continue for O(n) iterations, that will produce a qualitatively different distribution of outputs than the O(1) distribution.
In that sense, I readily admit CoT into the class of improvements I earlier characterized as "shifted distribution". I just don't think this gets you very far in terms of the overarching problem, since the recurrent O(n) distribution is the one whose output I find unimpressive, and the method that was used to obtain it from the (even less impressive) O(1) distribution is a one-time trick.[1]
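(For clarity about what I mean by the O(n) trick, a minimal sketch, where `sample` is a stand-in for a single forward pass / short completion from some base model; a hypothetical interface, not any particular API:)

```python
# The scaffold just feeds the model's own output back in as recurrent state for
# n rounds; each call is still the same O(1) primitive drawn from the same base
# distribution. `sample` is a hypothetical stand-in, not a real API.
from typing import Callable

def scaffolded(sample: Callable[[str], str], prompt: str, n: int) -> str:
    transcript = prompt
    for _ in range(n):                 # O(n) iterations of recurrent state
        step = sample(transcript)      # one O(1) call to the base model
        transcript += "\n" + step      # condition the next call on prior output
    return transcript

# usage, with a trivial stand-in for the model:
print(scaffolded(lambda t: f"[thought after {len(t)} chars]", "Problem: ...", n=3))
```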
I also agree that another way to obtain a higher quality output distribution is to load relevant context from elsewhere. This once more seems to me like something of a red herring when it comes to the overarching question of how to get an LLM to produce human- or superhuman-level research; you can load its context with research humans have already done, but this is again a one-time trick, and not one that seems like it would enable novel research built atop the human-written research unless the base model possesses a baseline level of creativity and insight, etc.[2]
If you don't already share (or at least understand) a good chunk of my intuitions here, the above probably sounds at least a little like I'm carving out special exceptions: conceding each point individually, while maintaining that they bear little on my core thesis. To address that, let me attempt to put a finger on some of the core intuitions I'm bringing to the table:
On my model of (good) scientific research de novo, a lot of key cognitive work occurs during what you might call "generation" and "synthesis", where "generation" involves coming up with hypotheses that merit testing, picking the most promising of those, and designing a robust experiment that sheds insight; "synthesis" then consists of interpreting the experimental results so as to figure out the right takeaway (which very rarely ought to look like "we confirmed/disconfirmed the starting hypothesis").
Neither of these steps is easily transmissible, since they hinge very tightly on a given individual's research ability and intellectual "taste"; and neither of them tends to end up very well described in the writeups and papers that are released afterwards. This is hard stuff even for very bright humans, which implies to me that it requires a very high quality of thought to manage consistently. And it's these steps that I don't think scaffolding can help much with; I think the model has to be smart enough, at baseline, that its landscape of cognitive reachability contains these kinds of insights, before they can be elicited via an external method like scaffolding.[3]
I'm not sure whether you could theoretically obtain greater benefits from allowing more than O(n) iterations, but either way you'd start to bump up against context window limitations fairly quickly. ↩︎
Consider the extreme case where we prompt the model with (among other things) a fully fleshed out solution to the AI alignment problem, before asking it to propose a workable solution to the AI alignment problem; it seems clear enough that in this case, almost all of the relevant cognitive work happened before the model even received its prompt. ↩︎
I'm uncertain-leaning-yes on the question of whether you can get to a sufficiently "smart" base model via mere continued scaling of parameter count and data size; but that connects back to the original topic of whether said "smart" model would need to be capable of goal-directed thinking, on which I think I agree with Jeremy that it would; much of my model of good de novo research, described above, seems to me to draw on the same capabilities that characterize general-purpose goal-direction. ↩︎