All else equal, I think minimizing model entropy (i.e. the number of weights) is desirable. In other words, you want to keep the size of the model class small.
Roughly, alignment could be viewed as constructing a list of constraints or criteria that a model must satisfy in order to be considered safe. As the size of the model class grows, more models will satisfy any particular constraint. The complexity of the constraints likely needs to grow along with the complexity of the model class.
If a large number of models satisfy all the constraints, there is a large amount of behavior that is unconstrained and unaccounted for. We've decided that we don't care... (read more)
He's talking about "modern AI training", i.e. "giant, inscrutable matrices of floating-point numbers". My impression is that he thinks it is possible (but extremely difficult) to build aligned ASI, but nearly impossible to bootstrap modern DL systems to alignment.
In a causal-masked transformer, an attention layer can query the previous layer's activations at any earlier column in the context window. Gradients flow through those attention connections, so each column's earlier-layer activations are optimized not just to improve the prediction of that column's own next token, but also to produce values that are useful for later columns to attend to when predicting theirs.
I think this is part of the reason why prompt engineering is so fiddly.
GPT essentially does a limited form of branch prediction and speculative execution. It guesses (based on the tokens evaluated so far) what pre-computation will be useful for future token predictions. If its guess is wrong, the pre-computation will be useless.
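To make the gradient-flow point concrete, here's a toy single-layer sketch in PyTorch (my own illustration, not GPT's actual implementation; the shapes and random weights are arbitrary). The loss depends only on the final column's output, yet gradients still reach the earlier columns' activations through the attention connections:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, d = 6, 16                                 # sequence length, model width
x = torch.randn(T, d, requires_grad=True)    # stand-in for previous-layer activations

Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

# Causal mask: column i may only attend to columns j <= i.
mask = torch.tril(torch.ones(T, T)).bool()
scores = (q @ k.T) / d**0.5
scores = scores.masked_fill(~mask, float("-inf"))
attn_out = F.softmax(scores, dim=-1) @ v

# Pretend the loss only depends on the final column's output
# (i.e. the prediction for the token after the full prefix).
loss = attn_out[-1].sum()
loss.backward()

# Earlier columns still receive gradient through the attention connections,
# so they are trained to produce values useful for later predictions.
print(x.grad[:-1].abs().sum())   # non-zero
```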
Prompts let... (read 353 more words →)
Yeah I found it pretty easy to "jailbreak" too. For example, here is what appears to be the core web server API code.
I didn't really do anything special to get it. I just started by asking it to list the files in the home directory and went from there.
For GPT-style LLMs, is it possible to prove statements like the following?
Choose some tokens $t_1, \dots, t_k$, and a fixed $\epsilon > 0$:
There does not exist a prefix of tokens $p_1, \dots, p_m$ such that $P(t_1 \dots t_k \mid p_1 \dots p_m) > 1 - \epsilon$
More generally, is it possible to prove interesting universal statements? Sure, you can brute force it for LLMs with a finite context window but that's both infeasible and boring. And you can specifically construct contrived LLMs where this is possible but that's also boring.
I suspect that it's not possible/practical in general because the LLM can do arbitrary computation to predict the next token, but maybe I'm wrong.
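For concreteness, here's what the brute-force version would look like for a single target token (a sketch only; `next_token_prob`, the vocabulary size, and the context length are hypothetical stand-ins). The nested enumeration is exactly why it's infeasible:

```python
from itertools import product

VOCAB = range(50_000)    # assumed GPT-scale vocabulary
MAX_CONTEXT = 2048       # assumed finite context window

def next_token_prob(prefix, token):
    """Hypothetical stand-in for an actual forward pass of the model."""
    raise NotImplementedError

def claim_holds_everywhere(token, eps):
    """Exhaustively check: no prefix makes P(token | prefix) > 1 - eps."""
    for length in range(1, MAX_CONTEXT + 1):
        # There are len(VOCAB) ** length prefixes at this length, so the
        # total work is on the order of 50_000 ** 2048 forward passes.
        for prefix in product(VOCAB, repeat=length):
            if next_token_prob(prefix, token) > 1 - eps:
                return False
    return True
```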
Direct self-improvement (i.e. rewriting itself at the cognitive level) does seem much, much harder with deep learning systems than with the sort of systems Eliezer originally focused on.
In DL, there is no distinction between "code" and "data"; it's all messily packed together in the weights. Classic RSI relies on the ability to improve and reason about the code (relatively simple) without needing to consider the data (irreducibly complicated).
Any verification that a change to the weights/architecture will preserve a particular non-trivial property (e.g. avoiding value drift) is likely to be commensurate in complexity with the weights themselves. So... very complex.
The safest "self-improvement" changes probably look more like performance/parallelization improvements than "cognitive"... (read more)
For people who are just reading cfoster0's comment and then skipping the post itself, I recommend you still take a look. I think his comment is a bit unfair and reads more like a statement of frustration with LLM analysis in general than commentary on this post in particular.
This is awesome! So far, I'm not seeing much engagement (in the comments) with most of the new ideas in this post, but I suspect this is due to its length and sprawling nature rather than a lack of interest. This post is a solid start on creating a common vocabulary and framework for thinking about LLMs.
I like the work you did on formalizing LLMs as a stochastic process, but I suspect that some of the exploration of the consequences is more distracting than helpful in an overview like this. In particular: 4.B, 4.C, 4.D, 4.E, 5.B, and 5.C. These results are mostly an enumeration of basic properties of finite-state Markov Chains, rather than... (read more)
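To make the Markov-chain framing concrete for anyone skimming, here is roughly how I picture it (a toy sketch of my own, not the post's notation): with a finite context window, the state is the current window of tokens, and sampling the next token is a stochastic transition between states.

```python
import random

CONTEXT = 4                      # toy window size (assumption)
VOCAB = ["a", "b", "c"]          # toy vocabulary (assumption)

def next_token_dist(state):
    """Stand-in for the LLM's next-token distribution given the window."""
    return {t: 1 / len(VOCAB) for t in VOCAB}   # uniform placeholder

def step(state):
    """One Markov transition: sample a token, then shift the window."""
    dist = next_token_dist(state)
    token = random.choices(list(dist), weights=list(dist.values()))[0]
    return (state + (token,))[-CONTEXT:]

state = ("a",)
for _ in range(10):
    state = step(state)
print(state)
```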
I've not really seen it written up, but it's conceptually similar to the classic ML ideas of overfitting, over-parameterization, under-specification, and generalization. If you imagine your alignment constraints as a kind of training data for the model, then those ideas fall into place nicely.
After some searching, the most relevant thing I've found is Section 9 (page 44) of Interpretable machine learning: Fundamental principles and 10 grand challenges. Larger model classes often have bigger Rashomon sets and different models in the same Rashomon set can behave very differently.