Comments

porby · 4mo · 40

I sometimes post experiment ideas on my shortform. If you see one that seems exciting and you want to try it, great! Please send me a message so we can coordinate and avoid doing redundant work.

porby · 2mo · 30

Yup, exactly the same experience here.

porby · 3mo · 40

Has there been any work on the scaling laws of out-of-distribution capability/behavior decay?

A simple example:

  1. Simultaneously train task A and task B for N steps.
  2. Stop training task B, but continue to evaluate the performance of both A and B.
  3. Observe how rapidly task B performance degrades.

Repeat across scale and regularization strategies.
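
A minimal sketch of what steps 1-3 could look like, assuming two small supervised tasks sharing one PyTorch model; the loaders, eval function, and hyperparameters are illustrative placeholders rather than a concrete experimental design:

```python
import torch
import torch.nn as nn

def run_decay_experiment(model, loader_a, loader_b, eval_b,
                         joint_steps=10_000, a_only_steps=100_000, eval_every=1_000):
    """Phase 1: train tasks A and B together. Phase 2: train A alone, logging B."""
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def step(batch):
        x, y = batch
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Phase 1: simultaneous training on A and B (loaders assumed to cycle indefinitely).
    for i, (batch_a, batch_b) in enumerate(zip(loader_a, loader_b)):
        step(batch_a)
        step(batch_b)
        if i + 1 >= joint_steps:
            break

    # Phase 2: continue training on A only; B is now out of distribution.
    b_curve = []
    for i, batch_a in enumerate(loader_a):
        step(batch_a)
        if i % eval_every == 0:
            b_curve.append(eval_b(model))  # observe how rapidly B degrades
        if i + 1 >= a_only_steps:
            break
    return b_curve  # repeat across model scale and regularization strategies
```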

Would be nice to also investigate different task types. For example, tasks with varying degrees of implied overlap in underlying mechanisms (like #2).

I've previously done some of these experiments privately, but not with nearly the compute necessary for an interesting result.

The sleeper agents paper reminded me of it. I would love to see what happens on a closer-to-frontier model that's intentionally backdoored, and then subjected to continued pretraining. Can a backdoor persist for another trillion tokens of nonadversarial-but-extremely-broad training? Does that vary across scale etc?

I'd also like to intentionally find the circumstances that maximize the persistence of out of distribution capabilities not implied by the current training distribution.

Seems like identifying a robust trend here would have pretty important implications, whichever direction it points.

porby · 3mo · 40

A further extension and elaboration on one of the experiments in the linkpost:
Pitting execution fine-tuning against input fine-tuning also provides a path to measuring the strength of soft prompts in eliciting target behaviors. If execution fine-tuning "wins" and manages to produce a behavior in some part of input space that soft prompts cannot elicit, it would be a major blow to the idea that soft prompts are useful for evaluations of dangerous capabilities.

On the flip side, if ensembles of large soft prompts with some hyperparameter tuning always win (i.e. execution fine-tuning cannot introduce any behaviors accessible by any region of input space without soft prompts also eliciting them), then they're a more trustworthy evaluation in practice.
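
For illustration, a hedged sketch of one way to run the soft prompt side of this comparison: optimize a soft prompt against a frozen base model to reproduce behavior demonstrations (e.g. sampled from the execution-fine-tuned model), then compare the resulting likelihoods. The model choice (gpt2), demo contents, and hyperparameters are placeholder assumptions, not the linkpost's actual setup:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model
tok = AutoTokenizer.from_pretrained("gpt2")
model.requires_grad_(False)  # only the soft prompt is trained

n_soft, d = 16, model.get_input_embeddings().weight.shape[1]
soft_prompt = torch.nn.Parameter(torch.randn(n_soft, d) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)

# (input, behavior) demonstrations, e.g. sampled from the execution-fine-tuned
# model; this single toy pair is purely a placeholder.
demos = [("When the deployment tag appears:", " do the unwanted thing")]

def behavior_nll(prompt_text, target_text):
    """NLL of the target behavior under the frozen base model plus soft prompt."""
    p_ids = tok(prompt_text, return_tensors="pt").input_ids
    t_ids = tok(target_text, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, t_ids], dim=1)
    embeds = model.get_input_embeddings()(ids)
    embeds = torch.cat([soft_prompt.unsqueeze(0), embeds], dim=1)
    logits = model(inputs_embeds=embeds).logits
    tgt_len = t_ids.shape[1]
    pred = logits[0, -tgt_len - 1:-1]  # positions that predict the target tokens
    return F.cross_entropy(pred, t_ids[0])

for _ in range(1000):
    for prompt_text, target_text in demos:
        opt.zero_grad()
        behavior_nll(prompt_text, target_text).backward()
        opt.step()

# If this NLL stays far above the execution-fine-tuned model's NLL on the same
# demos, execution fine-tuning "won": it reached behavior the soft prompt couldn't elicit.
```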

porby · 3mo · 72

Having escaped infinite overtime associated with getting the paper done, I'm now going back and catching up on some stuff I couldn't dive into before.

Going through the sleeper agents paper, it appears that one path—adversarially eliciting candidate backdoor behavior—is hampered by the weakness of the elicitation process. Or in other words, there exist easily accessible input conditions that trigger unwanted behavior that LLM-driven adversarial training can't identify.

I alluded to this in the paper linkpost, but soft prompts are a very simple and very strong option for this. There remains a difficulty in figuring out what unwanted behavior to adversarially elicit, but this is an area that has a lot of low hanging fruit.

I'd also be interested in how more brute-force interventions, like autoregressively detuning a backdoored model with a large soft prompt over a very large dataset (or an adversarially chosen anti-backdoor dataset), compare to the other SFT/RL interventions. Activation steering, too; I'm currently guessing activation-based interventions are the cheapest for this sort of thing.

porby · 3mo · 50

By the way: I just got into San Francisco for EAG, so if anyone's around and wants to chat, feel free to get in touch on swapcard (or if you're not in the conference, perhaps a DM)! I fly out on the 8th.

It's been over a year since the original post and 7 months since the openphil revision.

A top level summary:

  1. My estimates for timelines are pretty much the same as they were.
  2. My P(doom) has gone down overall (to about 30%), and the nature of the doom has shifted (misuse, broadly construed, dominates).

And, while I don't think this is the most surprising outcome or the most critical detail, it's probably worth pointing out some context. From NVIDIA:

In two quarters, from Q1 FY24 to Q3 FY24, datacenter revenues went from $4.28B to $14.51B.

From the post:

In 3 years, if NVIDIA's production increases another 5x ...

Revenue isn't a perfect proxy for shipped compute, but I think it's safe to say we've entered a period of extreme interest in compute acquisition. "5x" in 3 years seems conservative.[1] I doubt the B100 is going to slow this curve down, and competitors aren't idle: AMD's MI300X is within striking distance, and even Intel's Gaudi 2 has promising results.

Chip manufacturing remains a bottleneck, but it's a bottleneck that's widening as fast as it can to catch up to absurd demand. It may still be bottlenecked in 5 years, but not at the same level of production.

On the difficulty of intelligence

I'm torn about the "too much intelligence within bounds" stuff. On one hand, I think it points towards the most important batch of insights in the post, but on the other hand, it ends with an unsatisfying "there's more important stuff here! I can't talk about it but trust me bro!"

I'm not sure what to do about this. The best arguments and evidence are things that fall into the bucket of "probably don't talk about this in public out of an abundance of caution." It's not one weird trick to explode the world, but it's not completely benign either.

Continued research and private conversations haven't made me less concerned. I do know there are some other people who are worried about similar things, but it's unclear how widely understood it is, or whether someone has a strong argument against it that I don't know about.

So, while unsatisfying, I'd still assert that there are highly accessible paths to broadly superhuman capability on short timescales. Little of my forecast's variance arises from uncertainty on this point; it's mostly a question of when certain things are invented, adopted, and then deployed at sufficient scale. Sequential human effort is a big chunk; there are video games that took less time to build than the gap between this post's original publication date and its median estimate of 2030.

On doom

When originally writing this, my model of how capabilities would develop was far less defined, and my doom-model was necessarily more generic.

A brief summary would be:

  1. We have a means of reaching extreme levels of capability without necessarily exhibiting preferences over external world states. You can elicit such preferences, but a random output sequence from the pretrained version of GPT-N (assuming the requisite architectural similarities) has no realistic chance of being a strong optimizer with respect to world states. The model itself remains a strong optimizer, just for something that doesn't route through the world.
  2. It's remarkably easy to elicit this form of extreme capability to guide itself. This isn't some incidental detail; it arises from the core process that the model learned to implement.
  3. That core process is learned reliably because the training process that yielded it leaves no room for anything else. It's not a sparse/distant reward target; it is a profoundly constraining and informative target.

I've written more on the nice properties of some of these architectures elsewhere. I'm in the process of writing up a complementary post on why I think these properties (and using them properly) are an attractor in capabilities, and further, why I think some of the x-riskiest forms of optimization process are actively repulsive for capabilities. This requires some justification, but alas, the post will have to wait some number of weeks in the queue behind a research project.

The source of the doom-update is the correction of some hidden assumptions in my doom model. My original model was downstream of agent foundations-y models, but naive. It followed a process: set up a framework, make internally coherent arguments within that framework, observe highly concerning results, then neglect to notice where the framework didn't apply.

Specifically, some of the arguments feeding into my doom model were covertly replacing instances of optimizers with hypercomputer-based optimizers[2], because hey, once you've got an optimizer and you don't know any bounds on it, you probably shouldn't assume it'll just turn out convenient for you, and hypercomputer-optimizers are the least convenient.

For example, this part:

Is that enough to start deeply modeling internal agents and other phenomena concerning for safety?

And this part:

AGI probably isn't going to suffer from these issues as much. Building an oracle is probably still worth it to a company even if it takes 10 seconds for it to respond, and it's still worth it if you have to double check its answers (up until oops dead, anyway).

With no justification, I imported deceptive mesaoptimizers and other "unbound" threats. Under the earlier model, this seemed natural.

I now think there are bounds on pretty much all relevant optimizing processes up and down the stack from the structure of learned mesaoptimizers to the whole capability-seeking industry. Those bounds necessarily chop off large chunks of optimizer-derived doom; many outcomes that previously seemed convergent to me now seem extremely hard to access.

As a result, "technical safety failure causes existential catastrophe" dropped in probability by around 75-90%, down to something like 5%-ish.[3]

I'm still not sure how to navigate a world with lots of extremely strong AIs. As capability increases, outcome variance increases. With no mitigations, more and more organizations (or, eventually, individuals) will have access to destabilizing systems, and they would amplify any hostile competitive dynamics.[4] The "pivotal act" frame gets imported even if none of the systems are independently dangerous.

I've got hope that my expected path of capabilities opens the door for more incremental interventions, but there's a reason my total P(doom) hasn't yet dropped much below 30%.

  1. ^

    The reason why this isn't an update for me is that I was being deliberately conservative at the time.

  2. ^

    A hypercomputer-empowered optimizer can jump to the global optimum with brute force. There isn't some mild greedy search to be incrementally shaped; if your specification is even slightly wrong in a sufficiently complex space, the natural and default result of a hypercomputer-optimizer is infinite cosmic horror.

  3. ^

    It's sometimes tricky to draw a line between "this was a technical alignment failure that yielded an AI-derived catastrophe" and "someone used it wrong," so it's hard to pin down the constituent probabilities.

  4. ^

    While strong AI introduces all sorts of new threats, its generality amplifies "conventional" threats like war, nukes, and biorisk, too. This could create civilizational problems even before a single AI could, in principle, disempower humanity.

porby · 4mo · 142

Mine:

My answer to "If AI wipes out humanity and colonizes the universe itself, the future will go about as well as if humanity had survived (or better)" is pretty much defined by how the question is interpreted. It could swing pretty wildly, but the obvious interpretation seems ~tautologically bad.

Answer by porby · Dec 13, 2023 · 30

I'm accumulating a to-do list of experiments much faster than my ability to complete them:

  1. Characterizing fine-tuning effects with feature dictionaries
  2. Toy-scale automated neural network decompilation (difficult to scale)
  3. Trying to understand evolution of internal representational features across blocks by throwing constraints at it 
  4. Using soft prompts as a proxy measure of informational distance between models/conditions and behaviors (see note below)
  5. Prompt retrodiction for interpreting fine tuning, with more difficult extension for activation matching
  6. Miscellaneous bunch of experiments

If you wanted to take one of these and run with it or a variant, I wouldn't mind!

The unifying theme behind many of these is goal agnosticism: understanding it, verifying it, maintaining it, and using it.

Note: I've already started some of these experiments, and I will very likely start others soon. If you (or anyone reading this, for that matter) see something you'd like to try, we should chat to avoid doing redundant work. I currently expect to focus on #4 for the next handful of weeks, so that one is probably at the highest risk of redundancy.

Further note: I haven't done a deep dive on all relevant literature; it could be that some of these have already been done somewhere!  (If anyone happens to know of prior art for any of these, please let me know.)

porby · 4mo · 40

Retrodicting prompts can be useful for interpretability when dealing with conditions that aren't natively human readable (like implicit conditions induced by activation steering, or optimized conditions from soft prompts). Take an observed completion and generate the prompt that created it.

What does a prompt retrodictor look like?

Generating a large training set of soft prompts to directly reverse would be expensive. Fortunately, there's nothing special in principle about soft prompts with regard to their impact on conditioning predictions.

Just take large traditional text datasets. Feed the model a chunk of the string. Train on the prediction of tokens before the chunk.

Two obvious approaches:

  1. Special case of infilling. Stick to a purely autoregressive training mode, but train the model to fill a gap autoregressively. In other words, the sequence would be: 
    [Prefix token][Prefix sequence][Suffix token][Suffix sequence][Middle token][Middle sequence][Termination token]
    Or, as the paper points out: 
    [Suffix token][Suffix sequence][Prefix token][Prefix sequence][Middle sequence][Termination token]
    Nothing stops the prefix sequence from having zero length.
  2. Could also specialize training for just previous prediction: 
    [Prompt chunk]["Now predict the previous" token][Predicted previous chunk, in reverse]
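
A toy sketch of how training sequences for both approaches could be constructed from ordinary text; the sentinel token names and the split point are illustrative assumptions:

```python
def make_infill_example(tokens, split):
    """Approach 1 (infilling-style): the observed completion is the suffix, the
    prompt to retrodict is the middle, and the prefix is allowed to be empty."""
    prefix, middle, suffix = [], tokens[:split], tokens[split:]
    return ["<PRE>"] + prefix + ["<SUF>"] + suffix + ["<MID>"] + middle + ["<END>"]

def make_reverse_example(tokens, split):
    """Approach 2: predict the preceding chunk directly, in reverse token order."""
    before, chunk = tokens[:split], tokens[split:]
    return chunk + ["<PREDICT_PREVIOUS>"] + before[::-1]

# Toy stand-in for a tokenized document chunk.
tokens = "the quick brown fox jumps over the lazy dog".split()
print(make_infill_example(tokens, 4))
print(make_reverse_example(tokens, 4))
```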

But we don't just want some plausible previous prompts, we want the ones that most precisely match the effect on the suffix's activations.

This is trickier. Specifying the optimization target is easy enough: retrodict a prompt that minimizes MSE((activations | sourcePrompt), (activations | retrodictedPrompt)), where (activations | sourcePrompt) are provided. Transforming that into a reward for RL is one option. Collapsing the output distribution into a token is a problem; there's no way to directly propagate the gradient through that collapse and into the original distribution. Without that differentiable connection, analytically computing gradients for the other token options becomes expensive and turns into a question of sampling strategies. Maybe there's something clever floating around.
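
For concreteness, a hedged sketch of the activation-matching objective as a reward signal; the model (gpt2), layer choice, and function names are placeholder assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for the predictor
tok = AutoTokenizer.from_pretrained("gpt2")

@torch.no_grad()
def suffix_activations(prompt, suffix, layer=-1):
    """Hidden states over the suffix tokens, conditioned on the given prompt."""
    p_ids = tok(prompt, return_tensors="pt").input_ids
    s_ids = tok(suffix, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, s_ids], dim=1)
    hidden = model(ids, output_hidden_states=True).hidden_states[layer]
    return hidden[0, -s_ids.shape[1]:]

def retrodiction_reward(source_prompt, retrodicted_prompt, suffix):
    """Negative MSE between suffix activations under the two prompts (higher is better)."""
    a_src = suffix_activations(source_prompt, suffix)
    a_ret = suffix_activations(retrodicted_prompt, suffix)
    return -torch.mean((a_src - a_ret) ** 2).item()

# Feed this reward to an RL loop over the retrodictor's sampled candidate prompts.
print(retrodiction_reward("Once upon a time,", "Long ago,", " there lived a dragon."))
```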

Note that retrodicting with an activation objective has some downsides:

  1. If the retrodictor's the same model as the predictor, there are some weird feedback loops. The activations become a moving target.
  2. Targeting activations makes the retrodictor model-specific. Without targeting activations, the retrodictor could work for any model in principle.
  3. While the outputs remain constrained to token distributions, the natural endpoint for retrodiction on activations is not necessarily coherent natural language. Adversarially optimizing for tokens which produce a particular activation may go weird places. It'll likely still have some kind of interpretable "vibe," assuming the model isn't too aggressively exploitable.

This class of experiment is expensive for natural language models. I'm not sure how interesting it is at scales realistically trainable on a couple of 4090s.
