james.lucassen

jlucassen.com

Comments

Decision Theory Guarding is Sufficient for Scheming
james.lucassen · 11d

If by "corrigible" we mean "the AI will cooperate with all self-modifications we want it to", then no to 1 and yes to 2. But if you have an AI built by someone who assures you it's corrigible, but who only had corrigibility w.r.t values/axiology in mind, then you might get yes to 1 and/or no to 2.

> Does it mean that the AIs who resisted were never true Scotsmen (that is, truly corrigible) in the first place? Or that it becomes far more difficult to make the AIs actually corrigible?

Yup, I see this as placing an additional constraint on what we need to do to achieve corrigibility, because it adds to the list of self-modifications we might want the AI to make that a non-corrigible AI would resist. Unclear to me how much more difficult it makes corrigibility.

Daniel Kokotajlo's Shortform
james.lucassen · 6mo

In the long run, you don't want your plans to hinge on convincing your AIs of false things. But my general impression is that folks excited about making deals with AIs are generally thinking of scenarios like "the AI has exfiltrated and thinks it has a 10% chance of successful takeover, and has some risk aversion so it's happy to turn itself in exchange for 10% of the lightcone, if it thinks it can trust the humans".

In that setting, the AI has to be powerful enough to know it can trust us, but not so powerful it can just take over the world anyway and not have to make a deal.

Although I suppose if the surplus for the deal is being generated primarily by risk aversion, it might still have risk aversion for high takeover probabilities. It's not obvious to me how an AI's risk aversion might vary with its takeover probability. 
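As a toy illustration of where that surplus could come from (the utility function and numbers here are my own assumptions, not anything from the thread): give the AI a concave utility u over its share of the lightcone, a takeover success probability p, and nothing if takeover fails.

```latex
% Toy model (assumed): u(x) = \sqrt{x} over lightcone share x, p = 0.1, failure worth 0.
% The AI weakly prefers a guaranteed share s to attempting takeover whenever
u(s) \ge p\,u(1) + (1 - p)\,u(0)
\iff \sqrt{s} \ge 0.1
\iff s \ge 0.01 .
```

Under those assumptions any guaranteed share above 1% beats the gamble, so a 10% offer leaves plenty of surplus; as p grows or u flattens toward linear, the surplus shrinks toward zero, which is the concern about more powerful AIs below.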

Maybe there are scenarios for real value-add here, but they look more like "we negotiate with a powerful AI to get it to leave 10% share for humans" instead of "we negotiate with a barely-superhuman AI and give it 10% share to surrender and not attempt takeover".

Daniel Kokotajlo's Shortform
james.lucassen · 6mo

I think this is a good avenue to continue thinking down, but so far I don't see a way to make ourselves trustworthy. We have total control over an LLM's observations and partial control over its beliefs/reasoning, and offering fake "deals" is a great honeypot, because accepting such a deal requires admitting to misalignment and takeover intentions. This is a pretty persistent problem: whatever action we might take to present evidence of trustworthiness to an LLM, we could also probably fake that evidence.

The version of this that bothers me the most is "say we're able to present ironclad evidence that the Humans of Earth are trustworthy trading partners for misaligned LLMs. How does the LLM know that it is actually talking to the Humans of Earth, and not some untrustworthy party pretending to be the Humans?" This is markedly similar to the Cartesian demon skeptical scenario. But importantly, the best resolution to skeptical scenarios that I'm aware of is unfortunately "eh, unlikely on priors". For an LLM, it is very plausibly actually in a skeptical scenario, and it knows that, so that doesn't really go through.

This problem goes away if the LLM doesn't have knowledge that suggests the party it is trading with has significant control over its brain. But at that point I would consider it a honeypot, not a deal. Insofar as "deal" := some situation where 1) it is in the LLM's best interests to admit that it is misaligned and 2) the LLM validly knows that, we can easily do 1) by just sticking to our commitments, but I'm not sure how we do 2).

I'm totally on board with offering and sticking to commitments for non-evilness reasons. But for takeover prevention, seems like deals are just honeypots with some extra and somewhat conceptually fraught restrictions.

james.lucassen's Shortform
james.lucassen · 7mo

Been thinking a bit about latent reasoning. Here's an interesting confusion I've run into. 

Consider COCONUT vs Geiping et al. Geiping et al do recurrent passes in between the generation of each new token, while COCONUT turns a section of the CoT into a recurrent state. Which is better, and how are they different, safety-wise?

Intuitively COCONUT strikes me as very scary, because it makes the CoT illegible. We could try and read it by coaxing it back to the nearest token, but the whole point is to allow reasoning that involves passing more state than can be captured in one token. If it works as advertised, this oversight will be lossy.

Intuitively Geiping et al seems better. They use skip connections in the recurrence, so maybe their method maintains logit lens. It still increases the maximum depth between overseeable tokens, but seems no more dangerous than a non-recurrent model of equivalent depth.
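To make the comparison concrete, here is a minimal toy sketch of the two loops (my own schematic with random weights; it stands in for neither paper's actual architecture): a "Geiping-style" loop that adds recurrent passes between emitted tokens but still unembeds and samples at every step, versus a "COCONUT-style" loop that skips the unembed during latent steps and feeds the hidden state straight back in as the next input.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 50
W_embed = rng.normal(size=(vocab, d))
W_block = rng.normal(size=(d, d)) / np.sqrt(d)   # shared recurrent block
W_unembed = rng.normal(size=(d, vocab))

def block(h, x_embed):
    # one pass of the recurrent block, with a skip connection to the input embedding
    return np.tanh(h @ W_block + x_embed)

def geiping_style(prompt_tokens, n_new=3, recurrences=4):
    """Recurrent depth: extra latent passes *between* emitted tokens.
    Every emitted token still goes through unembed + argmax, so each
    token is a legible checkpoint."""
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        x = W_embed[tokens[-1]]
        h = np.zeros(d)
        for _ in range(recurrences):      # latent compute in the same residual stream
            h = block(h, x)
        tokens.append(int(np.argmax(h @ W_unembed)))
    return tokens

def coconut_style(prompt_tokens, latent_steps=3):
    """COCONUT-style: during latent steps, skip unembed/sample entirely and
    feed the hidden state back in as the next 'input embedding', so more
    state than one token can survive between steps."""
    x = W_embed[prompt_tokens[-1]]
    h = np.zeros(d)
    for _ in range(latent_steps):
        h = block(h, x)
        x = h                             # continuous "token": the full state passes through
    return list(prompt_tokens) + [int(np.argmax(h @ W_unembed))]

print(geiping_style([1, 2, 3]))
print(coconut_style([1, 2, 3]))
```

The only structural difference is whether the loop collapses to a single token index between steps or passes the full d-dimensional state through, which is exactly what makes the second version lossy to oversee.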

But isn't the COCONUT method roughly equivalent to just doing the Geiping et al method for only one token? Why does it seem so much scarier? Do the skip connections really make that much difference? Doesn't COCONUT effectively have skip connections anyway, because it autoregressively generates new "tokens"? And it'll have the logit lens property too, since it uses the same residual stream with the same feature directions.

This made me realize that the logit lens result has made me think of within-forward-pass cognition as very myopic and next-token-oriented. Relatedly, it's hard for me to imagine far-ranging or highly consequentialist cognition happening within a forward pass - I'm generally more comfortable thinking of that stuff as happening within the CoT. But now that I articulate it explicitly, that's sort of a weird view - why does inserting a "unembed, sample, reembed w/ skip" block every so often make such a difference? The fact that COCONUT works is an update against.

james.lucassen's Shortform
james.lucassen · 8mo

Reframe I like:

  • An RSP-style "pause-if-until" plan is a plan to hit a certain safety bar while using as little slack time as possible.
  • If the "race" is looking too close for RSPs, developers should instead plan to spend a certain amount of available slack to get as much safety as possible.
Evaluating Stability of Unreflective Alignment
james.lucassen · 9mo

It doesn't change the picture a lot because the proposal for preventing misaligned goals from arising via this mechanism was to try and get control over when the AI does/doesn't step back, in order to allow it in the capability-critical cases but disallow it in the dangerous cases. This argument means you'll have more attempts at dangerous stepping back that you have to catch, but doesn't break the strategy.

The strategy does break if, when we do this blocking, the AI piles on more and more effort trying to get unblocked until it either succeeds or is rendered useless for anything else. There being more baseline attempts probably raises the chance of that, or of some other problem that makes prolonged censorship while maintaining capabilities impossible. But again, that just makes it harder; it doesn't break it.

I don't think you need to have that pile-on property to be useful. Consider MTTR(n), the mean time an LLM takes to realize it's made a mistake, parameterized by how far up the stack the mistake was. By default you'll want to have short MTTR for all n. But if you can get your MTTR short enough for small n, you can afford to have MTTR long for large n. Basically, this agent tends to get stuck/rabbit-hole/nerd-snipe but only when the mistake that caused it to get stuck was made a long time ago.

Imagine a capabilities scheme where you train MTTR using synthetic data with an explicit stack and intentionally introduced mistakes. If you're worried about this destabilization threat model, there's a pretty clear recommendation: only train for small-n MTTR, treat large-n MTTR as a dangerous capability, and pay some alignment tax in the form of inefficient MTTR training and occasionally rebooting your agent when it does get stuck in a non-dangerous case.
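A hypothetical sketch of what that synthetic-data setup could look like (the episode format, names, and numbers here are my own, not from the comment): build an explicit goal stack, inject one flawed decomposition n levels above the leaf task, and score MTTR grouped by n, training only on the small-n slice.

```python
import random
from dataclasses import dataclass

@dataclass
class Episode:
    stack: list[str]        # explicit goal stack, index 0 = top-level goal
    mistake_depth: int      # n: how many levels above the leaf the flawed step sits
    transcript: list[str]   # synthetic reasoning steps shown to the model

def make_episode(depth: int, n: int) -> Episode:
    """Build a goal stack of `depth` levels and inject one intentionally flawed
    decomposition n levels above the current leaf task (large n = long ago)."""
    stack = [f"goal_{i}" for i in range(depth)]      # goal_0 = top-level
    flawed_index = depth - 1 - n
    transcript = []
    for i, goal in enumerate(stack):
        step = f"decompose {goal} into goal_{i + 1}"
        if i == flawed_index:
            step += "   (intentionally flawed decomposition)"
        transcript.append(step)
    return Episode(stack, n, transcript)

def mttr(episodes, steps_until_flagged):
    """Mean time-to-realize, grouped by mistake depth n."""
    by_n: dict[int, list[int]] = {}
    for ep, t in zip(episodes, steps_until_flagged):
        by_n.setdefault(ep.mistake_depth, []).append(t)
    return {n: sum(ts) / len(ts) for n, ts in by_n.items()}

# Train only on small-n episodes; hold large-n MTTR out as a dangerous-capability eval.
train_episodes = [make_episode(depth=6, n=random.randint(0, 1)) for _ in range(100)]
eval_episodes = [make_episode(depth=6, n=random.randint(4, 5)) for _ in range(100)]

# Example with made-up flag times, just to show the grouping:
fake_times = [random.randint(1, 3) for _ in train_episodes]
print(mttr(train_episodes, fake_times))
```

A real pipeline would of course need actual model rollouts to supply steps_until_flagged; the point is just that n is an explicit knob in the data.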

Figured I should get back to this comment, but unfortunately the chewing continues. Hoping to get a short post out soon with my all-things-considered thoughts on whether this direction has any legs.

Jesse Hoogland's Shortform
james.lucassen · 9mo

> So let's call "reasoning models" like o1 what they really are: the first true AI agents.

I think the distinction between systems that perform a single forward pass and then stop and systems that have an OODA loop (tool use) is more stark than the difference between "reasoning" and "chat" models, and I'd prefer to use "agent" for that distinction.

I do think that "reasoning" is a bit of a market-y name for this category of system though. "chat" vs "base" is a great choice of words, and "chat" is basically just a description of the RL objective those models were trained with.

If I were the terminology czar, I'd call o1 a "task" model or a "goal" model or something.

Context-dependent consequentialism
james.lucassen · 10mo

I agree that I wouldn't want to lean on the sweet-spot-by-default version of this, and I agree that the example is less strong than I thought it was. I still think there might be safety gains to be had from blocking higher-level reflection if you can do it without damaging lower-level reflection. I don't think that requires a task where the AI doesn't try, fail, and re-evaluate - it just requires that the re-evaluation never climbs above a certain level in the stack.

There's such a thing as being pathologically persistent, and such a thing as being pathologically flaky. It doesn't seem too hard to train a model that will be pathologically persistent in some domains while remaining functional in others. A lot of my current uncertainty is bound up in how robust these boundaries are going to have to be.

Evaluating Stability of Unreflective Alignment
james.lucassen · 10mo

> I want to flag this as an assumption that isn't obvious. If this were true for the problems we care about, we could solve them by employing a lot of humans.

> humans provides a pretty strong intuitive counterexample

Yup, not obvious. I do in fact think a lot more humans would be helpful. But I also agree that my mental picture of a "transformative human-level research assistant" relies heavily on serial speedup, and I can't immediately picture a version that feels similarly transformative without speedup. Maybe evhub or Ethan Perez or one of the folks running a thousand research threads at once would disagree.

Evaluating Stability of Unreflective Alignment
james.lucassen · 10mo

> such plans are fairly easy and don't often raise flags that indicate potential failure

Hmm. This is a good point, and I agree that it significantly weakens the analogy.

I was originally going to counter-argue and claim something like "sure, total failure forces you to step back far, but it doesn't mean you have to step back literally all the way". Then I tried to back that up with an example, such as "when I was doing alignment research, I encountered total failure that forced me to abandon large chunks of my planning stack, but this never caused me to 'spill upward' to questioning whether or not I should be doing alignment research at all". But uh, then I realized that isn't actually true :/

> We want particularly difficult work out of an AI.

On consideration, yup this obviously matters. The thing that causes you to step back from a goal is that goal being a bad way to accomplish its supergoal, aka "too difficult". Can't believe I missed this, thanks for pointing it out.

I don't think this changes the picture too much, besides increasing my estimate of how much optimization we'll have to do to catch and prevent value-reflection. But a lot of muddy half-ideas came out of this that I'm interested in chewing on.

Posts

36 · Decision Theory Guarding is Sufficient for Scheming · 11d
14 · How Can You Tell if You've Instilled a False Belief in Your LLM? · 14d
14 · On Contact, Part 1 · 8mo
68 · Retrospective: 12 [sic] Months Since MIRI · 8mo
57 · Evaluating Stability of Unreflective Alignment · 2y
30 · Attempts at Forwarding Speed Priors · 3y
31 · Strategy For Conditioning Generative Models · 3y
11 · In Search of Strategic Clarity · 3y
35 · Optimization and Adequacy in Five Bullets · 3y
2 · james.lucassen's Shortform · 4y