Cody Rushing

Comments
Four ways learning Econ makes people dumber re: future AI
Cody Rushing · 9d · Ω592

Regardless of whether these anti-pedagogies are correct, I'm confused about why you think you've shown that learning econ made the economists dumber. It seems like the majority of the tweets you linked, excluding maybe Tweet 4, are actually just the economists discussing narrow AI and failing to consider general intelligence?

If you meant to say something like 'econ pedagogy makes it hard for economists to view AGI as something that could actually be intelligent in a way similar to humans', then I may be more inclined to agree with you.

The (Unofficial) Rationality: A-Z Anki Deck
Cody Rushing · 3mo · 51

The deck appears to have no notes in it

Shortform
Cody Rushing · 5mo · -10

My strong downvote just gave +3

The Case Against AI Control Research
Cody Rushing · 7mo · 21

Hmm, when I imagine "scheming AIs that are not easy to shut down with concerted nation-state effort, are attacking you with bioweapons, but are weak enough that you can bargain/negotiate with them," I can imagine this outcome inspiring a lot more caution relative to many other worlds where control techniques work well but we can't get any convincing demos/evidence to inspire caution (especially if control techniques inspire overconfidence).

But the 'is currently working on becoming more powerful' part of your statement does carry a lot of weight.

The Case Against AI Control Research
Cody Rushing · 7mo · 42

"Control research exclusively cares about intentional deception/scheming; it does not aim to solve any other failure mode."

(Nitpick; doesn't address the main point of the article.) I think this is incomplete. Though control research does indeed care a lot about scheming, control can be used more broadly to handle any worst-case deployment behavior. See Josh Clymer's post, Extending control evaluations to non-scheming threats.

How do you deal w/ Super Stimuli?
Cody Rushing · 8mo · 20

This might not work well for others, but a thing that's worked well for me has been to (basically) block cheap access to it with anticharities. Introducing friction in general is good

Anthropic: Three Sketches of ASL-4 Safety Case Components
Cody Rushing · 10mo · 30

I'm glad to see this. Some initial thoughts about the control safety case:

  • I don't see how the "monitoring game" is different from the red-team game. Is it because the blue team is limited to considering only protocols that use just monitoring? Or do you not expect to play a full red-team simulation (this would be worrisome)?
  • Option 2 (honeypotting) feels doomed (in isolation). This primarily stems from my belief that creating situations that are indistinguishable from deployment is probably hard: the model has control over state in very broad and hard-to-predict ways, and that state can implicitly encode that it is in deployment (for instance, the model can see code that it itself has written and that would've taken a long time to write, and realize it has likely been deployed for some time). You can still run honeypotting tests using this newer distribution, but then you have to hope that you are running honeypots frequently enough to catch the model when it is confident it is in deployment.
  • The elicitation problem, and the broader problem of justifying that the red-blue teaming has been sufficient, indeed feel quite hard. This is one reason why you might expect that control cases can only work for human-level AI: if your 'elicitation protocol' just ends up being 'use attack strategies made primarily by humans'.
    • There are related problems in justifying how many affordances to give the red team. For instance, decisions about whether the red team should be able to give the model (or train into it) 'coordination signals' are likely important, but might not be easy to make. Past research has been maximally conservative here, but this might not be realistic.
  • The limitation that you are measuring P(catastrophe | misaligned) rather than just P(catastrophe) is fairly large, and I would be interested in discussion of how important this is to others. More broadly, I don't feel great about our chances of bounding P(catastrophe), for reasons such as this (see the rough decomposition below).
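A rough way to see the size of this gap (my own decomposition, not something from the sketches): by the law of total probability,

$$P(\text{catastrophe}) = P(\text{misaligned})\,P(\text{catastrophe} \mid \text{misaligned}) + P(\text{not misaligned})\,P(\text{catastrophe} \mid \text{not misaligned}),$$

so even a tight bound on the conditional term that control targets only bounds P(catastrophe) if you are also willing to put numbers on P(misaligned) and on the non-scheming failure term.
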
A Narrow Path: a plan to deal with AI extinction risk
Cody Rushing · 11mo · 52

Am I correct in interpreting that your definition of "found system" would apply to nearly all useful AI systems today, such as ChatGPT, as these are algorithms which run on weights that are found with optimization methods such as gradient descent? If so, it is still fairly onerous.
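As a minimal illustration of what I mean by weights being "found" (a toy sketch of my own, not anything from A Narrow Path):

```python
# Toy sketch (illustrative only): the program's behavior is fixed by parameters
# that are *found* by gradient descent rather than written by hand, which is the
# sense in which systems like ChatGPT seem to qualify as "found".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy inputs
y = X @ np.array([1.0, -2.0, 0.5, 3.0])   # targets from a rule nobody hand-coded

w = np.zeros(4)                           # the weights start uninformative...
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X) # gradient of mean squared error
    w -= 0.1 * grad                       # ...and are found by optimization

print(np.round(w, 2))  # ~[1., -2., 0.5, 3.]: found parameters, not hand-written code
```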

A Narrow Path: a plan to deal with AI extinction risk
Cody Rushing · 11mo · 80

Thanks for writing this and proposing a plan. Coincidentally, I drafted a short take here yesterday explaining one complaint I currently have with the safety conditions of this plan. In short, I suspect the “No AIs improving other AIs” criterion isn't worth including in a safety plan: it (i) doesn't address many additional threat models at the margin (or does so ineffectively), and (ii) would be too unpopular to implement (or, alternatively, too weak to be useful).

I think there is a version of this plan, with a lower safety tax and more focus on reactive policy and the other three criteria, that I would be more excited about.

You can remove GPT2’s LayerNorm by fine-tuning for an hour
Cody Rushing · 1y · 80

Another reason why LayerNorm is weird (and a shameless plug): the final LayerNorm also contributes to self-repair in language models.

Wikitag Contributions

Rationality · 2y · (+390/-118)

Posts

29 · Factored Cognition Strengthens Monitoring and Thwarts Attacks · 2mo · 0
124 · Ctrl-Z: Controlling AI Agents via Resampling · Ω · 5mo · 0
82 · [Paper] All's Fair In Love And Love: Copy Suppression in GPT-2 Small · Ω · 2y · 4