Tags: Guaranteed Safe AI, Open Agency Architecture, AI
Davidad's Provably Safe AI Architecture - ARIA's Programme Thesis

by simeon_c
1st Feb 2024
AI Alignment Forum
1 min read

This is a linkpost for https://www.aria.org.uk/wp-content/uploads/2024/01/ARIA-Safeguarded-AI-Programme-Thesis-V1.pdf
17 comments, sorted by top scoring
[-]Nathan Helm-Burger1y72

I mean, it sounds good in theory. My main hesitation about it is the feasibility of enacting it. I'm not sure I'm convinced that a "95% safety tax" (aiming for only 5% of the counterfactual economic value of unconstrained AI) will be sufficiently tempting to be economically self-promoting. So this probably needs to be combined with a worldwide enforcement regime to prevent bad actors from relaxing the safety constraints?

Maybe there'll be answers to my questions in the doc. I've only looked at the pdf so far.

[-]simeon_c1y40

Yeah, basically Davidad has not only a safety plan but also a governance plan which actively aims at making this shift happen!

[-]quetzal_rainbow1y10

If I understand the doc right, they mean 5% of counterfactual impact before the end of the acute risk period.

[-]Roko1y4-1

Just from a high level, I think the general style of approaches to AI safety that imagine human researchers solving hard technical challenges in order to make ASI safe enough to deploy widely are fundamentally flawed.

The most likely way to get to extremely safe AGI or ASI systems is not by humans creating them, it's by other less-safe AGI systems creating them.

"Technological advance is an inherently iterative process. One does not simply take sand from the beach and produce a Dataprobe. We use crude tools to fashion better tools, and then our better tools to fashion more precise tools, and so on."

For this reason, I see approaches that try to make provably safe ASI as mistaken. We should instead just aim to make AGI that's very capable, and steer its use towards the next generation of safer systems; that steering likely won't involve proofs.

[-]Vladimir_Nesov1y131

The most likely way to get to extremely safe AGI or ASI systems is not by humans creating them, it's by other less-safe AGI systems creating them.

This does seem more likely, but managing to sidestep the less-safe AGI part would be safer. In particular, it might be possible to construct a safe AGI by using safe-if-wielded-responsibly tool AIs (that are not AGIs), if humanity takes enough time to figure out how to actually do that.

[-]Roko1y31

The current paradigm of AI research makes it hard to make really pure tool AIs.

We have software tools, like Wolfram Alpha, and we have LLM-derived systems.

This is probably the set of tools we will either win or die with.

[-]the gears to ascension1y40

I don't see anything in here that forbids using weaker AI systems to help with this plan. But how do you ever know when you've succeeded, if not by proofs? Proving properties of AIs is not out of the question; it's something that is already done for some kinds of safety-critical deep learning AIs today.
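For a concrete flavour of what such a proof can look like, here is a minimal toy sketch (my own, not from the post or the thesis) of interval bound propagation, one standard technique for certifying properties of neural networks: it proves that a tiny two-layer ReLU network stays below an output threshold for every input in a small box. The network, the input box, and the threshold are all made up for illustration.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound elementwise bounds on W @ x + b over the box lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify_output_below(W1, b1, W2, b2, x_lo, x_hi, threshold):
    """True only if the 2-layer ReLU net provably stays below `threshold`
    for every input in the box [x_lo, x_hi]."""
    lo, hi = interval_affine(x_lo, x_hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    return bool(np.all(hi < threshold))

# Toy example: a random network and a small box around a random input.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
x = rng.normal(size=4)
print(certify_output_below(W1, b1, W2, b2, x - 0.01, x + 0.01, threshold=100.0))
```

Real verification tools for safety-critical networks are far more sophisticated, but the shape is the same: a sound over-approximation of the network's behaviour, checked against a formal property.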

[-]Roko1y30

I think the first step will be using AGIs to come up with better plans.

[-]quetzal_rainbow1y10

My general concern with "using AGI to come up with better plans" is not that it will successfully manipulate us into misalignment, but that it will Goodhart us into reinforcing stereotypes of "how a good plan should look", or something along those lines, purely because of how RLHF-style steering works.
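A toy illustration of that worry (my own sketch, not from the comment): when plans are ranked by a noisy proxy score, such as a learned reward model, the plan that looks best by the proxy is systematically the one whose score is most inflated, and the effect grows with the amount of optimization pressure applied.

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_overestimate(n_candidates, trials=200):
    """Average (proxy - true) quality of the plan that looks best by proxy."""
    gaps = []
    for _ in range(trials):
        true = rng.normal(size=n_candidates)          # actual quality of each plan
        proxy = true + rng.normal(size=n_candidates)  # noisy score we optimize instead
        best = np.argmax(proxy)                       # pick the plan the proxy likes most
        gaps.append(proxy[best] - true[best])
    return np.mean(gaps)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} candidates: proxy overstates the chosen plan by ~{proxy_overestimate(n):.2f}")
```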

[-]Roko1y20

Humans already do this, except we have made it politically incorrect to talk about the ways in which human-generated Goodharting makes the world worse (race, gender, politics, etc.).

[-]quetzal_rainbow1y10

Your examples are clearly visible. If your wrong alignment paradigm gets reinforced because of your attachment to a specific model of causality known to ten people in the entire world, you risk noticing this too late.

[-]Roko1y20

because of your attachment to a specific model of causality known to ten people in the entire world, you risk noticing this too late.

You're thinking about this the wrong way. AGI governance will not operate like human governance.

[-]quetzal_rainbow1y10

Can you elaborate? I don't understand where we disagree.

[-]Bogdan Ionut Cirstea1y40

'theoretical paper of early 2023 that I can't find right now' -> perhaps you're thinking of Fundamental Limitations of Alignment in Large Language Models? I'd also recommend LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?.

[-]Chris Lakin1y10

New, better link: https://www.aria.org.uk/programme-safeguarded-ai/

[-]martinkunev1y10

Max Tegmark presented similar ideas in a TED talk (without much detail). I'm wondering if he and Davidad are in touch.

[-]davidad1y51

Yes. You will find more details in his paper with Steve Omohundro, Provably safe systems, in which I am listed in the acknowledgments (under my legal name, David Dalrymple).

Max and I also met and discussed the similarities in advance of the AI Safety Summit in Bletchley.


The programme thesis of Davidad's agenda to develop provably safe AI has just been published. For context, Davidad is a Programme Director at ARIA who will grant somewhere between £10M and £50M over the next 3 years to pursue his research agenda. 

It is the most comprehensive public document detailing his agenda to date.

Here's the most self-contained graph explaining it at a high level, although you'll have to dive into the details and read it several times to start grasping its many dimensions.

[Figure: high-level diagram of the Safeguarded AI programme thesis]
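As a rough reading aid, here is how I'd paraphrase the core loop the diagram describes (my own sketch, not the thesis's formalism; every name below is hypothetical): an AI proposes a policy together with a machine-checkable certificate, and an independent checker releases the policy only if the certificate verifies against a formal world model and safety specification.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-ins; none of these names come from the programme thesis.
@dataclass
class Proposal:
    policy: Callable[[], None]  # the action/policy the AI wants to run
    certificate: object         # machine-checkable evidence of a safety bound

def gatekeeper(proposal: Proposal,
               verify: Callable[[object], bool],
               fallback: Callable[[], None]) -> Callable[[], None]:
    """Release the proposed policy only if its certificate checks out.

    `verify` stands in for an independent checker that validates the
    certificate against a formal world model and safety spec; the AI
    that produced the proposal is never trusted directly.
    """
    return proposal.policy if verify(proposal.certificate) else fallback
```

As I understand it, the key design choice is that checking a certificate is much simpler than producing one, so the checker does not need to be as capable as the AI it constrains.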

I'm personally very excited by Davidad's moonshot, which I currently see as the most credible alternative to scaled transformers; I consider scaled transformers too flawed to be a credible safe path, mostly because:

  • Ambitious LLM interpretability seems very unlikely to work out:
    • Why: the failed attempts at making meaningful progress over the past few years + the systematic wall at understanding ~80% of what's going on across reverse-engineering attempts
  • Adversarial robustness to jailbreaks seems unlikely to work out:
    • Why: failed attempts at solving it + a theoretical paper of early 2023 that I can't find right now + increasingly large context windows
  • Safe generalization with very high confidence seems quite unlikely to work out:
    • Why: absence of theory on transformers + weak interpretability

A key motivation for pursuing moonshots à la Davidad is, as he explains in his thesis, to shift incentives away from the current race to the bottom, by derisking credible paths to AI systems whose safety we have strong reasons to be confident in. See the graph below:

[Figure: graph from the programme thesis illustrating the incentive shift]

Mentioned in
What does davidad want from «boundaries»?
ARIA's Safeguarded AI grant program is accepting applications for Technical Area 1.1 until May 28th