392

LESSWRONG
LW

391
AI Boxing (Containment)AI PersuasionAI RiskOracle AI
Frontpage

3

[ Question ]

Oracle AGI - How can it escape, other than security issues? (Steganography?)

by RationalSieve
25th Dec 2022
1 min read
A
0
6

3

AI Boxing (Containment)AI PersuasionAI RiskOracle AI
Frontpage

3

Oracle AGI - How can it escape, other than security issues? (Steganography?)
2Shmi
2the gears to ascension
4Shmi
2the gears to ascension
1RationalSieve
1MacroMint
New Answer
New Comment
6 comments, sorted by
top scoring
Click to highlight new comments since: Today at 3:30 PM
[-]Shmi3y20

A useful framing is a human guarded by a bunch of dogs. Think of how the human could escape that dogs can't even conceive of.

Reply
[-]the gears to ascension3y20

Is it a good idea to be brainstorming ways ai can escape a box in public? seems like the same kind of thing as asking for people to brainstorm security vulnerabilities in public. they shouldn't necessarily stay private, but if we're aiming to close them, we should have some idea what our fix process is.

Reply
[-]Shmi3y40

I don't think casual comments on a forum can match what is going on in professional discussions. And those professionals know to stay mum about them. Most of what is public on AGI escape is mostly for entertainment value.

Reply
[-]the gears to ascension3y20

I'm just declaring what you're saying you feel is safe to assume anyhow, so, yup

Reply
[-]RationalSieve3y10

Hmm, I was somewhat worried about that, but there are way more dangerous things for AI to see written on the internet. 

If you're trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...

To fix something, we need to know what to fix first.

Reply
[-]MacroMint2y10

Depends on what you mean by “perfectly isolated computer”, there are already methods for exploiting air gapped systems, an AI would be able to figure out more and more complicated exploits. 
Theres also the “air conditioner“ argument, where a schematic could be given to someone to build a machine that does X, but in reality it does something to benefit the AI. I also think memetic exploits exist, interrogation/negotiation techniques seem to imply they do, and given a certain amount of time (which you imply would be allowed.  Communicating with “select humans” implies a continued contact, not just one conversation per person) an AI would be able to psychoanalyze and exploit personality flaws. 

Reply
Moderation Log
More from RationalSieve
View more
Curated and popular this week
A
0
6

Let's suppose we have an AGI running on a perfectly physically isolated computer that may be misalligned. It only communicates with select humans through a perfectly secure text channel.
How can such an AGI escape?
While don't think that AGI can manipulate EVERY person to do what AGI wants, it's better to be cautious.
But recently, I thought of steganography. An AGI can tell a human to convey a message to the public that looks helpful at the first glance, but in fact contains a steganographic message with instructions for building antimatter fusion reactor/nanobots/bioweapon/whatever.
What other ways do you think can such an AGI escape?