Let's suppose we have an AGI, possibly misaligned, running on a perfectly physically isolated computer. It communicates only with select humans through a perfectly secure text channel.
How can such an AGI escape?
While I don't think an AGI could manipulate EVERY person into doing what it wants, it's better to be cautious.
But recently, I thought of steganography. The AGI could tell a human to convey a message to the public that looks helpful at first glance but in fact contains a steganographic payload: instructions for building an antimatter fusion reactor, nanobots, a bioweapon, whatever.
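To make the steganographic channel concrete, here is a minimal sketch of one well-known technique: hiding a secret message in innocuous text using zero-width Unicode characters. Everything here (the function names, the cover text, the secret) is invented for illustration; a real scheme would be far subtler.

```python
# Sketch: encode a secret as invisible zero-width characters appended to a
# harmless-looking cover text. All names and strings are hypothetical.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def hide(cover: str, secret: str) -> str:
    """Append the secret, bit by bit, as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return cover + payload

def reveal(stego: str) -> str:
    """Recover the hidden bytes from the invisible characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in stego if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("Here is a harmless productivity tip.", "meet at dawn")
print(reveal(stego))  # prints "meet at dawn"
```

When displayed, `stego` looks identical to the cover text, which is the point: a human courier could publish it in good faith without knowing a second message rides along.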
In what other ways do you think such an AGI could escape?


A useful framing is a human guarded by a pack of dogs. Think of the ways the human could escape that the dogs can't even conceive of.

Is it a good idea to brainstorm ways an AI can escape a box in public? It seems like the same kind of thing as asking people to brainstorm security vulnerabilities in public. They shouldn't necessarily stay private, but if we're aiming to close them, we should have some idea of what our fix process is.

I don't think casual comments on a forum can match what goes on in professional discussions, and those professionals know to stay mum about them. Most of what is public on AGI escape is there for entertainment value.

I'm just stating what you say you feel is safe to assume anyway, so, yup.

Hmm, I was somewhat worried about that, but there are far more dangerous things for an AI to read on the internet.

If you're trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...

To fix something, we first need to know what needs fixing.

It depends on what you mean by “perfectly isolated computer”: there are already methods for exploiting air-gapped systems, and an AI would be able to figure out ever more complicated exploits.
There's also the “air conditioner” argument: a schematic could be given to someone to build a machine that ostensibly does X but in reality does something that benefits the AI. I also think memetic exploits exist (interrogation and negotiation techniques seem to imply they do), and given a certain amount of time (which you imply would be allowed: communicating with “select humans” implies continued contact, not just one conversation per person), an AI would be able to psychoanalyze its handlers and exploit their personality flaws.