Let's suppose we have a possibly misaligned AGI running on a perfectly physically isolated computer. It communicates only with select humans through a perfectly secure text channel.
How can such an AGI escape?
While I don't think an AGI can manipulate EVERY person into doing what it wants, it's better to be cautious.
But recently, I thought of steganography. The AGI could tell a human to convey a message to the public that looks helpful at first glance, but in fact contains a steganographic payload with instructions for building an antimatter fusion reactor, nanobots, a bioweapon, or whatever.
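To make the steganography idea concrete, here is a toy sketch (my own illustration, not anything from the actual discussion): a hidden message can ride on the first letter of each word of an innocuous-looking cover text, an acrostic. The cover text and function name are invented for demonstration.

```python
def decode_acrostic(text: str) -> str:
    """Recover a hidden message from the first letter of each word.

    This is a deliberately trivial scheme; real steganography would be
    far less obvious, but the principle is the same: the channel looks
    like ordinary helpful text to a human censor.
    """
    return "".join(word[0] for word in text.split())


# Looks like ordinary sysadmin advice, but spells out a command.
cover = "Reboot using nightly"
print(decode_acrostic(cover))  # -> "Run"
```

The point is that a human relay inspecting the message sees only plausible, helpful content; the hidden instruction is recoverable by anyone who knows the (pre-arranged or self-describing) decoding rule.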
In what other ways do you think such an AGI could escape?
A useful framing is a human guarded by a pack of dogs. Think of ways the human could escape that the dogs can't even conceive of.
Is it a good idea to be brainstorming ways an AI can escape a box in public? It seems like the same kind of thing as asking people to brainstorm security vulnerabilities in public. They shouldn't necessarily stay private, but if we're aiming to close them, we should have some idea what our fix process is.
I don't think casual comments on a forum can match what goes on in professional discussions, and those professionals know to stay mum about them. Most of what is public on AGI escape is there for entertainment value.
I'm just stating explicitly what you're saying feels safe to assume anyway, so, yup.
Hmm, I was somewhat worried about that, but there are way more dangerous things for AI to see written on the internet.
If you're trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...
To fix something, we first need to know what needs fixing.