Seems interesting. But this isn't lethal at all, and it's a little disturbing to hear the term used this way, as though this person has never even thought about AGI risks. Hopefully security problems and prompt injections are one route to increasing public awareness of the complexity and unreliability of AI behavior.
hm, yes, "lethal" is maybe too hard, esp. in the title, but I didn't choose that. Also, it is not unusual in colloquial use:
ChatGPT: What does a security researcher mean when a user action is lethal?
When a security researcher describes a user action as lethal, they typically mean it triggers a condition that irreversibly compromises the system's integrity, confidentiality, or availability—often without recourse for mitigation after the fact. This could include actions like clicking a malicious link that installs a rootkit, reusing credentials on a phishing site leading to credential stuffing, or executing a command that bricks a device. "Lethal" underscores not just the severity but also the finality: the action doesn't just degrade security but catastrophically collapses it, often silently and instantly.
TL;DR from the post by Simon Willison:
The lethal trifecta of capabilities is:
- Access to your private data—one of the most common purposes of tools in the first place!
- Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
- The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
This combination can let an attacker steal your data.