I like the way you're thinking about AI and security. The insider vs. outsider risk dynamic is deeply ingrained in cybersecurity and comes up often while threat modeling a system. The distinction is muddled because the most impactful external threat is one that steals an insider's credentials. History shows it's only a matter of time before insider permissions are used by an outsider, which gave birth to the phrase "assume breach". If the AI is a black box, it's safest to assume it has been compromised. I'd like to add another dynamic, malicious vs. accidental, to your AI insider risk model, because it makes clear why AIs should not be granted insider privileges the way human employees are.
An insider has the ability to cause damage maliciously or accidentally. Accidental damage is by far the more common event, but we have decades (centuries, if you count other engineering disciplines) of experience putting processes in place to prevent and learn from accidents. Generally the control mechanism slows the person down by requiring social proof, such as another person's sign-off, and that yields a key mitigating control on their access: it's hard to cause major damage when action is slow.
Requiring social proof adds transparency, covers knowledge gaps, and increases the social pressure to get it right. People align with the goals of their group and intrinsically do not want to damage it. When they harm the group accidentally, there is a social cost, which acts as a preventative control.
AIs act quickly and do not pay the same social cost, so those preventative controls do nothing to offset the risk of overly broad insider permissions.
Until we have Explainable AI, it will be risky to fully trust that an AI is free of hidden malicious intent or accident-prone edge cases. The breadth of their input space makes firewalls inefficient, and the speed at which they can act makes detective controls ineffective.
AIs' speed, lack of human-like social consequences, and opaque decision making create a large source of risk that puts them outside of what qualifies employees for insider privileges. Similarly, their speed, cloneability, and adaptability mean they don't need to work like humans and don't need insider privileges.
An AI can look up and read documentation to acquire context faster than any person. It can be trained to do a new task (at least one it's capable of) almost instantly. It can pass context to another AI much faster than a person can. It can request permissions on a per-task basis for specific rows/columns/fields in a database with minimal slowdown, as in the sketch below. An AI doesn't need to see the bigger picture, feel a connection to the group, or have novelty to stay motivated the way a person does. A person would quit if their job were to do the same step in a workflow 1,000 times in a row and ask for permission with a unique identifier each time, so we built workflows around maintaining employee happiness.
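To make the per-task permission idea concrete, here's a minimal sketch in Python. Everything in it (the PermissionBroker, Grant, the table and column names) is a hypothetical illustration of per-task, least-privilege grants, not any real API:

```python
# Hypothetical sketch: a broker that issues short-lived, per-task credentials
# scoped to specific rows/columns, instead of granting the AI standing access.
# All names (PermissionBroker, Grant, etc.) are illustrative, not a real API.
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    task_id: str                  # unique identifier for this one task
    table: str                    # the single table the task may touch
    columns: tuple[str, ...]      # only the fields the task needs
    row_filter: str               # e.g. "customer_id = 42"
    expires_at: datetime          # the grant dies with the task

class PermissionBroker:
    def __init__(self, ttl_seconds: int = 60):
        self.ttl = timedelta(seconds=ttl_seconds)
        self.active: dict[str, Grant] = {}

    def request(self, table: str, columns: tuple[str, ...], row_filter: str) -> Grant:
        """Issue a narrowly scoped grant for exactly one task."""
        grant = Grant(
            task_id=str(uuid.uuid4()),
            table=table,
            columns=columns,
            row_filter=row_filter,
            expires_at=datetime.now(timezone.utc) + self.ttl,
        )
        self.active[grant.task_id] = grant   # audit log / revocation point
        return grant

    def check(self, task_id: str, table: str, column: str) -> bool:
        """Every access is checked against the task's own grant."""
        grant = self.active.get(task_id)
        if grant is None or datetime.now(timezone.utc) > grant.expires_at:
            return False
        return table == grant.table and column in grant.columns

# Usage: the AI asks for access per task, does the step, and the grant expires.
broker = PermissionBroker()
g = broker.request("invoices", ("amount", "due_date"), "customer_id = 42")
assert broker.check(g.task_id, "invoices", "amount")    # in scope: allowed
assert not broker.check(g.task_id, "invoices", "ssn")   # out of scope: denied
```

The point is only that each grant is tied to a single task identifier, expires quickly, and is checked on every access, so the AI never holds standing insider permissions.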
Separation of concerns, minimizing blast radius, observability, and composability are vital to building resilient software services. AIs do not change those principles. Using one AI for an entire workflow, or the same AI across different workflows, is not necessary the way it is for a person, so giving them permissions the way we do people is not necessary either. Running instances of multiple different 600B+ parameter models may be prohibitively expensive today, but eventually it will be cost-effective to have single-use model instances in a style similar to AWS Lambda, as sketched below.
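For the single-use instance idea, here's a rough sketch, again with hypothetical names (Step, run_model, and the model/scope strings are all assumptions for illustration); it shows the pattern of one throwaway, narrowly scoped model instance per workflow step rather than any real Lambda-style service:

```python
# Hypothetical sketch: one ephemeral model instance per workflow step, in the
# spirit of single-use Lambda invocations. run_model and the step/model names
# are assumptions for illustration, not a real service or API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    name: str
    model: str               # each step can use a different, smaller model
    scopes: tuple[str, ...]  # the only permissions this instance receives

def run_model(step: Step, payload: dict) -> dict:
    """Stand-in for launching a throwaway model instance with step.scopes,
    passing it payload, and destroying the instance when it returns."""
    print(f"[{step.name}] model={step.model} scopes={step.scopes}")
    return {**payload, step.name: "done"}

# A workflow composed of isolated, observable, single-purpose instances,
# so no single AI ever holds the union of all permissions.
workflow = [
    Step("extract", model="small-extractor", scopes=("read:invoices.amount",)),
    Step("validate", model="rules-checker", scopes=("read:invoices.due_date",)),
    Step("notify", model="templater", scopes=("send:email.billing",)),
]

payload: dict = {"invoice_id": "inv-123"}
for step in workflow:
    payload = run_model(step, payload)   # fresh instance, minimal blast radius
print(payload)
```

Because each instance receives only its own step's scopes and is destroyed afterward, no single AI accumulates the breadth of access a human insider would need to do the same job.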