"Only if you have a whitelist (can be implemented as "limited range") of actions."
I'm not sure this is the only way, but it is one way. You still cannot guarantee this with a human admin, but, more importantly, you are still missing the point: AI is not an employee with agency; it is a tool that a human admin can control, like any other form of automation.
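To make the "limited range of actions" concrete, here is a minimal sketch (Python, with hypothetical names, not any real product's API) of the kind of allowlist gate a harness could put between an automated tool and the systems it administers: anything not explicitly permitted is rejected and logged.

```python
# Minimal sketch of an action allowlist ("limited range" of actions) for an
# automated admin tool. All names here are hypothetical illustrations.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("action-gate")

# Explicit allowlist: every permitted action and the arguments it may take.
ALLOWED_ACTIONS = {
    "restart_service": {"name"},
    "read_log": {"path"},
    "rotate_credentials": {"account"},
}

@dataclass
class ActionRequest:
    action: str
    args: dict

class ActionDenied(Exception):
    pass

def execute(request: ActionRequest) -> None:
    """Run the request only if the action and its arguments are allowlisted."""
    allowed_args = ALLOWED_ACTIONS.get(request.action)
    if allowed_args is None:
        log.warning("denied: action %r is not on the allowlist", request.action)
        raise ActionDenied(request.action)
    extra = set(request.args) - allowed_args
    if extra:
        log.warning("denied: unexpected arguments %s for %r", extra, request.action)
        raise ActionDenied(request.action)
    log.info("allowed: %r %s", request.action, request.args)
    # dispatch to the real implementation here (omitted in this sketch)

# A permitted action passes; anything else is refused, whoever requested it.
execute(ActionRequest("restart_service", {"name": "nginx"}))
try:
    execute(ActionRequest("drop_database", {"name": "prod"}))
except ActionDenied:
    pass
```

The point of the sketch is that the control lives outside the tool: the admin defines the range, and the tool cannot widen it.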
You haven’t heard it before because you probably don’t work in IT.
AI is a tool - just the next evolution in automation, like scripting or macros. It's not an "employee" with benefits or intent. Framing it as an insider-versus-outsider threat fundamentally misreads how we've always approached internal risk in real-world systems. IT has always been about doing more with fewer people, using the tools available. This framing doesn't reflect practice - it reflects a lack of exposure to it.
This is a decades-old problem—internal IT has always had to trust human admins at some point. But the biggest threats to systems are rarely technical; they're social. A properly aligned AI is actually easier to control than a human—an AI won’t plug in a USB stick it found in the parking lot out of curiosity. Most importantly, internal AI alignment can be explicitly enforced. Human alignment cannot. This analysis doesn’t really reflect how real-world information systems are managed.
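As a hedged illustration of what "explicitly enforced" can mean in practice (hypothetical names again, not a specific product), the same harness could refuse to run anything tagged as privileged without a recorded human approval, and keep an audit trail the tool cannot skip:

```python
# Hypothetical sketch: enforcement the tool cannot opt out of. Privileged
# actions require explicit, recorded human approval before they run.
from datetime import datetime, timezone

PRIVILEGED = {"rotate_credentials", "delete_user"}
AUDIT_LOG: list[dict] = []

def approved_by_human(action: str) -> bool:
    # Stand-in for a real approval workflow (ticket, two-person rule, etc.).
    return input(f"Approve privileged action {action!r}? [y/N] ").strip().lower() == "y"

def run(action: str, **args) -> None:
    """Record every request; block privileged actions without human sign-off."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(), "action": action, "args": args}
    if action in PRIVILEGED and not approved_by_human(action):
        entry["result"] = "blocked"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{action} requires human approval")
    entry["result"] = "executed"
    AUDIT_LOG.append(entry)
    # dispatch to the real implementation here (omitted)
```

You can write this kind of rule down and have the system enforce it every time; you cannot compile a policy like that into a human admin.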