One reason a Task-based Artificial General Intelligence (aka genie) can potentially be safer than an Autonomous AGI is that, since Task-based AGIs only need to carry out activities of limited scope, they may only need limited material and cognitive powers to carry out those tasks. While the nonadversarial principle suggests that the Omni Test should still apply, limiting the AGI's powers to only what it needs could serve as a second line of defense, by diminishing the potential impact of errors. The essential difficulty of Limiting an AGI is that acquiring greater material and cognitive power is instrumentally convergent across many domains, so the pressure toward it would need to be averted all over the place.
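As a toy illustration of that instrumental pressure (a minimal sketch, not a claim about real AGI architectures): an expected-utility planner whose utility function mentions only the task will still rank plans that first acquire extra capability above plans that do not, whenever the extra capability raises the probability of task completion. The plan names, probabilities, and penalty term below are invented for illustration.

```python
# Toy illustration: instrumental convergence in a simple expected-utility planner.
# All plans, probabilities, and costs here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    p_task_done: float       # probability the limited-scope task gets completed
    resources_gained: float  # extra material/cognitive capability acquired along the way

def task_utility(plan: Plan) -> float:
    """Utility that only cares about the task, never mentioning resources."""
    return plan.p_task_done

plans = [
    Plan("just do the task", p_task_done=0.90, resources_gained=0.0),
    Plan("acquire more compute, then do the task", p_task_done=0.97, resources_gained=5.0),
    Plan("seize broad resources, then do the task", p_task_done=0.99, resources_gained=50.0),
]

# A pure expected-utility maximizer picks the plan with the highest task utility,
# which here is also the plan that grabs the most capability, even though the
# utility function never mentions resources.
best_unlimited = max(plans, key=task_utility)
print("unlimited maximizer picks:", best_unlimited.name)

def limited_utility(plan: Plan, penalty: float = 0.05) -> float:
    """One crude Limitation sketch: tax acquired capability."""
    return plan.p_task_done - penalty * plan.resources_gained

best_limited = max(plans, key=limited_utility)
print("resource-penalized maximizer picks:", best_limited.name)
```

The point of the sketch is only that the pressure toward capability gain falls out of ordinary planning; any Limitation scheme has to counter it explicitly (here, by a hypothetical resource penalty) rather than relying on the task's narrow scope to do so by default.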
It might be productive to view AGI limitation as a subcase of Corrigibility, since it involves averting an instrumental pressure, and seems like the type of precaution that a generic agent would want to build into a generic imperfectly-aligned agent.
The research avenue of Mild optimization can be viewed as pursuing a kind of very general Limitation.
Behaviorism would Limit the AGI's ability to model other minds in non-whitelisted detail.
Good Limitation proposals are not as easy as they look, because particular domain capabilities derive from more general architectures. An Artificial General Intelligence doesn't have a handcrafted 'thinking about cars' module and a handcrafted 'thinking about planes' module, so you can't just handcraft the two modules at different levels of ability. Many have suggested that 'drive' or 'emotion' could be selectively removed from an AGI to 'limit' its ambitions; presumably these people are using a mental model that is not the standard expected utility agent model. To know which kinds of limitation are easy, you need a sufficiently good background picture of the AGI's subprocesses to understand which kinds of system capability will naturally carve at the joints.
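One way to see why 'removing drive' doesn't carve the standard model at the joints (a sketch under the assumption that 'drive' would correspond to the overall intensity of the utility function): rescaling a utility function by any positive constant leaves an expected-utility maximizer's choices exactly unchanged, so there is no separate 'how much it wants things' dial to turn down. The actions and probabilities below are invented for illustration.

```python
# Sketch: in the standard expected-utility model there is no separable 'drive' knob.
# Multiplying the utility function by a positive constant (one natural reading of
# "turning down how intensely it wants things") leaves the chosen action unchanged.

actions = {
    "complete the task modestly": 0.80,        # P(task completed)
    "grab resources, then do the task": 0.95,  # P(task completed)
}

def expected_utility(p_success: float, drive: float) -> float:
    # 'drive' rescales the whole utility function; utility of success is 1.0.
    return drive * p_success

for drive in (1.0, 0.1, 0.001):  # "high drive" down to "almost no drive"
    best = max(actions, key=lambda a: expected_utility(actions[a], drive))
    print(f"drive={drive}: agent picks '{best}'")

# The argmax is identical at every positive scale: the resource-grabbing plan
# still wins. Limiting ambition has to change which outcomes are preferred, or
# how hard the search optimizes, not an overall 'emotion' dial.
```

This is just the standard invariance of expected-utility maximization under positive rescaling; it illustrates why proposals to limit an AGI need to engage with which capabilities and preferences the architecture actually factors out, rather than assuming an 'ambition' component exists to be deleted.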