A task-based AGI or "genie" is an AGI intended to follow a series of human orders, each of limited scope - "satisficing" in the sense that each order can be accomplished using a bounded amount of effort and resources (as opposed to open-ended goals that can always be fulfilled a little more by expending a little more effort).
In Bostrom's typology, a Genie contrasts with a Sovereign AGI that acts autonomously, in the pursuit of long-term real-world goals.
Building a safe Task AGI might be easier than building a safe Sovereign for the following reason: restricting the AGI to tasks of limited scope plausibly makes the safety problems easier, if not easy, to solve. It might step the problem down from "impossibly difficult" to "insanely difficult," while still leaving the AI enough power to perform pivotal acts.
The obvious disadvantage of a Genie is moral hazard: it may tempt its users in ways that a Sovereign would not. A Sovereign poses moral hazard chiefly during the development phase, when the programmers and users are perhaps not yet in a position of special relative power; a Genie poses ongoing moral hazard for as long as it is used.
The problem of making a safe Genie involves numerous subtopics such as low impact, mild optimization, and conservatism, as well as numerous standard AGI safety problems like reflective stability and safe identification of intended goals.
Eliezer Yudkowsky has suggested that people only confront many important problems in value alignment when they are thinking about Sovereigns, but that, at the same time, Sovereigns may be impossibly hard to build safely in practice. Yudkowsky therefore advocates thinking about Sovereigns first and listing out all the associated issues before stepping down to Genies: thinking about Genies first risks premature pruning, whereas thinking about Sovereigns is more likely to generate a complete list of problems, which can then be checked against particular Genie approaches to see whether any of those problems have become easier.
Three distinct subtypes of Genie are:
Some further problems, beyond those appearing above, are: