A "Genie" is an AI design putatively meant to follow human orders, primarily relying on the human ability to discern short-term strategies that achieve long-term value. (Possibly with some boost from the AI describing long-term consequences to the programmers, but not relying primarily on the AI's own ability to identify long-term valuable outcomes.) Since a Genie does not have to act with total autonomy and no further human input, it can also potentially be limited in various other respects that arguably decrease the potential for immediate catastrophe, and incorporate humans into its decision loops as further checks. Genie theory is then an umbrella term for subtopics in the theory of limited optimization, online checkability, and safe identification of intended goals.
The term "Genie" was coined in [Bostrom's Superintelligence], which distinguished Genies from Sovereigns that do their own long-term strategizing and act without further checks.
The term "Genie" is not synonymous with "Limited AI" since in principle we could have a bounded rational agent that was a Genie solely in virtue of its order-following preference framework, without any further attempt to limit capabilities.
The primary argument for pursuing Genies is the hope of averting some of the value alignment problems associated with Sovereigns, thereby stepping down the problem difficulty from "impossibly difficult" to "insanely difficult", while still obtaining enough power from the AI to be decisive for the larger dilemma of value achievement.
The primary argument against pursuing Genies is a mixture of moral hazard and the worry that the diminished difficulty might be merely illusory (like setting out to develop a computer which is very good at addition but not multiplication).
Eliezer Yudkowsky has suggested that people confront many important problems in value alignment only when they are thinking about Sovereigns, but that at the same time, Sovereigns may be impossibly hard to build in practice. Yudkowsky advocates thinking about Sovereigns first and listing out all the associated issues before stepping down to Genies: thinking about Genies first may result in premature pruning, whereas thinking about Sovereigns is more likely to generate a complete list of problems that can then be checked against particular Genie approaches to see whether they have become any easier.
Three distinguished subtypes of Genie are these:
Some subtopics of Genie theory are these: