A task-based AGI, or "genie", is an AGI intended to follow a series of human-originated orders or Tasks, with these Tasks each being of limited scope - "satisficing" in the sense that they can be accomplished using bounded amounts of effort and resources (as opposed to goals that are more and more fulfillable using more and more effort).
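The satisficing-versus-maximizing distinction can be caricatured in a few lines of code. The plans, utilities, and threshold below are invented purely for illustration; nothing here is a proposed design:

```python
def pursue_task(candidates, utility, threshold):
    """Satisficing: accept the first candidate whose utility clears a
    bounded threshold, rather than searching for the global maximum."""
    for plan in candidates:
        if utility(plan) >= threshold:
            return plan          # bounded effort: stop as soon as "good enough"
    return None                  # no acceptable plan among the candidates

def pursue_goal(candidates, utility):
    """Maximizing: always prefers more utility, so it examines every
    candidate (and, in the limit, wants more resources to generate more)."""
    return max(candidates, key=utility)

# Toy example with made-up plans and utilities:
plans = ["walk", "bike", "charter a rocket"]
speed = {"walk": 1, "bike": 5, "charter a rocket": 1000}.get

print(pursue_task(plans, speed, threshold=3))   # 'bike' - good enough, stop there
print(pursue_goal(plans, speed))                # 'charter a rocket' - always wants more
```

The point of the contrast: the satisficer is indifferent between outcomes above the threshold, so it has no incentive to escalate effort, while the maximizer's preference is open-ended.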
In Bostrom's typology, this is termed a "Genie". It contrasts with a "Sovereign" AGI that acts autonomously in the pursuit of long-term real-world goals.
Building a safe Task AGI might be easier than building a safe Sovereign for the following reasons:
- The Genie does not need to act with total autonomy, and can query its users before carrying out orders.
- The Genie does not need to discern for itself the best long-term futures; it can rely primarily on the human ability to identify short-term strategies that achieve long-term value.
- The Genie can potentially be limited in various other ways (e.g. it does not have to engage in all-out self-improvement).

Relative to the problem of building a Sovereign, trying to build a Task AGI instead might step down the problem from "impossibly difficult" to "insanely difficult", while still maintaining enough power in the AI to perform pivotal acts.

The problem of making a safe genie invokes numerous subtopics such as low impact, mild optimization, and conservatism, as well as numerous standard AGI safety problems like reflective stability and safe identification of intended goals.
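The check-in property above - querying users before carrying out orders - can be sketched as a toy loop. Every function name and stand-in here is hypothetical illustration, not a real AGI design:

```python
def run_task_agent(orders, plan, execute, user_approves):
    """Minimal sketch: for each bounded order, propose a plan and wait
    for explicit human confirmation before acting. The callables are
    hypothetical stand-ins supplied by the caller."""
    results = []
    for order in orders:
        proposed = plan(order)               # propose a bounded plan for this order
        if user_approves(order, proposed):   # human stays in the decision loop
            results.append(execute(proposed))
        else:
            results.append(None)             # declined plans are simply not executed
    return results

# Toy usage with trivial stand-ins:
out = run_task_agent(
    orders=["fetch coffee"],
    plan=lambda o: f"plan for {o}",
    execute=lambda p: f"done: {p}",
    user_approves=lambda o, p: True,
)
print(out)  # ['done: plan for fetch coffee']
```

The design point is only that approval sits between planning and execution, so a human check is structurally required rather than optional.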
The term "Genie" was coined in [Bostrom's Superintelligence], which distinguished Genies from Sovereigns that do their own long-term strategizing and act without further checks.
The primary argument against pursuing Genies is a mixture of moral hazard, and the worry that the diminished difficulty might be merely illusory.
The term "Genie" is not synonymous with "Limited AI" since in principle we could have an agent that was a Genie solely in virtue of its order-following preference framework, without any further attempt to limit capabilities.
The obvious disadvantage of a Task AGI is moral hazard - it may tempt the users in ways that a Sovereign would not. A Sovereign has moral hazard chiefly during the development phase, when the programmers and users are perhaps not yet in a position of special relative power. A Task AGI has ongoing moral hazard as it is used.
Eliezer Yudkowsky has suggested that people only confront many important problems in value alignment when they are thinking about Sovereigns, but that at the same time, Sovereigns may be impossibly hard in practice. Yudkowsky advocates that people think about Sovereigns first and list out all the associated issues before stepping down their thinking to Task AGIs, because thinking about Task AGIs may result in premature pruning, while thinking about Sovereigns is more likely to generate a complete list of problems that can then be checked against particular Task AGI approaches to see if those problems have become any easier.
Three distinguished subtypes of Task AGI are these:
-
Some subtopics of Genie theory are these:
-