The Orthogonality Thesis states that an artificial intelligence can have any combination of intelligence level and final goal; that is, its final goals and intelligence level can vary independently of each other. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal.
An arbitrarily intelligent agent could have what to us seems like a very "stupid" goal. The thesis was originally defined by Nick Bostrom in the paper "The Superintelligent Will" (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence to be instrumental rationality.
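The thesis's core claim, that goal and capability are independent axes, can be illustrated with a toy sketch (hypothetical code, not from any source; "intelligence" is crudely modeled as planning depth and a goal as a utility function over action sequences):

```python
# Toy illustration (hypothetical): an agent's goal (a utility function) and
# its "intelligence" (modeled here as planning depth) are independent
# constructor parameters -- any pairing is admissible.
from itertools import product

def make_agent(utility, depth):
    """Return a planner that enumerates action sequences of length `depth`
    and picks the one maximizing `utility`."""
    def plan(actions):
        return max(product(actions, repeat=depth), key=utility)
    return plan

# A more capable agent (deeper search) paired with a seemingly "stupid" goal:
# maximize the number of 'paperclip' actions taken.
paperclip_utility = lambda seq: seq.count("paperclip")
smart_paperclipper = make_agent(paperclip_utility, depth=3)
print(smart_paperclipper(["paperclip", "rest"]))
# -> ('paperclip', 'paperclip', 'paperclip')
```

Swapping in any other utility function, or any other depth, leaves the construction unchanged; that is the sense in which the two axes vary independently.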
Related Tags: Complexity of Value, Decision Theory, General Intelligence, Utility Functions
It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit the space of possible AIs, as Stuart Armstrong has argued.
One reason many researchers assume superintelligences will converge to the same goals may be that most humans have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI would acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as AIXI and Gödel machines, the thesis is known to be true. Furthermore, if the thesis were false, then Oracle AIs would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.
There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. These goals inherently limit the degree of intelligence of the AI.