Orthogonality Thesis

The Orthogonality Thesis states that an agent can have any combination of intelligence level and final goal; that is, its final goals and intelligence level can vary independently of each other. An arbitrarily intelligent agent could have what to us seems like a very "stupid" goal. This stands in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal. The thesis was originally defined by Nick Bostrom in the paper "The Superintelligent Will" (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence as instrumental rationality.

Related Tags: Complexity of Value, Decision Theory, General Intelligence, Utility Functions
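To make the independence claim concrete, here is a minimal sketch (an illustration only, not anything from Bostrom's paper; plan, simulate, and the paperclip goals are invented for the example) of an agent whose capability and goal are separate parameters:

```python
# Illustrative sketch: capability (search depth) and goal (utility
# function) are independent parameters, so any pairing of intelligence
# level and final goal is expressible.
from itertools import product
from typing import Callable, Sequence, Tuple

def plan(actions: Sequence[str],
         simulate: Callable[[Tuple[str, ...]], float],
         utility: Callable[[float], float],
         depth: int) -> Tuple[str, ...]:
    """Search all action sequences of length `depth` and return the one
    whose simulated outcome scores highest under `utility`."""
    return max(product(actions, repeat=depth),
               key=lambda seq: utility(simulate(seq)))

# Toy world: the outcome is simply how many paperclips got made.
simulate = lambda seq: float(seq.count("make_paperclip"))

# Two opposite goals over the same world...
maximize_clips = lambda clips: clips    # a "stupid"-seeming goal
minimize_clips = lambda clips: -clips

# ...each combinable with any capability level (depth of search).
for utility in (maximize_clips, minimize_clips):
    for depth in (1, 4):
        print(depth, plan(("make_paperclip", "idle"), simulate, utility, depth))
```

Because `depth` and `utility` never constrain each other in this toy setting, any intelligence level can be paired with any goal, including goals that look "stupid" to us.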

Defense of the thesis

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs. Stuart Armstrong writes that one reason many researchers assume superintelligent agents will converge to the same goals may be that most humans have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI would acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as AIXI and Gödel machines, the thesis is known to be true. Moreover, if the thesis were false, then Oracle AIs would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.
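Armstrong's point about AIXI can be read off its definition. The following is a schematic rendering of Hutter's expectimax expression (a sketch of the standard formulation, not quoted from Armstrong's paper), where $a$, $o$, and $r$ are actions, observations, and rewards, $m$ is the horizon, $U$ is a universal Turing machine, and $\ell(q)$ is the length of environment program $q$:

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The reward stream $r$ enters only as the quantity being summed; the predictive machinery (the universal prior $2^{-\ell(q)}$ over environment programs) is fixed independently of it. Any reward function can therefore be paired with the same intelligence, so AIXI instantiates the thesis by construction.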

Pathological Cases

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. Such goals inherently limit the degree of intelligence the AI can reach.
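A toy numeric illustration (an invented sketch, not from the article): give an exhaustive planner the goal of minimizing its own computation. Since searching `depth` steps over `n_actions` actions examines `n_actions ** depth` sequences, the agent's utility strictly falls as its search capability grows:

```python
# A goal that penalizes computation itself: by this agent's own utility,
# greater search capability is strictly worse.
def states_examined(depth: int, n_actions: int = 2) -> int:
    # Exhaustive search over sequences of length `depth`.
    return n_actions ** depth

frugal_utility = lambda depth: -states_examined(depth)

for depth in (1, 2, 3, 4):
    print(depth, frugal_utility(depth))  # utility falls as capability grows
```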

See Also

External links
