A problem is Friendly AI-complete if solving it is equivalent to creating Friendly AI. Friendliness theory is one proposal among many for ushering in safe Artificial General Intelligence technology. The Machine Intelligence Research Institute argues that any safe superintelligence architecture is FAI-complete. For example, the following have been proposed as hypothetical safe AGI designs:

In "Dreams of Friendliness", Eliezer Yudkowsky argues that if you have an Oracle AI, then you can ask it, "What should I do?" If it can answer this question correctly, then it is FAI-complete.

Goertzel proposed a "Nanny AI" ("Should humanity build a global AI nanny to delay the singularity until it's better understood?") with moderately superhuman intelligence, able to forestall the Singularity indefinitely or to delay it. However, Luke Muehlhauser and Anna Salamon argue ("Intelligence Explosion: Evidence and Import") that a Nanny AI is FAI-complete: they claim that building it could require solving all the problems required to build Friendly Artificial Intelligence.

Similarly, if you have a Tool AI, it must make extremely complex decisions about how many resources it can use, how to display answers in a human-understandable yet accurate form, et cetera. The many ways in which it could choose catastrophically make it FAI-complete as well. Note that none of this implies that an agent-like, fully free FAI is easier to create than any of the other proposals.