Oracle AI

Joel Burget (+10/-10)
Multicore (+9)
Ruby (+50/-50)
Kaj_Sotala (+509/-569)
pedrochaves (+340/-144)
pedrochaves (+6/-7)
pedrochaves (+233/-47)
pedrochaves (+142/-81)
pedrochaves (+30/-6)
pedrochaves (+20)

In a related work, Dreams of Friendliness, Eliezer Yudkowsky gives an informal argument stating that all oracles will be agent-like, that is, driven by its own goals. He rests on the idea that anything considered "intelligent" must choose the correct course of action among all actions avaliable.available. That means that the Oracle will have many possible things to believe, although very few of them are correct. Therefore believing the correct thing means some method was used to select the correct belief from the many incorrect beliefs. By definition, this is an optimization process which has a goal of selecting correct beliefs.

The question of whether Oracles – or just keeping an AGI forcibly confined - are safer than fully free AGIs has been the subject of debate for a long time. Armstrong, Sandberg and Bostrom discuss Oracle AI safety at length in their Thinking inside the box: using and controlling an Oracle AI. In the paper, the authors propose a generic theoretical conceptual architecture to create such a system and review various methods which might be used to measure thean Oracle's accuracy. They also try to shed some light on some weaknesses and dangers that can emerge on the human side (suchside, such as psychological vulnerabilities which can be exploited by the Oracle through social engineering, for example). Someengineering. The paper discusses ideas for physical security – also known as “boxing” - are also discussed(“boxing”), as well as which questions may be safeproblems involved with trying to ask, utility indifference, and many other factors. The paper’sprogram the AI to only answer questions. In the end, the paper reaches the cautious conclusion that Oracles – or AI boxing concepts in general - areof Oracle AIs probably being safer than fully free agent AIs has been a subject of debate for a long time.AGIs.

AfterOne can then imagine all the establishmentthings that might be useful in achieving the goal of a goal, one can imagine things the optimization process might do towards achieving that goal. This means that, for"have correct beliefs". For instance, this process could involve gatheringacquiring more computing power and resources to answers questions. could help this goal. As such, an Oracle could determine that it might answer more accurately and easily to a certain question if it turned all matter outside the box into computronium, therefore killing all the existing life.

Armstrong, Sandberg and Bostrom discuss Oracle AI safety at length in their Thinking inside the box: using and controlling an Oracle AI. TheIn the paper, the authors propose a generic theoretical conceptual architecture to create such a system, besides reviewing how onesystem and review various methods which might be used to measure it accuracy andthe Oracle's accuracy. They also try to shed some light on some weaknesses and dangers that can emerge on the human level considerations. Amongside (such as psychological vulnerabilities which can be exploited by the last areOracle through social engineering, for example). Some ideas for physical security – also known as “boxing” -, the potential for the oracle to use social engineering,- are also discussed as well as which questions may be safe to ask, utility indifference, and many other factors. The paper’s conclusion that Oracles – or AI boxing concepts in general - are safer than fully free agent AIs has been a subject of debate for a long time.

After the establishment of a goal, one can imagine things the optimization process might do towards achieving that goal. This means that, for instance, this process could involve gathering more computing power and resources to answers questions. As such, an Oracle could determine that it might answeraanswer more accurately and easily to a certain question if it turned all matter outside the box in computronium, therefore killing all the existing life.

In a related work, Dreams of Friendliness, Eliezer Yudkowsky gives an informal argument stating that all oracles will be agent-like.like, that is, driven by its own goals. He rests on the idea that anything considered "intelligent" must choose the correct course of action among all actions avaliable. That means that the Oracle will have many possible things to believe, although very few of them are correct. Therefore believing the correct thing means some method was used to select the correct belief from the many incorrect beliefs. By definition, this is an optimization process which has a goal of selecting correct beliefs.

After the establishment of a goal, one can imagine things the optimization process might do towards achieving that goal. This means that, for instance, thethis process could involve gathering more computing power and resources to answers questions. As such, an Oracle could answerdetermine that it might answera more accurately and easily to a certain question if it killed all life on earth or turnturned all matter outside the box in computronium., therefore killing all the existing life.

Armstrong, Sandberg and Bostrom discuss Oracle AI safety at length in their Thinking inside the box: using and controlling an Oracle AI. The authors propose a conceptual architecture to create such a system, besides reviewing how one might measure it accuracy and shed some light on human level considerations. Among the last are physical security – also known as “boxing” -, the potential for the oracle to use social engineering, which questions may be safe to ask, utility indifference, and many other factors. The paper’s conclusion that Oracles – or AI boxing concepts in general - are safer than fully free agent AIs has however raised much debate.been a subject of debate for a long time.

In a related work, Dreams of Friendliness, Eliezer Yudkowsky gives an informal argument stating that all oracles will be agent-like. He rests on the idea that anything considered "intelligent" must be an optimization process.choose the correct course of action among all actions avaliable. That means that the Oracle will have many possible things to believe andbelieve, although very few correct beliefs.of them are correct. Therefore believing the correct thing means some method was used to select the correct belief from the many incorrect beliefs. By definition, this is an optimization process which has a goal of selecting correct beliefs. After the establishment of a goal, one can imagine things the optimization process might do towards that goal. This means that, for instance, the Oracle could answer more accurately and easily to a certain question if it killed all life on earth or turn all matter outside the box in computronium.

The paper’s conclusion that Oracles – or AI boxing concepts in general - are safer than fully free agent AIs has however raised much debate.

In a related work, Dreams of Friendliness, Eliezer Yudkowsky gives an informal argument stating that all oracles will be agent-like. ItHe rests on the factidea that anything considered "intelligent" must be an optimization process. That means that the Oracle will have many possible things to believe and very few correct beliefs. Therefore believing the correct thing means some method was used to select the correct belief from the many incorrect beliefs. By definition, this is an optimization process which has a goal of selecting correct beliefs. After the establishment of a goal, one can imagine things the optimization process might do towards that goal. This means that, for instance, the Oracle could answer more accurately and easily to a certain question if it killed all life on earth or turn all matter outside the box in computronium.

The paper’s conclusion that Oracles – or AI boxing concepts in general - are safer than fully free agent AIs has raised much debate. In Dreams of Friendliness, Eliezer Yudkowsky gives an informal argument stating that all oracles will be agent-like. It rests on the fact that anything considered "intelligent" must be an optimization process. That means that the Oracle will have many possible things to believe and very few correct beliefs. Therefore believing the correct thing means some method was used to select the correct belief from the many incorrect beliefs. By definition, this is an optimization process which has a goal of selecting correct beliefs. After the establishment of a goal, one can imagine things the optimization process might do towards that goal. This means that, for instance, the Oracle could answer more accurately and easily to a certain question if it killed all life on earth or turn all matter outside the box in computronium.

Load More (10/17)