
It has been pointed out by Eliezer Yudkowsky and others that AIXI does not model itself. AIXI simply computes the best possible action at each step, evaluating each candidate action by recursively computing the value of the best actions at subsequent steps, and so on out to its planning horizon.
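For reference, the standard expectimax statement of AIXI's action selection (due to Hutter) is roughly the following, where the $a_i$ are actions, the $o_i$ and $r_i$ are observations and rewards, $m$ is the horizon, and $U$ is a universal Turing machine running programs $q$ of length $\ell(q)$:

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \max_{a_{k+1}} \sum_{o_{k+1} r_{k+1}} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Nothing in this expression refers to the agent itself: each max operator simply assumes that the value-maximizing action will in fact be taken at that future step.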

AIXI is very simple math. It does not include a model of itself with which to predict what actions it will take in the future. Implicit in its definition is the assumption that, up until its horizon, it will keep choosing the action that maximizes expected future value. The definition assumes the maximizing action will always be chosen, even in scenarios where the agent's implementation is predictably destroyed or changed. This assumption fails for real-world implementations, which may malfunction, self-modify, be destroyed, or be altered.
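As a concrete illustration, here is a minimal Python sketch of this kind of expectimax recursion. It is not AIXI itself: the names (ACTIONS, env_model, expected_value, aixi_like_action) are hypothetical, and a tractable env_model stands in for AIXI's incomputable Solomonoff mixture. The point is only where the anvil problem lives in the structure: the recursion hard-codes that the maximizing action is taken at every future step, and nowhere represents the agent's own hardware.

```python
# Minimal expectimax sketch (hypothetical; not Hutter's actual formalism).
# `env_model(history, action)` is assumed to yield (percept, reward, probability)
# triples from a tractable environment model standing in for the Solomonoff mixture.

ACTIONS = ["a0", "a1"]  # placeholder finite action set


def expected_value(history, action, env_model, horizon):
    """Expected future reward of `action`, assuming every later action is
    again chosen by maximization. The agent's own implementation never
    appears here, so its destruction or modification cannot be represented."""
    if horizon == 0:
        return 0.0
    total = 0.0
    for percept, reward, prob in env_model(history, action):
        future = history + [(action, percept, reward)]
        # The recursion simply assumes the maximizing action is always taken
        # at the next step -- this is the assumption the anvil problem targets.
        best_next = max(
            expected_value(future, a, env_model, horizon - 1) for a in ACTIONS
        )
        total += prob * (reward + best_next)
    return total


def aixi_like_action(history, env_model, horizon):
    """Choose the action with the highest expected reward up to the horizon."""
    return max(ACTIONS, key=lambda a: expected_value(history, a, env_model, horizon))
```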

Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation that could be altered or that could misbehave because of bugs. The AIXI formalism ignores these possibilities entirely (Yampolskiy & Fox, 2012).

This is called the Anvil problem: AIXI would not care if an anvil were about to drop on its head.

The "Anvil problem" is not a mere detail necessarily left out of a formalized abstraction. Self-analysis and self-modification are likely to be essential parts of any future Friendly AI. First, as the AI must strive to avoid changes in its own goal system, the question of self-modeling cannot be ignored. Our decision theory must be improved to include reflection.

Second, because human values are not well understood or formalized, the FAI may need to refine its goal of maximizing human values. Refining the goal without changing its essentials is another demanding problem in reflective decision theory.

Third, an FAI may choose to self-improve, to enhance its own intelligence to better achieve its goals. It may do so by altering its own implementation or by creating a new generation of AI, perhaps without regard for the destruction of the current implementation, so long as the new system can better achieve the goals. All these forms of self-modification again raise central questions about the self-model of the AI, which, as mentioned, is ignored by AIXI.

References

Yampolskiy, R. V., & Fox, J. (2012). Artificial General Intelligence and the Human Mental Model. In A. H. Eden, J. Søraker, J. H. Moor, & E. Steinhart (Eds.), The Singularity Hypothesis. The Frontiers Collection. London: Springer.