AIXI

Edited by Eliezer Yudkowsky, Brian Muhia, et al. last updated 6th Oct 2017
Requires: Solomonoff induction, Expected utility

Marcus Hutter's AIXI is the perfect rolling sphere of advanced agent theory - it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn't be a god; it'd be something that could tear apart a god like tinfoil.
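The Solomonoff core described above can be illustrated with a finite toy model. Everything below (the hypothesis class, the description lengths, the function names) is invented for illustration: real Solomonoff induction mixes over *all* programs for a universal Turing machine, which makes it uncomputable, and full AIXI additionally maximizes expected reward over action sequences on top of this kind of predictor.

```python
from fractions import Fraction

# Toy "Solomonoff-style" predictor: a finite stand-in for the space of all
# computable hypotheses. Each hypothesis deterministically generates a bit
# sequence and carries a made-up description length; its prior weight is
# 2^-length, so simpler explanations start out more probable.
HYPOTHESES = {
    # name: (description_length_in_bits, generator mapping index -> bit)
    "all_zeros":   (2, lambda i: 0),
    "all_ones":    (2, lambda i: 1),
    "alternating": (3, lambda i: i % 2),
    "period_3":    (5, lambda i: 1 if i % 3 == 0 else 0),
}

def posterior(observed_bits):
    """Renormalized weights of the hypotheses consistent with the data.

    Updating is by elimination: a deterministic hypothesis that mispredicts
    any observed bit drops to probability zero; the survivors keep their
    relative 2^-length prior weights.
    """
    weights = {}
    for name, (length, gen) in HYPOTHESES.items():
        if all(gen(i) == bit for i, bit in enumerate(observed_bits)):
            weights[name] = Fraction(1, 2 ** length)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def predict_next(observed_bits):
    """Probability that the next bit is 1, mixing surviving hypotheses."""
    post = posterior(observed_bits)
    n = len(observed_bits)
    return sum(p for name, p in post.items() if HYPOTHESES[name][1](n) == 1)
```

With no data, the prediction is a simplicity-weighted mixture over all four hypotheses; after observing `[0, 1, 0, 1]`, only `alternating` survives and the predictor becomes certain the next bit is 0.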

Further information:

  • Marcus Hutter's book on AIXI
  • Marcus Hutter's gentler introduction
  • Wikipedia article on AIXI
  • LessWrong Wiki article on AIXI
  • AIXIjs: Interactive browser demo and General Reinforcement Learning tutorial (JavaScript)
Parents:
  • Central examples
  • Methodology of unbounded analysis
Children:
  • AIXI-tl
Posts tagged AIXI
  • 171 An Intuitive Explanation of Solomonoff Induction (Alex_Altair, 13y, 230 comments)
  • 50 Failures of an embodied AIXI (So8res, 11y, 47 comments)
  • 16 Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect (RogerDearnaley, 2y, 2 comments)
  • 44 The Problem with AIXI (Rob Bensinger, 12y, 80 comments)
  • 22 Intuitive Explanation of AIXI (Thomas Larsen, 3y, 2 comments)
  • 87 mAIry's room: AI reasoning to solve philosophical problems (Stuart_Armstrong, 6y, 41 comments)
  • 46 New intro textbook on AIXI (Alex_Altair, 1y, 8 comments)
  • 46 Launching new AIXI research community website + reading group(s) (Cole Wyeth, 1mo, 2 comments)
  • 39 Program Search and Incomplete Understanding (Diffractor, 7y, 1 comment)
  • 30 Versions of AIXI can be arbitrarily stupid (Stuart_Armstrong, 10y, 59 comments)
  • 29 Occam's Razor and the Universal Prior (Peter Chatain, 4y, 5 comments)
  • 29 Potential Alignment mental tool: Keeping track of the types (Donald Hobson, 4y, 1 comment)
  • 26 Rebuttals for ~all criticisms of AIXI (Cole Wyeth, 8mo, 17 comments)
  • 26 A utility-maximizing varient of AIXI (AlexMennen, 13y, 22 comments)
  • 24 Save the princess: A tale of AIXI and utility functions (Anja, 13y, 11 comments)