Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

The link no longer works (I get "This project has not yet been moved into the new version of Overleaf. You will need to log in and move it in order to continue working on it."). Would you be willing to re-post it or move it so that it is visible?

See if this works.

That link works, thanks!

In the interests of beautifying the output, wherever you want to include words in a LaTeX mathematics context, it should not be done like this: $f(AIXI\ output)$, but like this: $f(\mbox{\textit{AIXI output}})$. That will ensure that the word or phrase is properly kerned and spaced, instead of being treated like a sequence of one-letter mathematical symbols.
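A minimal LaTeX sketch of the contrast (the surrounding document setup is only there to make the snippet compilable on its own):

    \documentclass{article}
    \begin{document}
    % Math mode sets the letters as a string of one-letter symbols,
    % italicized and kerned as if they were products of variables:
    $f(AIXI\ output)$

    % Wrapping the phrase in \mbox{\textit{...}} typesets it as ordinary
    % (italic) text, with normal word spacing and kerning:
    $f(\mbox{\textit{AIXI output}})$
    \end{document}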

Note that the problem with exploration already arises in ordinary reinforcement learning, without going into "exotic" decision theories. Regarding the question of why humans don't seem to have this problem, I think it is a combination of the following (see the toy sketch after the list):

  • The universe is regular (which is related to what you said about "we can't see any plausible causal way it could happen"), so a Bayes-optimal policy with a simplicity prior has something going for it. On the other hand, sometimes you do need to experiment, so this can't be the only explanation.

  • Any individual human has parents that teach em things, including things like "touching a hot stove is dangerous." Later in life, ey can draw on much of the knowledge accumulated by human civilization. This tunnels the exploration into safe channels, analogously to the role of the advisor in my recent posts.

  • One may say that the previous point only passes the recursive buck, since we can consider all of humanity to be the "agent". From this perspective, it seems that the universe just happens to be relatively safe, in the sense that it's pretty hard for an individual human to do something that will irreparably damage all of humanity... or at least this was the case during most of human history.

  • In addition, we have some useful instincts baked in by evolution (e.g. probably some notion of existing in a three-dimensional space with objects that interact mechanically). Again, you could zoom further out and say evolution works because it's hard to create a species that will wipe out all life.
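
A toy sketch of the exploration/advisor point (not from the original post; the environment, reward numbers, and the names run/SAFE_ACTIONS are all made up for illustration): unconstrained epsilon-greedy exploration occasionally takes an irreversible, catastrophic action, whereas exploration restricted to advisor-approved actions stays in safe channels.

    import random

    # Toy "hot stove" bandit (hypothetical numbers): action 0 is safe,
    # action 1 is catastrophic and, in the real world, irreversible.
    REWARDS = {0: 0.1, 1: -100.0}
    SAFE_ACTIONS = [0]  # the actions an "advisor" (parent / civilization) permits

    def run(episodes=1000, epsilon=0.1, advisor=False, seed=0):
        rng = random.Random(seed)
        q = {0: 0.0, 1: 0.0}          # estimated value of each action
        total = 0.0
        for _ in range(episodes):
            if rng.random() < epsilon:
                # Exploration step: either anything goes, or only advisor-approved actions.
                action = rng.choice(SAFE_ACTIONS if advisor else [0, 1])
            else:
                action = max(q, key=q.get)
            reward = REWARDS[action]
            q[action] += 0.1 * (reward - q[action])   # incremental value update
            total += reward
        return total

    print("unconstrained exploration:", run())
    print("advisor-constrained:      ", run(advisor=True))

In a real environment a single catastrophic exploration step can be unrecoverable, which is the sense in which the exploration problem already bites ordinary reinforcement learning; the advisor variant corresponds to tunneling exploration into safe channels.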

Typos on page 5:

  • "random explanation" should be "random exploration"
  • "Alpa" should be "Alpha"