How’s this for the next narrow-AI challenge: a program that can play the next Magic: The Gathering set without altering its own code.

Clarification: The next M:tG set is the new Un-set, built around goofy jokes and tomfoolery. While we don't have a full card list, the last such set had things like cards that gained bonuses for doing the hokey pokey. If an AI can figure out how to play that well, I'll be much more impressed than I would be with any other set. Are we counting Unstable, or assuming a more normal expansion?

Either would be impressive, and I agree that Unstable would be even more so. (I didn't know the new set was an Un-set.)

define "altering its own code" very precisely. is it allowed to have internal state on which it branches? what is its code? how much state can this program have? note that there's no clear distinction between code and data in modern computers; there's a weak one in the form of the distinction between the cpu code and the stack/heap, but frequently you're running an interpreter that has interesting code on the stack/heap, and it's pretty easy to blend between these. I would classify neural networks as being initially-random programs implemented in a continuous vm. are they programs that alter "their own" code? I would only say that they alter their own code if metalearning is used.

If that happened, how much closer would you think AGI is?