I think there's some communication failure where people are very skeptical of this for reasons that they think are obvious given what they're saying, but which are not obvious to me. Can people tell me which subset of the below claims they agree with, if any? Also if you come up with slight variants that you agree with that would be appreciated.
[EDIT: 8. One should not issue a challenge to know everything a go bot knows without having a good definition of what it means for a go bot to know things.]
An even more pointed example: chess endgame tables. What does it mean to 'fully understand' them beyond understanding the algorithms which construct them, and is it a reasonable goal to attempt to play chess endgames as well as the tables do?
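For concreteness, the algorithm behind those tables is retrograde analysis: label the checkmated positions as lost and propagate win/loss values backwards through the game graph. Here is a minimal sketch over an abstract game graph, in which the position encoding, move generator, and terminal set are hypothetical stand-ins for real chess machinery:

```python
from collections import deque

def solve_endgame(successors, terminal_losses):
    """Retrograde analysis on an abstract game graph.

    successors: dict mapping every position (including terminal ones, which
                map to an empty list) to the positions reachable in one move;
                the side to move alternates implicitly.
    terminal_losses: positions where the side to move has already lost
                     (in chess terms, is checkmated).

    Returns {position: ("WIN" or "LOSS", distance in plies)} for the side to
    move; positions never labelled are draws under optimal play.
    """
    predecessors = {p: [] for p in successors}
    unresolved = {p: len(successors[p]) for p in successors}
    for p, succs in successors.items():
        for q in succs:
            predecessors[q].append(p)

    value = {p: ("LOSS", 0) for p in terminal_losses}
    queue = deque(terminal_losses)

    while queue:
        p = queue.popleft()
        result, dist = value[p]
        for q in predecessors[p]:
            if q in value:
                continue
            if result == "LOSS":
                # q can move into a position that is lost for the opponent,
                # so q is won.
                value[q] = ("WIN", dist + 1)
                queue.append(q)
            else:
                # One more of q's moves is known to lose; if every move from
                # q loses, q itself is lost.
                unresolved[q] -= 1
                if unresolved[q] == 0:
                    value[q] = ("LOSS", dist + 1)
                    queue.append(q)
    return value
```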
This sounds like a great goal, if you mean "know" in a lazy sense; I'm imagining a question-answering system that will correctly explain any game, move, position, or principle as the bot understands it. I don't believe I could know all at once everything that a good bot knows about go. That's too much knowledge.
You have to be able to know literally everything that the best go bot that you have access to knows about go.
In your mind, is this well-defined? Or are you thinking of a major part of the challenge as being to operationalize what this means?
(I don't know what it means.)
I think that it isn't clear what constitutes "fully understanding" an algorithm.
Say you pick something fairly simple, like a floating-point square root algorithm. What does it take to fully understand that?
You have to know what a square root is. Do you have to understand the maths behind Newton-Raphson iteration if the algorithm uses that? All the mathematical derivations, or just taking it as a mathematical fact that it works? Do you have to understand all the proofs about convergence rates? Or can you just go "yeah, 5 iterations seems to be enough"...
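For concreteness, a toy sketch of such an algorithm, with the fixed five-iteration budget mentioned above (this is illustrative; a production library square root is implemented quite differently):

```python
import math

def newton_sqrt(a, iterations=5):
    """Approximate sqrt(a) for a > 0 by Newton-Raphson iteration.

    Newton's method applied to f(x) = x**2 - a gives the update
    x <- (x + a/x) / 2. The initial guess uses the floating-point exponent
    of a, so that a small fixed number of iterations gets roughly to double
    precision -- the kind of fact one can either prove via convergence-rate
    analysis or simply take on trust.
    """
    mantissa, exponent = math.frexp(a)   # a == mantissa * 2**exponent
    x = 2.0 ** (exponent // 2)           # crude guess with the right order of magnitude
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x
```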
I'm a bit confused. What's the difference between "knowing everything that the best go bot knows" and "being able to play an even game against a go bot"? I think they're basically the same. It seems to me that you can't know everything the go bot knows without being able to beat any professional go player.
Or am I missing something?
I think at this point you've pushed the word "know" to a point where it's not very well-defined; I'd encourage you to try to restate the original post while tabooing that word.
This seems particularly valuable because there are some versions of "know" for which the goal of knowing everything a complex model knows seems wildly unmanageable (for example, trying to convert a human athlete's ingrained instincts into a set of propositions). So before people start trying to do what you suggested, it'd be good to explain why it's actually a realistic target.
To me a good definition for this is:
Get to a stage where you can write a computer program which can match the best AI at Go, where the program does no training (or equivalent) and you do no training (or equivalent) in the process of writing the software.
I.e., write a classical computer program that uses the techniques of the neural-network-based program to match it at Go.
I kind of do know everything the best go bot knows? For a given definition of "knows."
At its simplest: I know that the best move to make, given a board, is the one that leads to a victory board state, or, failing that, to the board state with the best chance of leading to a victory board state. Which is all a go program is doing.
Now, the program is able to evaluate those board states to a much greater search depth & breadth than I can, but that isn't a matter of knowledge, just the ability to implement that knowledge.
I wouldn't count the database of prior games as part of the go program, since I (or a different program) could also have access to that same database.
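A toy sketch of that simplified picture: depth-limited game-tree search, where the go-specific pieces (legal_moves, apply_move, evaluate) are hypothetical stand-ins:

```python
def best_move(state, depth, legal_moves, apply_move, evaluate):
    """Depth-limited negamax: pick the move whose resulting position is best
    for you, assuming the opponent then does the same, down to `depth` plies.

    legal_moves, apply_move, and evaluate are hypothetical stand-ins for a
    real go implementation; evaluate(s) scores position s from the point of
    view of the player to move in s.
    """
    def negamax(s, d):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)
        # A child position is scored from the opponent's perspective,
        # so negate it before taking the max.
        return max(-negamax(apply_move(s, m), d - 1) for m in moves)

    return max(legal_moves(state),
               key=lambda m: -negamax(apply_move(state, m), depth - 1))
```

Real go bots replace this exhaustive search with Monte Carlo tree search guided by a learned policy and value network, but the point about depth & breadth of evaluation is the same.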
This is an interesting direction to explore, but as is I don't have any idea what you mean by "understand the go bot", and I fear that figuring that out would itself require answering more than you want to ask.
For instance, what if I just memorize the source code? I can slowly apply each step on paper, and since the adversarial self-play training process involves no external training data or human expert input, if I know the rules of go I can, Chinese-room style, fully replicate the best go bot using my knowledge, given enough time.
But if that doesn't count, and you don't just mean be better than...
On a few different views, understanding the computation done by neural networks is crucial to building neural networks that constitute human-level artificial intelligence that doesn’t destroy all value in the universe. Given that many people are trying to build neural networks that constitute artificial general intelligence, it seems important to understand the computation in cutting-edge neural networks, and we basically do not.
So, how should we go from here to there? One way is to try hard to think about understanding, until you understand understanding well enough to reliably build understandable AGI. But that seems hard and abstract. A better path would be something more concrete.
Therefore, I set this challenge: know everything that the best go bot knows about go. At the moment, the best publicly available bot is KataGo; if you're at DeepMind or OpenAI and have access to a better go bot, I guess you should use that instead. If you think those bots are too hard to understand, you're allowed to make your own easier-to-understand bot, as long as it's the best.
What constitutes success?
Why do I think this is a good challenge?
Corollaries of success (non-exhaustive):
Drawbacks of success:
Related work:
A conversation with Nate Soares on a related topic probably helped inspire this post. Please don’t blame him if it’s dumb tho.