Comments

ReaderM · 1mo

Not really. The majority of your experiences and interactions are forgotten and discarded; the few that aren't are recalled when triggered by the right input, rather than sitting in your awareness at all times. Those memories are also modified at every recall.

And that's really beside the point anyway. However you want to spin it, evaluating that many positions is not necessary for backtracking or for playing chess. If that's the basis of your "impossible" rhetoric, it's a poor one.

ReaderM · 1mo

> You can call it a "gut claim" if that makes you feel better. But the actual reason is I did some very simple math (about the window size required, given quadratic scaling for transformers) and concluded that practically speaking it was impossible.

If you're talking about this:

> Now imagine trying to implement a serious backtracking algorithm. Stockfish checks millions of positions per turn of play. The attention window for your "backtracking transformer" is going to have to be at least {size of chess board state} * {number of positions evaluated}.
>
> And because of quadratic attention, training it is going to take on the order of {number of parameters} * ({chess board state size} * {number of positions evaluated})^2

then that's just irrelevant. You don't need to evaluate millions of positions to backtrack (unless you think humans don't backtrack) or play chess. 
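
To make that concrete, here is the back-of-envelope the quoted estimate is doing, next to the same formula with a human-scale search budget. Every constant below is an assumption picked purely for illustration:

```python
# Back-of-envelope for the quoted estimate: attention is quadratic in context
# length, so training cost ~ params * (board_state_tokens * positions)^2.
# Every constant here is an illustrative assumption, not a measurement.

PARAMS = 7e9              # assumed parameter count
BOARD_STATE_TOKENS = 70   # rough tokens to describe one position (e.g. a FEN string)

def training_cost(positions_evaluated: float) -> float:
    """The quoted formula: params * (context holding every evaluated position)^2."""
    context = BOARD_STATE_TOKENS * positions_evaluated
    return PARAMS * context ** 2

stockfish_like = training_cost(1e6)  # "millions of positions per turn"
human_like = training_cost(30)       # a strong human considers a few dozen lines

print(f"Stockfish-scale search: ~{stockfish_like:.1e}")
print(f"Human-scale search:     ~{human_like:.1e}")
print(f"ratio: ~{stockfish_like / human_like:.1e}x")
```

The quadratic term only blows up if you insist on holding millions of evaluated positions in the window at once, which is exactly the assumption in dispute.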

> My point was not that "a relatively simple architecture that contains a Transformer as the core" cannot solve problems via trial and error (in fact I think it's likely such an architecture exists). My point was that transformers alone cannot do so.

There's nothing the former can do that the latter can't. "Architecture" is really overselling it, but I couldn't think of a better word. It's just function calling.
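
To be clear about what I mean by "just function calling", here's a minimal sketch: the transformer only ever maps a transcript to a next action, and a plain outer loop runs the action and feeds the result back. `propose_move` is a stand-in for the actual model call; every name here is hypothetical.

```python
import random

def propose_move(history: list[str]) -> int:
    """Stand-in for the LLM function call: given the transcript so far,
    return the next guess. Here it's a trivial bisection heuristic."""
    low, high = 1, 100
    for line in history:
        guess = int(line.split()[1])
        if "too low" in line:
            low = max(low, guess + 1)
        elif "too high" in line:
            high = min(high, guess - 1)
    return (low + high) // 2

def play(secret: int, max_turns: int = 10) -> list[str]:
    """The whole 'architecture': call the model, run the tool, append feedback, repeat."""
    history: list[str] = []
    for _ in range(max_turns):
        guess = propose_move(history)
        if guess == secret:
            history.append(f"guess {guess} -> correct")
            break
        feedback = "too low" if guess < secret else "too high"
        history.append(f"guess {guess} -> {feedback}")
    return history

print(play(random.randint(1, 100)))
```

Swap the guessing game for Sudoku, chess, or an external API and the loop doesn't change; the trial and error lives in the loop plus the feedback, not in any exotic architecture.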

ReaderM · 1mo

> Have you never figured out something by yourself? The way I learned to do Sudoku was: I was given a book of Sudoku puzzles and told "have fun".

So, few-shot + scratchpad?

> I didn't say it was impossible to train an LLM to play chess. I said it was impossible for an LLM to teach itself to play a game of similar difficulty to chess if that game is not in its training data.

More gut claims. 

> What they do not do is teach themselves things that aren't in their training data via trial and error. Which is the primary way humans learn things.

Setting up an architecture that lets a pretrained LLM trial-and-error its way through whatever you want is relatively trivial. The current state of the art isn't that competent, but the backbone for this sort of work is there. Sudoku and Game of 24 solve rates are much higher with Tree of Thoughts, for instance. There's similar work for Minecraft too.
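
For a sense of what Tree of Thoughts does structurally, here's a rough sketch of the propose / evaluate / backtrack skeleton for Game of 24, with brute-force stand-ins where the paper makes LLM calls. Names and structure are illustrative, not the paper's implementation.

```python
from itertools import permutations

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if abs(b) > 1e-9 else None,
}

def propose(state):
    """In Tree of Thoughts this is an LLM call proposing next steps;
    here we enumerate every way to combine two remaining numbers."""
    nums, steps = state
    for (i, a), (j, b) in permutations(list(enumerate(nums)), 2):
        rest = [n for k, n in enumerate(nums) if k not in (i, j)]
        for sym, op in OPS.items():
            val = op(a, b)
            if val is not None:
                yield (rest + [val], steps + [f"{a:g} {sym} {b:g} = {val:g}"])

def plausible(state) -> bool:
    """Stand-in for the ToT evaluation call: prune branches that have
    clearly drifted too far to ever reach 24."""
    nums, _ = state
    return all(abs(n) < 1000 for n in nums)

def solve(nums, target=24.0):
    """Depth-first search with backtracking over the proposal tree."""
    stack = [([float(n) for n in nums], [])]
    while stack:
        state = stack.pop()
        remaining, steps = state
        if len(remaining) == 1 and abs(remaining[0] - target) < 1e-6:
            return steps
        stack.extend(s for s in propose(state) if plausible(s))
    return None

print(solve([4, 9, 10, 13]))  # one known solution: (10 - 4) * (13 - 9) = 24
```

The search scaffolding is ordinary; the LLM just fills the propose and evaluate slots.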

ReaderM · 1mo

> Sure. 4000 words (~8000 tokens) to do a 9-state 9-turn game with the entire strategy written out by a human.

Ok? That's how you teach anybody anything. 

> Now extrapolate that to chess, go, or any serious game.

LLMs can play chess and poker just fine. GPT-3.5-turbo-instruct plays at about 1800 Elo, consistently making legal moves: https://github.com/adamkarvonen/chess_gpt_eval

Then there's this grandmaster-level chess transformer: https://arxiv.org/abs/2402.04494

Poker: https://arxiv.org/abs/2308.12466
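
For context, evals like the first link work roughly like this: hand the model the game so far, take its completion as the next move, and check legality with a chess library. A minimal sketch with the model call stubbed out (not the repo's actual code):

```python
import random
import chess  # pip install python-chess

def complete(moves_so_far: list[str]) -> str:
    """Stand-in for the completion-model call (e.g. prompting with the PGN so
    far and reading off the next move). Here: a random legal move in SAN."""
    board = chess.Board()
    for san in moves_so_far:
        board.push_san(san)
    return board.san(random.choice(list(board.legal_moves)))

def play_game(max_plies: int = 60) -> tuple[list[str], int]:
    """Alternate model moves, counting how many suggestions were illegal."""
    board = chess.Board()
    moves: list[str] = []
    illegal = 0
    while not board.is_game_over() and len(moves) < max_plies:
        suggestion = complete(moves)
        try:
            board.push_san(suggestion)
            moves.append(suggestion)
        except ValueError:   # illegal or unparsable move from the model
            illegal += 1
            break
    return moves, illegal

print(play_game())
```

The Elo and illegal-move numbers above come from running this kind of loop against the real model.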

> And this doesn't address at all my actual point, which is that Transformers cannot teach themselves to play a game.

Oh, so you wrote or can provide a paper proving this, or...?

This is kind of the problem with a lot of these discussions: wild confidence in ability estimates that are ultimately just gut feeling. You said GPT-4 couldn't play tic-tac-toe. Well, it can. You said it would be impossible to train a chess-playing model this century. Already done.

Now you're saying Transformers can't "teach themselves to play a game". There is zero theoretical justification for that stance.

ReaderM · 1mo

GPT-4 can play tic-tac-toe:

https://chat.openai.com/share/75758e5e-d228-420f-9138-7bff47f2e12d

ReaderM · 5mo

Not sure what you mean by 100 percent accuracy, and you probably already know this, but GPT-3.5 Turbo Instruct plays chess at about 1800 Elo, fulfilling your constraints (with about 5 illegal moves, potentially fewer, out of 8,205): https://github.com/adamkarvonen/chess_gpt_eval

ReaderM · 5mo

They can compute a state prior to each generated token, and they can choose a token that signals a preservation of that state.

ReaderM · 5mo

They had access to, and tested, the base (non-RLHF'd) model. It doesn't change much: the RLHF'd version has slightly higher misalignment and deception rates (which is a bit notable), but otherwise similar behavior.

ReaderM · 5mo

Optimal tic-tac-toe takes explaining the game in excruciating detail: https://chat.openai.com/share/75758e5e-d228-420f-9138-7bff47f2e12d

ReaderM · 5mo

Optimal play requires explaining the game in detail. See here:

https://chat.openai.com/share/75758e5e-d228-420f-9138-7bff47f2e12d
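
For scale: the "entire strategy" being spelled out in those transcripts is a small object; optimal tic-tac-toe fits in a short minimax routine. A minimal sketch of the task's size, not of what GPT-4 does internally:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b: str):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def minimax(b: str, player: str):
    """Best (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(b)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                      # board full: draw
    other = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        score, _ = minimax(b[:m] + player + b[m + 1:], other)
        best = max(best, (-score, m))       # opponent's loss is our gain
    return best

score, move = minimax(" " * 9, "X")
print(f"optimal opening for X: square {move}, expected result {score}")
```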
