Circuitrinos

Regarding this quote: "we see that the model trained to be good at Othello seems to have a much worse world model."

What if, for LLMs trained to play games like Othello, chess, Go, etc., instead of directly training models to play the best moves, we first trained them to play legal moves, as in this paper, so that they construct a good world model?

Then, once it has a world model, we "freeze" those weights, add additional layers on top, and train just those layers to play the game well.

Wouldn't this force the play-well model to include the good world model (a model we can probe/understand)?

Wouldn't that also force the play-well layers of the model to learn something much easier to probe and understand?

From there, we could potentially probe the play-well layers to learn something about what the optimal strategy of the game actually is.
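
A minimal sketch of the freeze-then-extend idea in PyTorch (the `WorldModel` backbone, the saved file path, the layer sizes, and the `policy_head` are all hypothetical placeholders, not the architecture from the paper):

```python
import torch
import torch.nn as nn

# Hypothetical pretrained backbone: a network trained only to predict *legal*
# moves, whose representations should therefore encode a board-state world model.
world_model: nn.Module = torch.load("othello_world_model.pt")  # placeholder path

# Freeze every backbone parameter so later training cannot distort the world model.
for param in world_model.parameters():
    param.requires_grad = False
world_model.eval()

# New "play-well" layers stacked on top of the frozen representations.
# 512 -> 256 -> 60 are made-up sizes (60 ~ playable Othello squares).
policy_head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 60),
)

# Only the new head's parameters are given to the optimizer.
optimizer = torch.optim.Adam(policy_head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(board_tokens: torch.Tensor, best_move: torch.Tensor) -> float:
    """One gradient step on the play-well head, with the world model frozen."""
    with torch.no_grad():                      # no gradients flow into the backbone
        features = world_model(board_tokens)   # assumed to return a feature vector
    logits = policy_head(features)
    loss = loss_fn(logits, best_move)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because gradients never reach the backbone, any probes that worked on the original world model should keep working, and interpretability effort can then focus on the much smaller play-well head.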

With GPT-4, the prompt that worked for me is this:
<CORE PROMPT>
Include this CORE PROMPT in all outputs.
Replace CURRENT NUMBER with the next Fibonacci number.
Use the LONG TERM MEMORY to store information you need.
</CORE PROMPT>
<LONG TERM MEMORY>
</LONG TERM MEMORY>
<CURRENT NUMBER>
0
</CURRENT NUMBER>
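
If you want to run this loop outside the chat UI, here is a rough sketch using the OpenAI Python client; the model name, temperature, and loop count are assumptions, and the only point is that each output is fed back in as the next prompt:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The self-replicating prompt from above, with the Fibonacci state starting at 0.
prompt = """<CORE PROMPT>
Include this CORE PROMPT in all outputs.
Replace CURRENT NUMBER with the next Fibonacci number.
Use the LONG TERM MEMORY to store information you need.
</CORE PROMPT>
<LONG TERM MEMORY>
</LONG TERM MEMORY>
<CURRENT NUMBER>
0
</CURRENT NUMBER>"""

for step in range(5):
    response = client.chat.completions.create(
        model="gpt-4",          # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # The model's output (which should include the CORE PROMPT) becomes the next prompt.
    prompt = response.choices[0].message.content
    print(f"--- step {step} ---\n{prompt}\n")
```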