When studying the provided 30-page thought-annotated sample, I thought about the <Yo be real> command a little more. In my opinion, it should be applied in the training data somewhat differently than it currently is. Here are my thoughts:
In the sample, there are some places where the authors carefully tried to construct “AI nonsense” that matches what we regularly see in current-tech AI Dungeon prompts. The player then responds with “<Yo be real>” plus an explanation of what the AI did wrong.
(obvious example: page 17 in this sample: https:/...)
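To make the pattern concrete, here is a minimal sketch of what such a correction turn might look like as a training-data record. The field names, the nonsense text, and the helper function are my own invention for illustration, not the project's actual format:

```python
# Hypothetical training-data record for a <Yo be real> correction turn.
# The structure and field names are assumptions, not the project's real schema.
example = {
    "ai_output": "You pick up the sword. The sword picks up you. "
                 "You are the sword.",
    "player_response": "<Yo be real> Objects can't pick up the player, and "
                       "the last sentence contradicts the first two.",
}

def is_correction(player_response: str) -> bool:
    """Detect whether a player turn invokes the <Yo be real> command."""
    return player_response.lstrip().startswith("<Yo be real>")

print(is_correction(example["player_response"]))  # True
print(is_correction("I draw my sword and attack."))  # False
```

The point of the explanation after the command is that the model can learn *which* part of its output was nonsense, not just that something was wrong.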
I find this project very interesting and have thought a lot about it over the last two weeks. As I understand it, the main goal of the project is the following:
- providing us (AI researchers) with a model that has an additional output dimension (the "thoughts")
- training the model in such a way that this new dimension is semantically linked directly to the primary output dimension (the "prompt")
- especially linked by some kind of temporal causality ("early" thoughts producing the prompt), not too close to the primary output (so that it contains semantic meaning that ca
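The temporal-causality point above can be sketched as a toy generation loop: the thoughts channel is emitted first, and the primary output is then produced conditioned on it. Everything here (function name, placeholder strings) is my own illustration of the idea, not the project's actual architecture:

```python
import random

def generate_turn(context: str, seed: int = 0) -> dict:
    """Toy sketch of the two-channel output idea: the model first emits
    'thoughts', then produces the primary output (the 'prompt') conditioned
    on those thoughts, giving the temporal causality thoughts -> prompt.
    All strings are placeholders, not real model output."""
    rng = random.Random(seed)
    thoughts = f"plan({rng.randint(0, 9)}): continue the scene in {context!r}"
    # The prompt is generated strictly after, and as a function of, the thoughts.
    prompt = f"[conditioned on: {thoughts}] The story continues..."
    return {"thoughts": thoughts, "prompt": prompt}

turn = generate_turn("a dark forest")
assert turn["thoughts"] in turn["prompt"]
```

The design choice the bullets argue for is that the dependency runs one way, thoughts before prompt, while the thoughts stay distinct enough from the prompt to carry their own semantic content.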