I think for long-term coherence one typically needs specialized scaffolding.
Here is an example: https://www.lesswrong.com/posts/7FjgMLbqS6Z6yYKau/recurrentgpt-a-loom-type-tool-with-a-twist
Basically, one wants to accumulate some kind of “state of the virtual world in question” as a memory while the story unfolds. Although I can imagine that if models start having “true long context” (i.e. long context without recall deterioration), and if that context is long enough to include the whole story, this might become unnecessary. So one might want to watch for the emergence of such models (I think we are finally starting to see some tangible progress in this sense).
Thanks for your comment. I took a look at your example, but I'd say it addresses a different issue: constrained output tokens, not ingestion of input tokens. I also want to avoid scaffolding approaches since I'm zero-shotting; I don't want to use a chained series of prompts or chunking, I want to submit a single prompt.
I'm looking for any techniques along the lines of including an index of the prompt's sections (like a book's table of contents) plus some character strings that differentiate the prompt's sections. Here's an example o...
Per https://eightyonekilograms.tumblr.com/post/772774450949177344/i-work-at-google-yes-this-is-basically-correct , long LLM context windows are basically just short windows extended with imperfect hacks, so the loss of coherence is probably hard to avoid.
Here's Replit CEO Amjad Masad confirming what I've seen (timestamp: 36:45): "After 32k tokens, reasoning and a lot of benchmarks tank."
I see a lot of marketing about "max input tokens" being in the hundreds of thousands or millions of tokens, but I have a theory that this only works with simple prompts like "summarise this data, here is the data...".
If you have a prompt without a strong directive, made up of sections of roughly equal size, you lose coherence much faster. With my prompts, Gemini 1.5 Pro lost coherence at 25k input tokens, and Gemini 2.5 Pro loses coherence at 35k.
Imagine you're trying to construct a "thought" for an LLM, where actions are extracted from the response and run. The prompt has sections like:
etc etc
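As a rough sketch of the index-plus-delimiters structure I have in mind (the section names and delimiter format here are just placeholders, not what I'm actually using):

```python
# Rough sketch: a prompt with an index of its sections up front, plus
# delimiter strings around each section. Section names and delimiters
# are placeholders.

SECTIONS = {
    "DIRECTIVE": "You are managing a simulated world. Decide the next actions.",
    "WORLD_STATE": "...current state of the world...",
    "PREVIOUS_OUTCOMES": "...outcomes of the last run's actions...",
}

def build_prompt(sections: dict[str, str]) -> str:
    # Index first, like a book's table of contents, so the model knows what follows.
    index = "INDEX OF SECTIONS:\n" + "\n".join(
        f"{i + 1}. {name}" for i, name in enumerate(sections)
    )
    # Wrap each section in unambiguous delimiter strings.
    body = "\n\n".join(
        f"===== BEGIN {name} =====\n{text}\n===== END {name} ====="
        for name, text in sections.items()
    )
    return index + "\n\n" + body

print(build_prompt(SECTIONS))
```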
Here's a visual of how I'm constructing prompts and running into this problem. There are two charts showing two kinds of prompts: one that is simple, and one with many distinct prompt sections. Each bar segment represents a "prompt section", and the full bar is the total input tokens the prompt uses:
Imagine that the directive comes first, then the data in the prompt - I couldn't wrangle Office charts quickly this time.
Is there any research in this space that I can read or watch?
What I'm working on is dynamically constructing prompts where the LLM's response is parsed into actions, and on the next run the LLM receives the actions' outcomes in the prompt. I'm having fun thinking about how we can construct a thought for an LLM; it's kind of like making a zero-shot self-improving system.
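A rough sketch of that loop, where `call_llm`, the `ACTION:` line format, and the executor are all placeholders rather than my actual implementation:

```python
import re

def call_llm(prompt: str) -> str:
    # Stand-in for a single zero-shot call to whatever model/API is used.
    return "ACTION: inspect inventory\nACTION: move north"

def parse_actions(response: str) -> list[str]:
    # Placeholder parser: assume the model emits lines like "ACTION: <command>".
    return re.findall(r"^ACTION:\s*(.+)$", response, flags=re.MULTILINE)

def execute(action: str) -> str:
    # Placeholder executor: run the action and return a textual outcome.
    return f"outcome of '{action}'"

outcomes: list[str] = []
for run in range(3):
    # Each run is a single zero-shot prompt; the only state carried forward
    # is the previous run's action outcomes, included as a prompt section.
    prompt = (
        "===== BEGIN DIRECTIVE =====\nDecide the next actions.\n===== END DIRECTIVE =====\n\n"
        "===== BEGIN PREVIOUS_OUTCOMES =====\n"
        + ("\n".join(outcomes) or "none yet")
        + "\n===== END PREVIOUS_OUTCOMES ====="
    )
    response = call_llm(prompt)
    outcomes = [execute(a) for a in parse_actions(response)]
```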
I have a dynamically assembled prompt with 16 sections that is working amazingly well, but are there any traps I should know about as I expand and enhance it?
For example, adding a predictions section and a section for goals that the LLM maintains made a huge difference in what was achieved.
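To illustrate how those LLM-maintained sections carry over between runs (the `<<NAME>> ... <</NAME>>` block format and parsing here are placeholders, not my actual format):

```python
import re

# Sections the LLM itself maintains; each run's prompt includes the current
# text, and the model's updated text is parsed out and carried forward.
state = {"GOALS": "none yet", "PREDICTIONS": "none yet"}

def extract_block(response: str, name: str) -> str:
    # Assumes the model is instructed to emit "<<NAME>> ... <</NAME>>" blocks.
    match = re.search(rf"<<{name}>>(.*?)<</{name}>>", response, flags=re.DOTALL)
    return match.group(1).strip() if match else ""

def update_state(response: str) -> None:
    for name in state:
        updated = extract_block(response, name)
        if updated:
            state[name] = updated

# Example: fold the model's updated sections back in after a run.
update_state("<<GOALS>>map the cave system<</GOALS>>\n"
             "<<PREDICTIONS>>the north passage is blocked<</PREDICTIONS>>")
print(state)
```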
I want to avoid a multi-agent AI system architecture here; I'm interested in zero-shot prompting.