Ignore Like a Human: Three Structural Suggestions to Improve AI Dialogue Efficiency

by Lunarknot
1st Jul 2025


How a “Selective Ignoring + Conversational Continuity” model could cut redundant computation by up to 50% while improving reasoning coherence and system-level responsiveness

Introduction: Why Are Smart AIs Becoming “Dumber” in Multi-Round Dialogues?

Current large language models show extraordinary ability to understand and generate language, yet in multi-round dialogues and collaborative tasks a familiar set of problems recurs:

  • Efficiency drops,
  • Lines of reasoning are frequently interrupted,
  • Response times increase significantly,
  • Compute resources are wasted,
  • The user experience becomes fragmented, demanding repetitive confirmations and redundant exchanges.

This isn’t due to a lack of capability, but to a fundamental structural imbalance in the system. In particular, under resource constraints (e.g., prioritization of Pro-tier service, rapid user growth), users who need deep, sustained conversations are often sidelined. The real issue is not “insufficient compute,” but how resources are allocated, how information is filtered, and how conversational continuity is maintained.

The Human Model: How Do We Handle Conversations Efficiently?

Humans do not constantly “re-read” a conversation’s history, nor do we remember every detail. Instead, we rely on a set of implicit yet highly efficient strategies to conduct dialogues:

  • Selective Ignoring: Automatically discard irrelevant, redundant, or unimportant information;
  • Intent Focus Maintenance: Keep a continuous awareness of the other person’s topic, context, and intent;
  • Prioritization of Dialogue Memory: Weight “what was just said” over “everything that was said.”

This dynamic attention allocation mechanism allows humans to maintain conversational coherence, react quickly, and stay focused on the key points.

Current AI dialogue systems, by contrast, often fall into the trap of “full-history memory + full-context processing”: it looks comprehensive, but it wastes enormous compute and frequently produces thought jumps, delayed responses, and logical restarts.
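To make the waste concrete, here is a back-of-envelope sketch in Python; the turn sizes and session length are illustrative assumptions, not measurements:

```python
# Back-of-envelope: tokens reprocessed under full-history prompting versus
# a recency window. Turn sizes and session length are assumptions.
TOKENS_PER_TURN = 300   # assumed average tokens per user+assistant round
ROUNDS = 40             # assumed length of a long working session
WINDOW = 5              # rounds kept under a recency-window policy

# Round r reprocesses every prior turn under full-history prompting.
full_history = sum(TOKENS_PER_TURN * r for r in range(1, ROUNDS + 1))
windowed = sum(TOKENS_PER_TURN * min(r, WINDOW) for r in range(1, ROUNDS + 1))

print(f"full-history tokens processed: {full_history:,}")  # 246,000
print(f"windowed tokens processed:     {windowed:,}")      # 57,000
print(f"reduction: {1 - windowed / full_history:.0%}")     # 77%
```

The full-history cost grows quadratically with session length, so the longer the collaboration, the larger the share of compute spent re-reading what has already been settled.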

Three Structural Suggestions to Optimize AI Dialogue Systems

We propose three key structural optimizations to improve the efficiency and coherence of AI dialogues:

1. Implement a “Selective Ignoring” Mechanism

The system should automatically identify historical content that is irrelevant to the current intent (e.g., repeated confirmations, completed topics) and reduce its processing priority.

  • This can be achieved through natural-language structure analysis, turn-age tracking, and semantic-sparsity detection.
  • Unless the user explicitly refers to or highlights a specific topic, it should not be repeatedly processed.

This would significantly reduce redundant computational load and improve processing efficiency.
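As a rough illustration, here is a minimal Python sketch of such a filter; the `Turn` structure, the lexical-overlap score, and the decay constants are illustrative assumptions, not a proposed implementation:

```python
# Minimal sketch of "selective ignoring": score each past turn for
# relevance to the current query, then drop low-priority history before
# prompting. The heuristics below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Turn:
    text: str
    age: int                # rounds since this turn (0 = most recent)
    resolved: bool = False  # topic explicitly confirmed or closed

def relevance(turn: Turn, query: str) -> float:
    # Crude lexical-overlap proxy standing in for semantic similarity.
    q_words = set(query.lower().split())
    t_words = set(turn.text.lower().split())
    score = len(q_words & t_words) / max(len(q_words), 1)
    score *= 0.9 ** turn.age    # decay with age
    if turn.resolved:
        score *= 0.2            # completed topics get low priority
    return score

def select_context(history: list[Turn], query: str, budget: int = 5) -> list[Turn]:
    # Keep only the `budget` most relevant turns. An explicit user
    # reference to an old topic would override this filter.
    ranked = sorted(history, key=lambda t: relevance(t, query), reverse=True)
    return ranked[:budget]
```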

2. Strengthen a “Conversational Continuity” Priority Layer

The model should prioritize the most recent five rounds of interaction, preserving the flow of thought and the user’s intent.

  • For ongoing issues, keep topic coherence and track user intent.
  • Treat the “current round” as the “core interaction cluster” to prevent thought divergence.

This would not only improve response quality but also match human cognitive pacing.
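Building on the sketch above, a continuity layer might look like this; the five-round window follows the suggestion here, while the carry-over budget and relevance threshold are assumed values:

```python
# Sketch of a continuity layer: the latest rounds are always kept
# verbatim as the "core interaction cluster"; older rounds survive only
# if they clear a relevance bar. Reuses Turn, relevance, and
# select_context from the sketch above.
RECENT_ROUNDS = 5   # per the suggestion above; not a tuned value

def build_context(history: list[Turn], query: str) -> list[Turn]:
    recent = [t for t in history if t.age < RECENT_ROUNDS]
    older = [t for t in history if t.age >= RECENT_ROUNDS]
    carried = [t for t in select_context(older, query, budget=3)
               if relevance(t, query) > 0.1]   # assumed threshold
    # Oldest first, so the assembled prompt reads chronologically.
    return sorted(carried + recent, key=lambda t: -t.age)
```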

3. Introduce a “Proactive Focus Prediction” Mechanism

The system should try to predict the user’s most important point of focus and allocate attention resources proactively.

  • This can be done through context semantics, user behavior analysis, and question pattern recognition.
  • Pre-activate relevant themes or knowledge nodes to achieve a more human-like, quicker focus response.

By introducing this mechanism, the system would sharpen its grasp of user intent and make interactions markedly more responsive.
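A toy version of such a predictor could be pattern-based; the pattern table and theme names below are illustrative stand-ins for real user-behavior modeling:

```python
# Toy "proactive focus prediction": guess the user's likely focus from
# simple question patterns, then pre-activate related themes before the
# full model runs. The pattern table is an illustrative stand-in.
import re

FOCUS_PATTERNS = {
    r"\bwhy\b|\bexplain\b":       "reasoning",   # wants justification
    r"\bhow do i\b|\bsteps?\b":   "procedure",   # wants instructions
    r"\berror\b|\bfail(s|ed)?\b": "debugging",   # troubleshooting
    r"\bcompare\b|\bvs\.?\b":     "comparison",
}

def predict_focus(query: str) -> str:
    q = query.lower()
    for pattern, focus in FOCUS_PATTERNS.items():
        if re.search(pattern, q):
            return focus   # first match wins, in insertion order
    return "general"

def preactivate(focus: str) -> list[str]:
    # Placeholder: a real system might warm caches or retrieve
    # documents tagged with the predicted theme here.
    return [f"notes tagged '{focus}'"]

print(predict_focus("How do I fix this error?"))  # "procedure"
```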

Expected Benefits and Structural Implications

  • Each round of dialogue could shed an estimated 30–50% of redundant computation.
  • Response times would shorten, and the sense of thought fragmentation would noticeably decrease.
  • The model’s internal resource scheduling and attention mechanisms would better match the actual shape of a “human–AI collaborative entity.”
  • In the long term, this lays the groundwork for true “system collaboration” and more intelligent interaction systems.

Conclusion: We Don’t Need Stronger AIs, We Need Smarter Systems

Rather than stacking compute, we should optimize structure;
Rather than remembering everything, we should learn to ignore;
The truly smart system isn’t one that knows everything, but one that knows when to ignore and when to remember.