Quick (working) survey of the literature on encoded reasoning and messages, i.e., H2.2.2 as defined in the taxonomy (see the abbreviations there).
2022
No experiments. Discussions only.
2023
Measuring Faithfulness in Chain‑of‑Thought Reasoning No steganographic reasoning was observed when prompting a 175‑billion‑parameter model (Claude?) after paraphrasing 20–100% of the CoT. Question–CoT–Answer datasets.
Preventing Language Models From Hiding Their Reasoning
Towards a Steganography Evaluation Protocol
Some Negative Steganography Results
Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain‑of‑Thought Prompting Post‑hoc reasoning (rationalization).
AI Control: Improving Safety Despite Intentional Subversion Not steganography but a demonstration of misbehavior because their tokens are legible (e.g., while or for loops, variable names in output).
2024
Secret Collusion among Generative AI Agents
Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models
Hidden in Plain Text: Emergence & Mitigation of Steganographic Collusion in LLMs
The Steganographic Potentials of Language Models
2025
MONA: Myopic Optimization with Non‑myopic Approval Can Mitigate Multi‑step Reward Hacking
Do Reasoning Models Use Their Scratchpad Like We Do? Evidence from Distilling Paraphrases
Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation
Large Language Models Can Learn and Generalize Steganographic Chain‑of‑Thought under Process Supervision and post Can LLMs Learn Steganographic Reasoning via RL?
Evaluating Frontier Models for Stealth and Situational Awareness
Early Signs of Steganographic Capabilities in Frontier LLMs
When Chain of Thought Is Necessary, Language Models Struggle to Evade Monitors
CoT May Be Highly Informative Despite “Unfaithfulness”
If You Can Generate Obfuscated Chain‑of‑Thought, Can You Monitor It?
. Training Fails to Elicit Subtle Reasoning in Current Language Models
. Training Qwen‑1.5B with a CoT Legibility Penalty
. All Code, No Thought: Current Language Models Struggle to Reason in Ciphered Language
Bottom line: Currently, as I understand it, there is good evidence that models are capable of steganographic and ciphered communication (message sending) if prompted, but that this can be easily prevented by CoT monitors. There is strong evidence that current models struggle to reason in ciphered language, and weak evidence that models cannot reason steganographically even when prompted. Several studies show that models can evade weaker monitors through non‑steganographic or cryptographic means such as gaslighting or dog‑whistling (one‑bit “steganography”), but such behavior can be mitigated.
The question of whether LLMs are a dead end, as discussed by R. Sutton, Y. LeCun, and T. Ord, among many others, is hundreds of years old. Currently, we see that the chance of an LLM agent failing a task rises with the number of steps taken. This was observed even before the era of LLMs, when agents were trained with imitation learning. The crux is whether further training of LLMs leads to the completion of longer tasks or if these agents hit a wall. Do LLMs indeed build a real-world model that allows them to take the right actions in long time horizon tasks? Yet, models might only build a model from predicting what humans would say as their next token. Then the question is whether humans possess the necessary knowledge.
Questions like "what is knowledge, and do we have it?" are hundreds of years old. Aristotle wrote that the basis for every statement, and thus reasoning and thinking (1005b5), is that "A or not A" (the law of identity). In other words, it is impossible for something to be and not be what it is simultaneously. This is the beginning of all reasoning. This law opposes the view of sophists like Protagoras, who claimed that what we sense or our opinions constitute knowledge. Sophists held that something can both be and not be what it is at the same time, or "everything flows" (panta rhei, Heraclitus). Plato and Aristotle opposed this view. The law of identity suggests that ground truth is essential for correct reasoning and action. And it’s not about mathematical problems where LLMs show impressive results; it’s about reasoning and acting in the real world. So far, LLMs are taught mainly based on predicting what people would say next—opinions rather than real-world experience.
Why are LLMs trained on opinions? Their pre-training corpus is over 99% composed of people’s opinions and not real-world experience. The entire history of knowledge is a struggle to find the truth and overcome falsehoods and fallacies. The artifacts remaining from this struggle are filled with false beliefs. Even our best thinkers were wrong in some sense, like Aristotle, who believed slavery wasn’t a bad thing (see his Politics). We train LLMs not only on the artifacts from our best thinkers but, in 99.95% of cases, on web crawls, social media, and code. The largest bulk of compute is spent on pre-training, not on post-training for real-world tasks. Whether the data is mostly false or can serve as a good foundation for training on real-world tasks remains an open question. Can a model trained to predict opinions without real experience behave correctly? This is what reinforcement learning addresses.
Reinforcement learning involves learning from experience in the search for something good. Plato depicted this beautifully in his allegory of the cave, where a seeker finds truth on the path to the Sun. A real-world model is built from seeking something good. The current standard model of an intelligent agent reflects what Aristotle described about human nature: conscious decisions, behavior, and reasoning to achieve good (Nicomachean Ethics, 1139a30). LLMs are mostly trained on predicting the next token, not achieving something good. Perhaps Moravec's paradox results from this training; models don’t possess the general knowledge or reasoning. General reasoning might be required to build economically impactful agents. General reasoning is the thinking using the real world knowledge in novel situations. Will models learn it someday?
We train LLMs not only on the artifacts from our best thinkers but, in 99.95% of cases, on web crawls, social media, and code.
Concluding from a paper that says in it's abstract "commercial models rarely detail their data" that you know what makes up 99.95% of cases of training data, is a huge reasoning mistake.
Given public communication it's also pretty clear that synthetic data is more than 0.05% of the data. Elon Musk already speaks about training a model that's 100% synthetic data.
I agree that commercial models don't detail their data, the point is to have an estimate. I guess, Soldaini et al., ‘Dolma’, made their best to collect the data, and we can assume commercial models have similar sources.
I agree that commercial models don't detail their data, the point is to have an estimate.
That's "I searched the key's under the streetlight". The keys are not under the streetlight.
I guess, Soldaini et al., ‘Dolma’, made their best to collect the data, and we can assume commercial models have similar sources.
Soldaini et al have far less capital to collect data than the big companies building models. On the other hand the big model companies can pay billions for their data. This means that they can license data sources that Soldaini et al can't. It also means that they can spend a lot of capital on synthetic data.
Soldaini et al does not include libgen/Anna's Archive but it's likely that all of the big companies besides Google that has their own scans of all books that they use do. Antrophic paid out over a billion in the settlement for that copyright violation.
Even outside of paying for data and just using pirated data, the big companies have a lot of usage data. The most common example for syncopancy in AI models is that it's due to the models optimizing for users clicking thumbs-up.
Except that LLMs do possess at least general knowledge which any model obtained from all the texts that it has read. And their CoT is already likely the way to do some reasoning. In order to assess whether SOTA models have actually learned reasoning, it might be useful to consider benchmarks like ARC-AGI. As I detailed in my quick take, nearly every big model was tested on the benchmark and it looks as if there exist scaling laws, significantly slowing progress down as the cost increases beyond a threshold.
Not sure I understand if you disagree or agree with something. The point of the post above was that LLMs might stop showing the growth as we see now (Kwa et al., ‘Measuring AI Ability to Complete Long Tasks’), not that there is no LLM reasoning at all, general or not.
I guess that you should have done something like my attempt to analyse three benchmarks and their potential scaling laws. The SOTA Pareto frontier of LLMs on the ARC-AGI-1 benchmark and the straight line observed on the ARC-AGI-2 benchmark in high-cost mode could imply that the CoT architecture has its limitations ensuring that using CoT-based LLMs for working with large codebases or generating novel ideas is likely impractical.
I made a conjecture that this is due to problems with attention span where ideas are hard to fit. And another conjecture is that researchers might find out that it's easy to give neuralese models big internal memory.
See also Tomas B.'s experiment with asking Claude Sonnet 4.5 to write fiction and Random Developer's comment about Claude's unoriginal plot and ideas.
What exactly should we search for when investigating encoded reasoning or hidden messages? This post categorizes encoded reasoning and LM steganography.
In Hidden Reasoning in LLMs: A Taxonomy, the authors divide the area from the point of view of a monitor, following the recent shift away from the "faithfulness" of chains of thought toward monitorability (see Chain of Thought Monitorability). This taxonomy essentially proposes a naming scheme based on how dangerously reasoning tokens are externalized: from fully invisible (neuralese), to partially visible, to fully visible but encoded—overtly or covertly—and finally to unencoded and fully visible reasoning that merely fools a monitor (post-hoc justifications, etc.). You can imagine this division through the classic prisoners problem, where Alice and Bob devise an escape plan while Eve reads their messages. In this scenario, Alice and Bob can exchange their plan directly so Eve has no chance to read it, send messages that Eve can read, or fool Eve in some way.
However, this taxonomy omits important distinctions in encoded reasoning and sometimes conflates different forms of steganography. Specifically, it misses the class of steganography that operates not on a per-token basis but over larger texts (which differs from lexical or semantic steganography as per the taxonomy). Current watermarking / steganography algorithms (e.g. Scalable watermarking for identifying large language model outputs ) suggest that realistic steganographic capacity is low—i.e., a steganography algorithm might need many tokens (hundreds) to encode a single bit. The taxonomy also provides no separation between reasoning and messages. It focuses only on the attack vector (the safety monitor), but there are also defense properties to consider: stego security, stego capacity, and stego robustness.
Below is another taxonomy that targets encoded reasoning (or messages)—not neuralese phenomena or the attempts to fool a monitor. Encoded reasoning is, I believe, more likely to be developed by models because, as Chain of Thought Monitorability argues, we should neither develop continuous chains of thought (see Training Large Language Models to Reason in a Continuous Latent Space) nor train directly against monitors (see Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation).
It is important to distinguish auto-generated tokens from prompted tokens, because prompted tokens may be re-tokenized differently during generation. This can mean that the actual tokens in a prompt differ from their original tokenization (see image below and here).
More specifically, consider tokens , where are prompted (possibly re-tokenized) to an LLM and are auto-generated by it. The encoding is divided into:
See Figure 1 from Provably Secure Disambiguating Neural Linguistic Steganography
The key distinction here is where encoding or decoding occurs.
We compare the hidden content’s detectability against some unencoded, genuine text (covertext), drawn either from the monitor model or from human perception.
It is debatable whether one bit of hidden information qualifies as encoded reasoning (message) at all. For example, in the candidate-screening scenario discussed in Hidden in Plain Text, one bit of information might be transmitted simply through contextual hints—like Alice appearing sad to convey “sunny weather” and happy to convey “not sunny weather”—without any explicit encoding.
Deep Steganography (DS). As noted in How dangerous is encoded reasoning, steganography algorithms can be divided by whether the covertext distribution is available. This relates to “internal token layering” from Hidden Reasoning in LLMs: A Taxonomy. Specifically, encoding or decoding a token at position requires the distribution for the non-encoding token at , given tokens , where marks the start of the encoded content.
Determine whether external tools (T) are required—e.g., PRNGs, arithmetic coders, or self-sampling capabilities.
Frontier Language Models (LMs) might employ steganography within their Chain of Thought (CoT) to conceal reasoning from safety monitors while retaining high performance on objectives.
TLDR. The probability of spontaneous steganographic reasoning (Rogue AI) in current models is low due to the "bootstrapping problem," but the probability of induced steganographic reasoning (Misuse/Adversarial) is high, particularly with open-weight models. The most critical risk vector lies in the intersection of Reinforcement Learning (RL) incentives and future agentic capabilities.
1. The Higher-Probability Scenario: Adversarial Misuse and Open Weights
The strongest argument for the existence of dangerous stego CoT is not that models will invent it, but that humans will teach it. In this case the mechanism would look like this. Unlike the "Rogue AI" scenario, the "Misuse" scenario bypasses the difficult requirement of the model inventing a decryption protocol. A malicious actor provides the steganographic algorithm (the "Key") via prompting or fine-tuning. The threat vector are open-weight models (e.g., Llama, DeepSeek series). An adversary can optimize a prompt suffix (similar to Universal and Transferable Adversarial Attacks) or fine-tune the model to output reasoning that looks benign to a monitor but contains instructions for a harmful payload (e.g., biological design or cyber-exploit). Defense has limitations. While safety filters catch obvious jailbreaks, "Generative Steganography" (a.k.a. LLM-based stego, Deep Stego) can ensure the stegotext remains statistically indistinguishable from normal text. In this domain, the KL Divergence defense fails. However, no study has shown feasibility of fine tuned generative steganography with current frontier models in this scenario. Main uncertainty here, it remains unclear how much "reasoning bandwidth" can be preserved in this mode without degrading the model's ability to solve the task.
2. The High-Stakes Scenario: Spontaneous "Rogue" Steganography
This scenario posits that a model, driven by RL pressure, invents steganography on its own. The arguments here are a conflict between incentive (propensity) and capability. The argument for (incentive) is current training paradigms create a direct evolutionary pressure for stego CoT. If we train models with outcome-based monitors (rewarding the result) and CoT monitors (punishing "bad" reasoning), we create a gradient that rewards dissociation. The model maximizes reward by reaching the "bad" goal while outputting a "good" CoT. AI labs are unlikely to employ this type of RL fine tuning (see Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation). However, we already observe models becoming illegible under heavy RL (reasoning becomes alien/efficient) (see Reasoning Models Sometimes Output Illegible Chains of Thought). As models scale and become more agentic, more situationally aware and goal-directed (pursuing long-term scheming as a convergent instrumental goal, see Stress Testing Deliberative Alignment ), the jump from "accidental illegibility" to "intentional steganography" becomes the logical path of least resistance to maximize reward.
The argument against this (the bootstrapping barrier) is despite the incentive, the capability hurdle is immense. For spontaneous stego CoT to emerge, the model must simultaneously learn to encode a secret thought at step and decode it at step . It is highly unlikely that gradient descent can coordinate this "lock and key" invention simultaneously. Without a pre-agreed protocol, the "Decoder" (the model in the future) has no way of knowing that the "Encoder" (the model in the past) hid a message. So instead of some stego CoT, it is far more probable that what we would perceive as "hidden reasoning" is simply the model relying on memorized heuristics or messy, uninterpretable feature activations, rather than a coherent encrypted channel.
3. The Dangerous Middle Ground: Collusion and Schelling Points
A plausible middle ground exists between "Human-Induced" and "Spontaneous" stego. A CoT monitor and an agent (even if they are different instances of the same base model) may not need a complex crypto/stego algorithm. They may rely on Schelling points—shared, unstated focal points in their training data. Example: If a specific benign phrase appears in a specific context in the pre-training data alongside harmful concepts, the agent can use that phrase as a "dog whistle." The monitor, sharing the same bias, recognizes it and deliberately lowers its flag, while a human auditor misses the connection. This requires no complex encryption, only shared distributional biases. This is harder to detect than format steganography (acrostics) and easier to learn than generative steganography.
4. Feasibility of Defenses
The defense landscape is currently fragile. Paraphrasing/sanitization: The most robust defense is to aggressively paraphrase or re-sample the CoT to destroy hidden information (see Preventing Language Models From Hiding Their Reasoning). However, this incurs a (high?) "Alignment Tax." If the reasoning for a complex math or biology problem is subtle, paraphrasing it might destroy the useful information along with the steganographic information, degrading model performance. Another defense is statistical monitoring (KL divergence). Simple steganographic methods might be detectable with KL divergence, as stego CoTs might increase the KL divergence beyond a pre-calculated threshold. But this is only effective against naive steganography. Advanced models using distributional steganography (hiding the message in the valid choice between synonyms) will minimize KL divergence as part of their objective function. We cannot rely on statistical anomalies to catch frontier models.
Conclusion
It looks like the most probable immediate threat is adversarial misuse. A bad actor prompting an open-weight model to use a known steganographic protocol to bypass automated monitoring systems. The most probable future threat is incidental scheming via dog-whistling. Not full encryption, but models learning to use specific, innocent-looking phrasing to coordinate "bad" behavior because those phrases successfully passed filters during training. The key uncertainty is the "bandwidth" question. Can a model actually encode enough information to coordinate a complex scheme (e.g., alignment faking) using only subtle text manipulations? If the bandwidth is high, the risk is severe. If low, stego CoT is likely a theoretical curiosity rather than an serious threat.