TL;DR: Advances in AI, dispersed sensing, and data aggregation are likely to reduce the cost of factual validation to the point that significant sustained lying and secrecy become prohibitively expensive. This will cause a shift in strategy from maintaining informational asymmetry to exploiting attentional asymmetry. If these dynamics are not recognized and addressed early, we risk sleepwalking into a regime where attention-based manipulation largely replaces secrecy and falsehoods without having developed sufficient ethical or institutional safeguards.
The Erosion of Secrecy Under AI-Enhanced Surveillance
The combination of increasingly capable AI systems, widely dispersed and independent sensors, and large-scale data aggregation is steadily eroding the feasibility of sustained secrecy and lies. Individuals, as well as larger entities like states or corporations, implicitly generate enormous volumes of data that can be used to identify and validate factual information pertaining to them. As AI, sensors, and data aggregation improve, such data will become easier to analyze and cross-reference, to the point where it becomes increasingly difficult to conceal persistent activities or to maintain false narratives across multiple, correlated data streams.
To be clear, this is not to say secrecy and lies vanish completely (there will still be temporal or physical event horizons beyond which causal reconstruction and cross-validation become too difficult), but much like physical frontiers, these informational frontiers will be increasingly explored via improved sensing, inference, and simulation. For most practical purposes, coherent accounts of events can be reconstructed through cross-correlation even when individual data sources are noisy, incomplete, or potentially fraudulent, whereas attempts to conceal events or to inject false signals or noise tend to cause decorrelation. Thus, across independent data sources, genuine events tend to leave consistent signatures across many dimensions, giving validation a structural advantage over secrecy and fabrication, which must remain globally consistent (essentially an N-body problem). This is most succinctly captured in the Sir Walter Scott adage, "Oh what a tangled web we weave, when first we practice to deceive".
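The cross-correlation intuition above can be illustrated with a toy simulation (my own construction, not from the original argument): several independent, noisy "sensors" observe the same underlying event, while one fabricated stream is generated with no knowledge of that event. Genuine streams stay mutually correlated despite noise; the fabricated stream decorrelates from the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared underlying event, observed by four independent noisy sensors.
# The specific signal shape and noise level are arbitrary choices for
# illustration only.
t = np.linspace(0, 20, 500)
event = np.sin(t)
genuine = [event + rng.normal(scale=0.3, size=t.size) for _ in range(4)]

# A fabricated stream, invented without access to the real event.
fabricated = rng.normal(scale=1.0, size=t.size)

def mean_corr(stream, others):
    """Average Pearson correlation of one stream against the others."""
    return np.mean([np.corrcoef(stream, o)[0, 1] for o in others])

for i, g in enumerate(genuine):
    rest = genuine[:i] + genuine[i + 1:]
    print(f"genuine sensor {i}: mean r = {mean_corr(g, rest):.2f}")
print(f"fabricated stream: mean r = {mean_corr(fabricated, genuine):.2f}")
```

The genuine sensors show strong mutual correlation, while the fabricated stream's correlation with them hovers near zero, which is the "decorrelation" signature the paragraph describes.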
While AI systems can be used to generate increasingly sophisticated fabrications, analytic tools and cross-checking mechanisms should scale in parallel, causing the cost of sustaining a coherent falsehood to rise faster than the cost of verification. A similar asymmetry has been observed in experimental work on AI debate, where adversarial argumentation between competing models improves human and model judges' ability to identify correct answers as debater capability increases, reflecting the greater difficulty of defending a false position under scrutiny. We would therefore expect that, as AI capabilities improve, the structural asymmetry described above will increasingly favor verification over fabrication. Large actors may continue to attempt secrecy and falsehoods, but doing so will be a losing battle, not only because of the rising computational costs of maintaining falsehoods, but also because of the pervasive knock-on costs incurred as internal decision-making, planning, and coordination are forced to operate on distorted representations of reality, repeatedly colliding with external constraints and corrective physical feedback. Secrecy and lying will increasingly cease to be default tools of control and instead become exceptional, costly, and strategically constrained options.
The Cost of Lies
The Shift from Informational Manipulation to Attentional Manipulation
As secrecy and lying become increasingly expensive, the locus of strategic advantage will shift from manipulation of information itself to the manipulation of our limited attention. Even in a world where relevant facts are largely retrievable, attention, decision time, and action bandwidth will remain scarce. Strategic interactions will increasingly resemble a complete-information game under time pressure (like timed chess), rather than an incomplete-information game dominated by hidden knowledge. Ultimately, "victory" will depend on where attention is allocated, which lines of play are considered, and how decisions are made under severe time constraints. AI systems will undoubtedly assist with information retrieval, filtering, summarization, and forecasting, ultimately shaping how this scarcity of attention is managed. However, for the foreseeable future, these systems remain embedded within human institutions and value frameworks. Decisions about what matters and which outcomes are prioritized are still grounded in human attention and judgment (at least until AI systems begin to act on independently developed values, a possibility beyond the scope of this discussion).
The board is fully visible, but the clock is limited.
The strategic importance of attention is not new. Contemporary debates already focus on attention in the context of social media, news cycles, and product advertising, while historical warfare provides countless examples of attention manipulation through feints, diversions, propaganda, and psychological operations. What is new is the degree to which attention becomes a dominant lever of control as informational asymmetries collapse. When most actors can, in principle, access the same underlying facts, influence shifts from controlling information to controlling salience, framing, and timing. In this regime, deception no longer primarily consists of secrecy and lies, but rather takes the form of shaping which truths are foregrounded, how they are aggregated, and what generalizations (i.e., languages) are used to describe them. This shift does not eliminate conflict or deception, but rather transforms them into contests over attention, prioritization, and meaning under irreducible cognitive and temporal constraints.
Why This Regime May Arrive Faster Than Expected
Again, none of the individual components discussed here are new in isolation. The erosion of privacy has been explored extensively in both technical and popular literature. The manipulation of attention has long been studied in the contexts of media, advertising, and warfare. What is novel is the convergence of all of these into a coherent operational regime in which secrecy and sustained lying become structurally expensive, while attentional manipulation emerges as the dominant mode of control. This convergence is easy to underestimate because it does not require a breakthrough in any one domain. It arises from incremental improvements across many fronts (e.g., more capable AI systems, denser sensor networks, cheaper storage and compute) allowing for faster cross-correlation and validation.
This regime may arrive faster than expected because it does not require full informational completeness to become ethically and strategically relevant. Even partial erosion of secrecy can produce many of the same dynamics described here. Long before a world of near-total transparency is realized, actors may already find that attention is the binding constraint on action. AI may further accelerate this shift by lowering the cost of attention manipulation while simultaneously raising the cost of deception via falsehood. AI systems can generate, tailor, and distribute factually accurate but strategically framed information at scale. In contrast, sustaining coherent fabrications across increasingly rich and correlated data environments demands growing effort and risk. The result will be a quickly widening gap between what is easy to do with technology and what is expensive to maintain against scrutiny.
For these reasons, it is worth examining attentional asymmetry not as a distant, speculative future, but as an emerging condition of contemporary strategic control. If this framing is broadly correct, then questions of governance, ethics, institutional design, and defensive countermeasures should shift accordingly: from preventing misinformation alone to understanding how truth itself can be selectively amplified and contextualized in ways that meaningfully shape outcomes.
Conclusions
Contrary to common fears that AI will primarily increase falsehoods, its broader effect may be the opposite. When paired with increased sensing and large data infrastructures, AI may make lying and secrecy more fragile, pushing actors towards subtler forms of influence that rely on attention manipulation rather than information manipulation. What is novel is not that attention can be manipulated, but that it may soon become the dominant remaining lever of control (and that this shift may arrive faster than many anticipate). The most significant risks are likely to arise during the transition, as attention-based manipulation becomes widespread before institutions and norms adapt to recognize or constrain it. Understanding attentional asymmetry as a structural feature of this emerging regime may help shift attention (ironically enough) toward the forms of manipulation that matter most going forward.
Ultimately, these arguments rely on at least three dominant assumptions. First, that the structural asymmetry between facts and falsehoods continues to widen at scale. This instinctively feels right, but I have no proof that it will generally hold (it feels similar or adjacent to the P vs. NP problem). Second, that tools for validation and detection will continue to be developed and deployed alongside tools for fabrication (i.e., we don't destroy our ability to build validation tools amid the proliferation of excessive misinformation). Lastly, that sensing, data collection, and analytical capacity (i.e., AI) remain sufficiently dispersed and independent, rather than being fully centralized or monopolized by a few actors such that "facts" can be controlled by some Ministry of Truth. Should any of these assumptions prove wrong, lying and secrecy will remain viable for longer than argued here (possibly indefinitely).
Corollary: Transitional Ethics in the Collapse of Secrecy
Should attentional asymmetry replace informational asymmetry as secrecy erodes, a distinct set of ethical challenges will emerge during the transition. The collapse of secrecy does not merely alter how power is exercised going forward, but also retroactively exposes actions, decisions, and failures that were previously hidden, ambiguous, or effectively unprovable. This exposure will occur unevenly, at different speeds, and often without institutions or moral norms that are prepared to process what is revealed. As a result, societies may find themselves confronting large volumes of uncomfortable truths without shared frameworks for deciding which truths demand attention and action.
One of the most immediate risks in such environments is the strategic misuse of blame. As Sidney Dekker and other safety scholars have argued, blame is rarely about understanding failure but rather about preserving authority and diverting legal or financial liability. Under conditions of heightened visibility, blame can become a default response to newly exposed information, allowing individuals or institutions to signal moral clarity while avoiding deeper structural accountability. These dynamics can be summarized as the dangers of declining friction. While reduced friction enables rapid correction and broader participation, it also increases susceptibility to manipulation, pile-ons, and competitive moral signaling, especially at the individual scale, where positioning against others can become a means of gaining attention, wealth, or political control. How societies choose to process newly visible truths may ultimately determine whether increased transparency yields resilience or fragmentation. Attending to these questions early may help prevent the tools of attentional influence from becoming instruments of coercion rather than coordination.
When truths become visible, blame becomes tempting.