The Erosion of Secrecy Under AI-Enhanced Surveillance
I’ve seen plenty of discussion about how AI will make it increasingly easy to deceive people with cheaply generated artificial content (i.e., “deepfakes”), and I concede that this may be true if no practical defensive countermeasures can be taken. However, AI is a sword that cuts both ways here: it can be used to detect fake content just as readily as it can generate it. In fact, I believe that as AI, dispersed sensing networks (e.g., cell phones, Ring doorbells), and data aggregation scale, sustained deception will become more difficult, not less.
Entities, whether individuals, corporations, or state actors, generate a substantial amount of data simply by existing and acting, and that data can be sensed and used to validate factual information about them. Outside of truly random or chaotic processes, this information is generally correlated and coherent. Exploiting these correlations as a means of validation becomes increasingly practical with greater powers of sensing, data storage, and data processing (i.e., AI + sensors + data centers). Not only does this give us a greater ability to validate data "as it happens", but we can also detect more complex physical and temporal correlations over longer spatial and time horizons, making our powers of inference all the more formidable. This ability to exploit the coherence of truth-data drives up the cost of fabrications, since they must exhibit a comparable level of coherence across the same sensor network over the entire timeframe during which data is retained and processed. In other words, as the number of independent constraints on reality increases, the space of plausible falsehoods shrinks. As a result, fabrications should become increasingly difficult as sensing, storage, and AI capabilities scale. This experimental work on AI debate gives, I think, some evidence of the disadvantage fabricators face as generative and evaluative AI tools improve in parallel.
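The "independent constraints shrink the space of plausible falsehoods" claim can be made concrete with a toy model of my own (an illustration, not anything from a cited work): suppose a fabrication must appear consistent with each of k independent cross-checks, and passes any single check with probability p < 1. Its survival probability then decays exponentially in k.

```python
def fabrication_survival_probability(p_per_check: float, k_checks: int) -> float:
    """Toy model: probability a fabrication passes all k independent
    consistency checks, each passed with probability p_per_check.
    Independence is the key (and strong) assumption here."""
    return p_per_check ** k_checks

# Even a fabrication that looks 90% plausible per check fails quickly
# as the sensor network adds constraints:
for k in (1, 5, 10, 20):
    print(k, round(fabrication_survival_probability(0.9, k), 4))
```

The numbers are illustrative only; the point is the shape of the curve: each additional independent sensor or correlation check multiplies the fabricator's burden, while the validator only needs one failed check.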
Now, this is not to say that fabrication will vanish entirely. There will remain informational horizons where data is sparse (under-instrumented -> under-determined) or events are weakly correlated (i.e., random or unstable processes), wherein fabrication may remain cost-effective. But just like physical frontiers, these areas will be increasingly mapped via improved sensing, inference, and simulation, driving up the cost of injecting coherent fabrications. This will likely relegate significant lying to larger actors that can afford this cost and/or possess sufficient centralized control of the truth-sensing apparatus to sustain it. Even then, such actors will still incur additional operational costs from coordinating and making decisions based on distorted realities that constantly collide with physical constraints and corrective feedback.
The Cost of Lies
To preempt some counterarguments, I'll acknowledge that the argument above rests on two core assumptions. First, that in the vast majority of areas where the distinction between fabrication and truth matters, the information pertaining to the truth is sufficiently correlated and sensible (or easy to simulate) that the cost of validation is significantly less than the cost of generating a coherent fabrication (incoherent fabrications can be ignored because they are quickly filtered on detection of inconsistency). The cost difference between validation and fabrication will ultimately be problem-dependent, and the rate at which this disparity grows with increasing AI capabilities will not be uniform across domains (i.e., some informational areas will remain susceptible to fabrication for significantly longer). Second, and the much more critical assumption, that AI + sensing + data centers will remain sufficiently decentralized (in a control sense) to ensure independent validation of reality. If a sufficient combination of the truth-sensing and sense-making apparatuses is monopolized, then information can be manipulated nearly at will, with only the costs of reality-checks remaining as significant feedback. Should informational control be centralized, fabrication may persist almost indefinitely, excepting competition from other large actors who prioritize truth (and therefore incur lower reality-check costs), or reality-checks so severe that they destabilize centralized control.
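One way to see why validation should generically be cheaper than coherent fabrication (again a sketch of my own, under simplified assumptions): if a fabrication must be mutually consistent across k correlated data channels, the fabricator must satisfy on the order of k(k-1)/2 pairwise consistency constraints, whereas a validator merely needs to find a single violated one.

```python
def pairwise_constraints(k_channels: int) -> int:
    """Number of pairwise consistency constraints among k correlated
    data channels: k choose 2. A coherent fabrication must satisfy all
    of them; a validator wins by finding any single violation."""
    return k_channels * (k_channels - 1) // 2

# The fabricator's burden grows quadratically with channel count:
for k in (2, 5, 10, 50):
    print(k, pairwise_constraints(k))
```

Real sensor correlations are of course not all pairwise or independent, so this overstates the gap in some domains and understates it in others; it is meant only to show why the asymmetry favors the validator as instrumentation density grows.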
A Shift from Information to Attention Asymmetries
I think that as the cost of maintaining lies (and ultimately secrecy in general) increases, the primary means of control will shift away from manipulating what is considered "true" (i.e., information control) toward manipulating what is deemed "important" (i.e., attention control). Things will more closely resemble a time-limited, complete-information game in which one relies more on misdirection than outright fraud. AI will of course be used to assess possible future states and support long-term decision-making, but the time allotted for processing such projections will ultimately be limited. "Victory" under this paradigm will be achieved by controlling what information is foregrounded, how it is aggregated, and what language (i.e., which generalizations) is used to describe it. Conflict and deception will not be eliminated but transformed into contests over attention and prioritization.
Complete Information But Limited Time
This is not to say that attention manipulation is a novel concept. It has been well explored in both technical and popular literature and practiced throughout human history: warfare provides countless examples of diversion, propaganda, and other psy-ops, and more contemporary, mundane examples include social media, news cycles, and product advertising. What will be novel is the degree to which attention manipulation becomes the dominant lever of control as information asymmetries shrink. The convergence to this regime is easy to underestimate because it likely does not require a breakthrough in any one domain; rather, it may emerge from incremental improvements across multiple fronts (i.e., improved AI capabilities, denser sensor networks, cheaper storage and compute). Full informational completeness is not required for attention manipulation to become strategically and ethically dominant.
Conclusions
To summarize: contrary to the common fear that AI will primarily increase fraud in the long term, I think the broader effect may be the opposite. AI, paired with the proliferation of independent sensor networks and data centers for storage and processing, will make lying (and secrecy more generally) more fragile, ultimately pushing control tactics toward subtler methods that rely on attention manipulation rather than information manipulation. And because this shift will likely be an emergent phenomenon resulting from incremental improvements in AI, sensors, and data centers, I suspect it will achieve dominance before our institutions and norms have had time to adapt.
As such, I believe attentional asymmetries and control (especially AI-driven attention control) are worth examining not as a distant, speculative future but as an immediately relevant feature of contemporary strategic control. This is supported to some extent here, where it is shown that even with a strictly factual AI and an informed user, excessive sycophancy can still significantly increase user delusion. Accordingly, questions of governance, ethics, institutional design, and defensive countermeasures should shift from preventing "misinformation" to understanding how truths can be selectively amplified and contextualized so as to shape outcomes.