We often assume that if every individual part of a system is defensible, the system as a whole must be sound. This logic fails spectacularly when confronted with emergence, where complex interactions create outcomes far removed from the intent of any single part. We don't realize the danger until it's already here.
While the mechanism isn't a simple linear 'slippery slope,' the dynamic is just as treacherous. It's a tyranny of small decisions: each interaction seems locally rational and harmless, yet the cumulative result is catastrophic, and we discount the risk as it accumulates. At what point does the accumulation of "okay" inputs generate a "screwed up" reality? How do we decide where to place a Schelling Fence (a bright line we agree not to cross) when the landscape itself is shifting? This brings us to a modern, widespread example of this trap: the emergent effects of social media algorithms on our perception of reality.
Answering this is hard because many of the most influential forces today are not centralized manipulators but non-malevolent emergent systems. My canonical example is Instagram (specifically, the Reels/short-form video format).
This is NOT a social media hate post, but rather an attempt to better understand these negative emergent characteristics and to spark discussion about possible solutions.
Let's break down the mechanism.
PS1: This analysis presumes good faith from all actors to demonstrate that the issue persists even without malicious intent. The introduction of malevolent actors would only exacerbate the situation.
PS2: This analysis holds not only for short-form content but for most social media; short-form content simply makes the issue much worse.
At its core, Instagram is a platform for free expression. It allows millions of creators to share their work, opinions, and daily lives. This is, for the most part, a net positive. It enables connection, creativity, and the free exchange of ideas. Within this framework, even a "hot take" on a political figure or an uneducated opinion is just a form of speech. The platform itself doesn't judge the content; it merely provides the stage.
In an attention economy, the most shocking, emotionally resonant, and polarizing content wins.
Nuance takes time to explain; shock is instantaneous. The algorithm, blind to the truth value of the content, sees the spike in engagement and promotes the shocking claim. This creates a massive selection bias: the most extreme or shocking anecdotes are preferentially amplified because they garner the most engagement.
Consider the content that often surfaces: random road-rage videos, news anchors delivering hateful commentary (often without evidence), or an influencer giving a confident, uneducated "hot take" on a complex geopolitical issue.
Most such posts are what I call "observatory/awareness posts." The idea is to call attention to something bad in the world, either to muster social muscle for online activism or simply to spread awareness of the issue. Both of these are genuinely good causes and real advantages of social media!
The fundamental rule governing what you see on social media is simple: The algorithm optimizes for attention.
It observes what you watch, what you linger on, and what you engage with, and then it gives you more of the same. This isn't inherently malicious; it's a logical business metric. If you spend more time on the app, you see more ads. The algorithm equates your attention with your preference, which is a reasonable, if flawed, assumption.
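To make this concrete, here is a minimal, deliberately naive sketch of engagement-driven ranking. Everything in it (the signals, the weights, the numbers, the captions) is an invented illustration rather than Instagram's actual system; the point is only that a scorer built purely from attention signals will rank the outrage clip above the calm explainer without ever consulting truth.

```python
# A toy sketch of engagement-driven ranking. Field names, weights, and numbers
# are hypothetical; this is not Instagram's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    caption: str
    watch_time_s: float  # average seconds a viewer spends on the clip
    rewatches: float     # average rewatches per viewer
    shares: float        # shares per 1,000 impressions
    comments: float      # comments per 1,000 impressions

def engagement_score(post: Post) -> float:
    # Attention is the only signal; truth value never enters the equation.
    return (1.0 * post.watch_time_s
            + 5.0 * post.rewatches
            + 3.0 * post.shares
            + 2.0 * post.comments)

feed = [
    Post("calm explainer on local transit policy", 8.0, 0.1, 0.5, 0.3),
    Post("auto driver slaps passenger (outrage clip)", 19.0, 0.9, 6.0, 4.0),
]

# The outrage clip tops the ranking purely because it holds attention longer.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.1f}  {post.caption}")
```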
We are familiar with the arguments about how short-form video hijacks dopamine circuits and reduces attention spans. But a deeper, more insidious mechanism concerns me: the systemic way these platforms distort our Bayesian priors.
How do we form beliefs? A useful model is predictive coding (related to the Free Energy Principle). In simple terms, our brain is constantly making predictions about the world and then updating its internal model based on new sensory information. When incoming information conflicts with those predictions, we can do one of two things: act on the world to resolve the mismatch, or update our internal model (our priors) to accommodate the new data.
When scrolling through a feed, we are bombarded with a stream of novel "observatory posts": these are all data points. Since we can't act on this information directly, we default to the second option: we update our priors. Each "innocent observation" subtly shifts our model of the world.
In the context of Instagram Reels, the velocity and volume of information make critical analysis nearly impossible. The brain is wired to minimize surprise (prediction error) in the most efficient way possible. Engaging critical thought (System 2) is metabolically expensive and slow. In a high-velocity, low-friction environment like Reels, the least expensive way to resolve prediction error is to passively accept the observation and update the prior (System 1).
If you see a viral video of "a Kannada auto driver slapping a girl," it is presented as an observation and perhaps a call for activism. Because critical evaluation is too slow, the brain defaults to the efficient path: slightly updating its model of the world to accommodate this new "fact."
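As a minimal sketch of what that efficient path does to a prior (all numbers invented), treat the belief "what fraction of auto rides involve a violent incident" as a Beta distribution and feed it only what the algorithm selected:

```python
# Toy beta-binomial sketch of the passive (System 1) update. Numbers are invented.

# Prior belief: roughly 1 ride in 1,000 involves a violent incident.
alpha, beta = 1.0, 999.0
print(f"prior:     {alpha / (alpha + beta):.2%}")

# Ten scrolling sessions, each surfacing one more conflict clip. Passive
# acceptance counts every clip as a fresh positive observation, while the
# uneventful rides that were never filmed contribute nothing.
for _ in range(10):
    alpha += 1

print(f"posterior: {alpha / (alpha + beta):.2%}")
# The believed rate rises roughly elevenfold, driven entirely by what was selected.
```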
This is where the system becomes lethal. The interaction between the attention-optimizing algorithm and the cognitive update mechanism creates a devastating feedback loop.
Let’s trace the cascade using the previous example: you watch the clip to the end, perhaps read the comments, perhaps share it in outrage. The algorithm registers that engagement and serves you more clips like it. Each new clip is another passively accepted "observation," and each observation nudges your prior about how common such incidents actually are. Within a few weeks, your model of an entire city is built largely from a handful of outrage clips.
Not only have you helped manufacture a polarization that perhaps didn't exist before (you might now be hostile towards locals in Bangalore!), but you will also exhibit the Baader-Meinhof phenomenon, or frequency illusion (you start noticing something more often once you've become aware of it), which in turn feeds your confirmation bias.
Boy that's a vicious cycle with an innocent, well-meaning start!
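Here is a minimal sketch of that cycle, extending the earlier beta-prior toy into a closed loop (every number is still an invented assumption): rising alarm drives engagement, engagement shifts the feed's mix, and the shifted mix accelerates the belief update.

```python
# Toy closed loop between the recommender and the viewer's prior. All numbers invented.
alpha, beta = 1.0, 999.0   # Beta prior: conflict feels like a 1-in-1,000 event
conflict_share = 0.05      # fraction of this week's feed that is conflict clips
REELS_PER_WEEK = 200

for week in range(1, 7):
    # Passive viewing: every conflict clip served is absorbed as one more "case";
    # uneventful rides are never filmed, so they contribute nothing.
    alpha += REELS_PER_WEEK * conflict_share
    belief = alpha / (alpha + beta)

    # The recommender's side of the loop: lingering on conflict clips teaches it
    # to over-serve them (the 10x factor and the 60% cap are both made up).
    conflict_share = min(0.6, 10 * belief)

    print(f"week {week}: believed conflict rate {belief:5.1%}, "
          f"conflict share of next week's feed {conflict_share:4.0%}")
```

In this toy run, the believed rate of a genuinely rare event climbs into double digits within a few simulated weeks, and the feed settles into serving mostly outrage.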
You are falling prey to a specific form of availability bias, sometimes related to the Chinese Robber Fallacy.
The Chinese Robber Fallacy (or more generally, base rate neglect) occurs when one focuses on the absolute number of occurrences rather than the percentage or base rate. You have seen six videos of conflict (absolute number), which feels significant. But you have no visibility into the millions of peaceful interactions that occur daily (the base rate). The denominator is just hidden from you, making the numerator seem overwhelmingly important.
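A quick sense of scale, with the denominator made visible again (the ride counts are hypothetical assumptions; only the orders of magnitude matter):

```python
# Back-of-the-envelope scale check. The ride counts are assumptions pulled out
# of thin air, which is the point: the feed never shows you the denominator.
conflict_clips_seen = 6            # what your feed surfaced this week
assumed_daily_rides = 2_000_000    # hypothetical auto rides per day in a large city
rides_per_week = assumed_daily_rides * 7

implied_rate = conflict_clips_seen / rides_per_week
print(f"{implied_rate:.6%} of rides")  # ~0.00004%, even if all six clips were real and recent
```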
Individually, every part of this system seems defensible: the creator is raising awareness of a real problem, the platform is optimizing a sensible business metric, and the viewer is simply updating on the evidence placed in front of them.
Yet the emergent outcome is profoundly dangerous. The resulting crisis demands intervention, but placing a Schelling Fence requires navigating fundamental constraints where local incentives conflict with global epistemic health.
We face several intractable trade-offs: any fence around content risks curbing the free expression that makes the platform valuable in the first place, any shift away from attention undermines the ad-funded business model, and the friction that would slow passive updating is precisely what a frictionless product is designed to eliminate.
These constraints ensure that individual actors are caught in a classic multipolar trap, or Moloch. This is the Molochian dynamic: if any single platform unilaterally shifts its optimization target away from raw attention, it risks being rapidly outcompeted by those that do not. Escaping this race to the bottom requires solving the underlying coordination problem.
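Structurally, this is a prisoner's dilemma between platforms. A toy payoff table (numbers entirely invented) makes the "unilateral restraint loses" logic explicit:

```python
# Toy payoff table for two competing platforms; higher numbers = more market share.
# "restrain" = optimize for epistemic health, "maximize" = optimize for raw attention.
payoffs = {                              # (platform A's payoff, platform B's payoff)
    ("restrain", "restrain"): (3, 3),    # healthier ecosystem, both remain viable
    ("restrain", "maximize"): (0, 5),    # A bleeds users and ad revenue to B
    ("maximize", "restrain"): (5, 0),
    ("maximize", "maximize"): (1, 1),    # the race to the bottom we actually get
}

for b_choice in ("restrain", "maximize"):
    best_reply = max(("restrain", "maximize"),
                     key=lambda a_choice: payoffs[(a_choice, b_choice)][0])
    print(f"if B plays {b_choice!r}, A's best reply is {best_reply!r}")
# Both lines print 'maximize': defection dominates, even though (3, 3) beats (1, 1).
```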
The "innocence" of the parts becomes irrelevant because the system itself incentivizes and demands this behavior. If an emergent system, despite the innocence of its components, predictably leads to the degradation of our ability to perceive reality accurately, where do we draw the line? How do we design systems that introduce necessary friction and epistemic transparency while navigating these constraints? What are your ideas?
(I realise the irony that this post is perhaps yet another humble but hopefully slightly educated "in my opinion" post -_-)
See you in the comments!