Introduction: A Social Experiment with No Right to Withdraw
By 2026, the tide of autonomous driving has become unstoppable. Entrepreneurs proclaim the arrival of "large-scale trials," declaring machines to be 200 times safer than humans, and move to eliminate steering wheels and brakes. Capital markets are boiling over, and the public, amid dazzling propaganda, gradually accepts a presupposition: that technological progress inevitably leads to a safer future.
But the safety issues of AI have never been truly resolved. We are forcing every ordinary person to participate in a gamble they never consented to.
The stakes are their own lives and physical integrity, and the house is a statistical model with fundamental flaws.
Chapter 1: The Innate Curse of Statistical AI
At the core of all current mainstream autonomous driving systems are statistical models based on empirical risk minimization. These models have a fatal mathematical prerequisite: Training data and the real world must be independent and identically distributed.
However, the very nature of traffic scenarios is open, infinite, and even adversarial.
The Inevitability of Out-of-Distribution Samples: No matter how many billions of miles of data are collected, the real world will always contain scenarios never covered by the training set – a drunk pedestrian climbing over a guardrail, a mattress falling off a truck, road markings blurred by a sudden blizzard. When a model encounters these "out-of-distribution" samples, its output is unpredictable, and the model itself is unaware of its own "ignorance."
The Curse of the Long Tail: Traffic scenarios follow a long-tail distribution. High-frequency scenarios at the head (going straight, stopping at red lights) account for 99% of the data volume, but it's the "Corner Cases" in the tail – those with extremely low probability but infinite variety – that are the true killers. Statistical models can only fit the patterns they have seen; for the "unknown unknowns," they have no answer.
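The arithmetic of the tail can be made concrete with a toy simulation. Everything here is an assumption for illustration (the Zipf exponent, the sample sizes, the very idea of scenarios as integer IDs); the point is only the qualitative behavior: even after a million logged "scenarios," fresh data keeps producing ones the training set never contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of the long tail: each driving "scenario" is an integer ID drawn
# from a heavy-tailed Zipf distribution. A few head IDs dominate the counts,
# but the set of distinct rare IDs is effectively unbounded.
train = rng.zipf(1.5, size=1_000_000)   # one million logged "scenarios"
seen = set(train.tolist())

test = rng.zipf(1.5, size=100_000)      # fresh post-deployment scenarios
novel = sum(1 for s in test.tolist() if s not in seen)
novel_rate = novel / len(test)

# The novel-scenario rate does not fall to zero: more data shrinks it, but
# the tail keeps producing IDs that no amount of past logging has covered.
print(f"never-before-seen scenarios: {novel} ({novel_rate:.2%})")
```

Rerunning with ten times the training data lowers the rate but never eliminates it; that is the structural point, independent of the made-up parameters.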
This is not an engineering problem; it's a mathematical impossibility. No matter how much data or how much computing power, as long as the paradigm of statistical learning is not transcended, the long-tail problem will inevitably continue to emerge.
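A minimal sketch of the "unaware of its own ignorance" failure, using a deliberately simple nearest-centroid classifier (the two-cluster setup and the softmax-over-distances "confidence" are illustrative assumptions, not any production architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two in-distribution scenario clusters in a toy 2-D feature space.
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(500, 2))
centroids = np.array([class_a.mean(axis=0), class_b.mean(axis=0)])

def predict_with_confidence(x):
    """Nearest-centroid classifier with a softmax 'confidence' over distances."""
    dists = np.linalg.norm(centroids - x, axis=1)
    weights = np.exp(-dists)
    probs = weights / weights.sum()
    return int(probs.argmax()), float(probs.max())

# In-distribution query: a sensible, confident answer.
label, conf = predict_with_confidence(np.array([0.1, -0.2]))

# Out-of-distribution query: far from anything in the training data, yet the
# model still emits a definite label at high confidence. Nothing in this
# pipeline can say "I have never seen this."
ood_label, ood_conf = predict_with_confidence(np.array([40.0, 40.0]))
print(f"in-dist: class {label} @ {conf:.2f} | OOD: class {ood_label} @ {ood_conf:.2f}")
```

The classifier is trivial, but the failure mode is generic: a model trained to discriminate between seen categories has no built-in notion of "none of the above," so distance from the training distribution does not automatically translate into low confidence.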
Chapter 2: The Subverted Concept of Safety
Entrepreneurs are keen on using one number to appease the public: "We are 200 times safer than human drivers."
This statement hides three layers of logical traps:
First, the Fraud of the Comparison Baseline. "Human driver" is an average – it includes drunk drivers, fatigued drivers, novices, and road ragers. Comparing machine performance to the worst moments of humanity is a carefully designed fallacy of the base rate. The real comparison should not be with that drunk driver, but with a sober, attentive, and cautious human.
Second, the Game of Denominators. Machines currently operate primarily under optimal road conditions – sunny days, highways, light traffic. Humans cannot choose their driving conditions. Comparing "safety multiples" derived from different denominators lacks rigorous statistical comparability.
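The first two traps are easy to make concrete with back-of-the-envelope arithmetic. Every rate below is invented purely for illustration; none is a real measurement:

```python
# --- Trap 1: the comparison baseline is a mixture ------------------------
# Crash rates in crashes per million miles (all numbers made up).
impaired_share = 0.05            # fraction of human miles driven impaired/fatigued
impaired_rate  = 30.0
careful_rate   = 1.0             # a sober, attentive driver
average_human  = impaired_share * impaired_rate + (1 - impaired_share) * careful_rate
# 0.05 * 30 + 0.95 * 1 = 2.45: the "average human" looks 2.45x worse than
# the careful driver, purely because the average absorbs the worst drivers.

# --- Trap 2: different denominators --------------------------------------
machine_easy_rate = 0.5          # fleet rate, measured on easy miles only
human_easy_rate   = 0.8          # human rate on those same easy miles
human_all_rate    = average_human  # human rate over every condition they face

headline = human_all_rate / machine_easy_rate   # easy machine miles vs all human miles
fair     = human_easy_rate / machine_easy_rate  # same miles in both denominators

print(f"headline multiple: {headline:.1f}x, like-for-like: {fair:.1f}x")
```

With these made-up numbers the headline multiple is roughly three times the like-for-like one; the gap comes entirely from what goes into the baseline and the denominator, not from any change in the machine's actual performance.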
Third, the Incommensurability of Accident Types. Human accidents often stem from negligence or violations; machine accidents often stem from a fundamental misunderstanding of the environment – mistaking a white truck for the sky, or an obstacle blocking the road for a passable area. These two kinds of error cannot be converted into a single "safety factor." The mistakes machines make are mistakes no human would make, and therefore mistakes no human can anticipate.
When entrepreneurs speak of "acceptable risk," they refer to a statistical "reduction in total societal casualties." But for the individual selected by that long-tail event, the risk is 100%. Statistics cannot protect the specific person.
Chapter 3: Removing the Steering Wheel – Severing the Last Anchor
The most radical declaration is undoubtedly "removing the steering wheel and brakes." This is not just an engineering design choice; it's a philosophical declaration: Humans are no longer needed as the last line of defense.
From a human factors engineering perspective, this is extremely dangerous.
The Fatal Trap of the Handover Paradox: As long as humans remain in the loop, the prolonged reliability of the machine will breed over-trust, leading to distraction and complacency. And when that "inevitable" long-tail scenario arrives, it is physiologically impossible for a human to switch from a relaxed state to precise intervention within seconds. This is the "handover paradox" – the system's very design creates accidents.
Loss of the Psychological Anchor: The steering wheel and brakes are not just control tools; they are the final psychological anchor for humans in moments of extreme panic. When the system makes an incomprehensible decision, that steering wheel one can grasp is the passenger's last shred of safety. Removing it is equivalent to declaring: "You have no possibility of intervention, even as you watch disaster unfold before your eyes."
This is a complete transfer of power – from the individual to the algorithm, from humanity to the statistical model. And the prerequisite for accepting this transfer is the absolute reliability of the model. But we know statistical models can never be absolutely reliable.
Chapter 4: The Individual in the Gamble – The Forgotten Perspective
Imagine a driver who, for years, has relied on his own diligence and caution to ensure the safety of himself and his family. He knows the dangers of the road well, and precisely because of this, he remains constantly vigilant.
Now, someone tells him: Entrust your life to a machine; it's safer than the average human.
What does this mean?
It means he gives up the belief that "I can strive to be safe," and instead accepts a "black box" he cannot understand, verify, or influence as the arbiter of his fate. His safety no longer depends on his own diligence, but on how much data the company collected, on a single line of code written by an algorithm engineer on some particular day, and on whether he is unlucky enough to be the "long-tail sample" the training data never covered.
From a game theory perspective, this is a game with severely asymmetric risk:
The Decision-makers (entrepreneurs) gain markets, valuations, and historical status.
The Beneficiaries (passengers) gain convenience.
But the Risk Bearers are every innocent passenger and pedestrian.
More importantly, those pedestrians never agreed to participate in this gamble. They walk on the sidewalk, obey traffic rules, yet are drawn into a "social experiment" driven by commercial ambitions. They have no right to opt out and no right to be informed; they can only passively bear the risk.
Chapter 5: The Power to Prophesy, Not to Prevent
The torrent of capital has already surged forth, and the romantic narrative of technology has captured hearts. Entrepreneurs will not stop due to warnings from academia – in their eyes, we are the "conservatives," the "pessimists hindering progress."
It is not that prophecy can prevent disaster; it is that disaster needs to be named, needs to be attributed, needs to be remembered.
When the long-tail problem finally proves itself at the cost of flesh and blood, the public will ask: Why didn't anyone tell us beforehand? Why weren't we aware of the risks?
Then, these words will be the answer. They stand as proof: Some knew, some spoke out long ago, but their voices were drowned out by the noise of capital.
The value of prophecy is not to alter the course of history, but to leave a coordinate for posterity after history has rolled over them, letting them know: Here, someone once tried to awaken the sleepers.
Conclusion: Prudence is Not Conservatism
We are not opposed to technological progress. We oppose using the name of progress to disguise risks; using statistics to comfort individuals; using the future to sacrifice the present.
True technological progress is not achieved by "gambling that the long tail won't appear," but by confronting the long tail, understanding the long tail, and establishing truly reliable mechanisms to deal with the long tail.
Until then, retain the steering wheel, retain humanity's final right to choose, retain the dignity that comes from "I can strive to ensure my own safety" – this is not conservatism, it is the most fundamental respect for every life.
The silent majority in this gamble, the individuals yet to be selected by the long tail – their voices have not yet been heard. May this text serve as their future echo.