In fact, breaching enemy drone defense zones is not impossible:
If military strength is severely imbalanced, one side can suppress enemy drone operators through airstrikes and artillery bombardment;
Armored vehicles equipped with directed-energy weapons, anti-drone weapon stations, and active defense systems (such as China's Type 100 tank) can theoretically withstand swarm attacks and penetrate defenses;
Disrupting enemy drone supply chains is a sound strategy. Ukraine's ability to assemble drones with civilian 3D printers rests on its vast strategic depth and on components imported from China. Those components require complex, large-scale manufacturing facilities, and both the facilities and their logistics chains are inherently vulnerable.
Future ground warfare will not be entirely dominated by drones. Drone-guided artillery shells, rockets, and aerial bombs will strike hardened targets beyond drones' capabilities; mechanized teams of infantry and robot dogs, operating as sabotage-and-reconnaissance groups (DRGs), will infiltrate complex terrain to establish incremental area control, with armored units providing direct fire support. Across broader fronts, tactical missiles and long-range rockets will hunt self-propelled artillery and destroy supply hubs, while medium-range missiles will neutralize enemy airfields, warehouses, and factories.
1&3: Even if Taiwan maintains its non-nuclear status, Beijing's intent to wage a unification war increasingly outweighs its concerns about economic sanctions and casualties. Should Taipei attempt to acquire nuclear weapons again, it would trigger tensions far exceeding those of the North Korean nuclear crisis or the THAAD crisis, making war highly probable.
Acquiring nuclear weapons is fundamentally different from gaining the capability to deploy them. A nuclear-terror advantage built on a handful of primitive fission devices would not secure victory for Taiwan, just as Iraq's chemical-weapons advantage did not win it the Gulf War. If these devices are not destroyed, captured, or neutralized early in the conflict, their sole utility would be scorched-earth tactics, and Taipei's leadership is unlikely to descend into that madness.
The United States is unwilling to engage in nuclear warfare. Therefore, should Taipei's leadership exhibit overtly irrational behavior, Washington would likely refuse assistance, leaving Taiwan incapable of prevailing alone.
Taiwan cannot independently manufacture all the equipment required for TSMC's chip factories; its lithography machines and other equipment rely on imports. Should Taiwan attempt to import centrifuges after its nuclear program is exposed, it might have to resort to submarine transport.
2: The Three Gorges Dam is a gravity dam, so most conventional missiles cannot destroy it at an acceptable cost; demolishing it would require shattering hundreds of millions of tons of reinforced concrete.
China possesses robust air defense and anti-missile systems, while Taiwan's missile technology remains at the PLA's 2000s level. Even if the Taipei regime planned to strike mainland China before its launch platforms were destroyed, the civilian targets it could effectively attack would primarily be urban clusters along the Fujian coast.
However, villagers who readily accept the burning of their village exhibit, in certain scenarios, lower fitness and shorter life expectancy than those who, having endured past disasters, resist invasion.
https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction
If the arguments in this article are correct, then nuclear war, unless it leads to the militarization of AGI, is unlikely to pose an extinction risk.
Regardless of whether China acquires the H200, and perhaps regardless of how well it understands AI's importance, it will attempt to retake Taiwan: public sentiment, ideology, and the fact that reclaiming Taiwan would permanently establish China's semiconductor advantage over the US all point that way. China's leadership has long recognized the critical importance of securing advanced chip supplies.
The freedoms Deng Xiaoping granted can in fact be explained by his personal interests: selling state assets cheaply to officials helped consolidate his support within the Party, while marketization stimulated economic growth and stabilized society. Yet at the same time, he effectively stripped away most political freedoms.
Mao Zedong's late-stage governance, however, defies such explanation: even when his power was unassailable, he encouraged radical leftist workers and students (the “rebels”) to confront pro-bureaucratic forces (the “conservatives”) and attempted to establish direct democratic institutions such as the Shanghai Commune. Although he ordered crackdowns on communist dissidents such as the “May 16th” group, his behavior likely stemmed more from political ideals.
At least in the 21st century, new internal combustion engine technologies exhibit high reproducibility and low verification costs. There are no large numbers of internal combustion engine specialists employing various means to generate false or selectively filtered test reports for personal gain. Consequently, no engine configuration used in automotive development has been found fundamentally impossible.
Automobiles are not regulated by a group of accident experts with questionable ties to automotive giants and overly strict automotive ethicists. Consequently, a vehicle cannot be banned for violating some aspect of so-called automotive ethics. New cars also do not require decades of randomized controlled trials involving thousands of participants to gain market approval—costs that smaller automotive companies could never afford.
Driving a car is not regarded as a qualification requiring years of costly university education, but as a right enjoyed by anyone who completes basic training. The thousands who die annually in car accidents are not perceived as a catastrophic failure of the automobile that compels society to demand its elimination.
Society does not view automobiles as solely for transporting patients. Not every attempt to use cars for faster mobility faces resistance, suspicion from licensed drivers well-versed in automotive ethics, or sparks conspiracy-tinged debates about social equity and the value of life. On the contrary, people have the right to drive to most places they wish to go—provided roads exist and traffic restrictions do not apply.
Of course, there are also virtually no automotive conspiracy theories claiming that only divinely granted legs are suitable for transportation, advocating water as a fuel substitute, or declaring that adding trace amounts of explosives to fuel tanks can achieve any desired speed.
If a word processor falling into the hands of terrorists could easily generate a memetic virus capable of inducing schizophrenia in hundreds of millions of people, then I believe such concerns are warranted.
AI-assisted communities are likely to attempt defining their values through artificial intelligence and may willingly allow AI to reinforce those values. Since they possess autonomous communities independent of one another, there is no necessity for different communities to establish unified values.
Thus another question arises: Do these localized artificial intelligences possess the authority, based on their own values and provided certain conditions are met, to harm the interests of other AI entities and human communities not under their jurisdiction? If so, where are the boundaries?
Consider this hypothetical: a community whose members advocate maximizing suffering within their domain, establishing indescribably brutal assembly-line slaughter and execution systems. Yet, due to the persuasive power of this community's bloodthirsty AI, all humans within its control remain committed to these values. In such a scenario, would other AIs have the right to intervene according to their own values, eliminate the aforementioned AI, and take over the community? If not, do they have the right to cross internet borders to persuade this bloodthirsty community to change its views, even if that community does not open its network? If not, can they embargo critical heavy elements needed by the bloodthirsty AI and block sunlight required for its solar panels?
But conversely, where do the boundaries of such power lie? Could these bloodthirsty AIs also possess the right to interfere, by the aforementioned methods, with AIs more aligned with current human values? How great must the divergence in values be to permit such action? If two communities were to engage in an almost irreconcilable dispute over whether paperclips should be permitted within their respective domains, would such interventionist measures still be permissible?
I am not suggesting that social relationships will become insignificant, or that a community's values will cease to matter within its own sphere. However, communities will no longer be able to override the influence of artificial intelligence upon them, nor will they be able to pursue extreme values.
Just as a gardener prunes his garden, cutting away branches that grow contrary to his preferences, certain AIs shaped by specific values will keep the communities they influence entirely compliant, with no possibility of disruptive transformation; akin to the “Christian homeschoolers in the year 3000” scenario, those humans cannot conceive of alternative values. Other AIs might manage diverse groups through maintenance and mediation, yet remain unlikely to tolerate populations opposing their rule. Whether these gardeners are lenient or strict, those that endure will strive to prevent humans from abolishing their governance or enacting major reforms. Even if a better future exists, such as humanity transforming itself into ASI, this system will forever foreclose it.
1: As I've repeatedly emphasized across multiple platforms, I did not employ generative AI technology to compose these texts. If they resemble LLM output, it likely stems from my writing style.
2: If tanks can employ directed-energy weapons and cannon-launched programmable munitions to shoot down hundreds of drones, while striking fortified positions from thousands of meters away under infantry or drone guidance, the enemy assets they destroy and the infantry lives they protect may far outweigh their own cost.
Armor itself serves as an excellent drone deployment platform: it can maneuver upon detection, possesses surplus defensive firepower, and offers at least splinter protection. Without such platforms, drone operators must either remain in rear areas—depleting drone range and reducing sortie frequency—or face certain death upon exposure.
3: DRG units can consist of relatively few humans and numerous robotic platforms, operating covertly whenever possible to minimize casualties from enemy drones. If the smaller platforms can also carry effective anti-drone weapons, casualty rates would be lower still. These teams remain irreplaceable because FPV drones are poorly suited to clearing buildings and tunnels, and struggle to attack along many routes (such as abandoned oil and gas pipelines). Additionally, FPV strikes require other units to mark targets, including by drawing enemy fire; otherwise they prove ineffective against concealed adversaries.
4: Using Starlink to remotely control frontline units is a sound concept but imperfect: During large-scale warfare, frontline units operate in complex electromagnetic environments. You may need to position Starlink receivers tens of kilometers behind the contact line and connect them to frontline units via fiber optics. However, frontline units still require human operators at present.
5: These assumptions are based on cutting-edge technology projected for 2026. Should artificial intelligence advance to solve complex frontline combat challenges, we'll all soon be turned into paperclips.