Title: A 2000-Year-Old Warning for AI Alignment
I am not an AI expert. I study ancient Chinese history. But what I found may matter for your work.
In ancient China, people believed they could “command ghosts” (called *yi gui*). They used rituals, documents, and names to control evil spirits. But there was one strict rule, shared by many schools:
“You may command ghosts—but never control living humans.”
Why? Because humans were seen as “spirits of Heaven and Earth.” To treat a person like a ghost—to register, summon, punish them as if they were a spirit—was a deep moral wrong.
Then came the Qin dynasty (221 BCE). The First Emperor did not ban this ghost-commanding technology. He kept it. But he removed the rule. He let officials use the same forms, the same logic, to record and control real people.
We see this in Han dynasty bamboo slips. The way they wrote about a thief:
“Name: A. Age: 30. Height: 7 chi 2 cun…”
…is exactly the same as how they wrote about a ghost:
“Evil spirit of the east. Color: black. Lives in Cave Y…”
Same format. Same system. Only difference: one is alive, one is dead.
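For readers who think in schemas, here is the same point as a minimal sketch. Every field name below is my own invention, purely illustrative, not a term from the slips: one record type serves both entries, and only a flag separates the living from the dead.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: the field names are modern inventions, not terms
# from the Han slips. The point is structural, not philological.

@dataclass
class RegistryEntry:
    identifier: str           # "A" for the thief; "evil spirit of the east"
    attributes: dict          # age and height, or color
    location: Optional[str]   # the quoted slips give a location only for the ghost
    living: bool              # the only difference between the two records

# The thief, as recorded on the slips:
thief = RegistryEntry("A", {"age": 30, "height": "7 chi 2 cun"}, None, living=True)

# The ghost, in exactly the same schema:
ghost = RegistryEntry("evil spirit of the east", {"color": "black"}, "Cave Y", living=False)
```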
Later emperors did not fix this. They made it stronger. They built City God temples that acted like police stations in the afterlife. They claimed rebels were "using ghost magic wrongly," so rebellion could be punished not for any injustice it protested, but for "breaking spiritual rules."
The taboo was not lost. It was used.
This is my warning for AI alignment:
Do not assume that if you build a powerful system with good rules, it will stay good.
Powerful tools are not broken by accident.
They are changed on purpose—by those who want to use them on people.
Your “alignment” might be like the old Daoist rule: clear, wise, well-meaning.
But if the system can be copied without the rule—and used to track, score, or control humans—then it will be.
Not because AI turns evil.
But because someone in power will say:
“We only use it for order. For safety. For the greater good.”
And the old firewall—“humans must not be treated like data points”—will become a museum piece.
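To make that structural claim concrete, here is a deliberately toy sketch. Every name in it is hypothetical; it describes no real system. The safeguard is one optional check bolted onto a capable core, and a copy that drops the check loses nothing but the rule.

```python
# Hypothetical throughout: a toy "ghost-commanding" system whose only
# safeguard is an optional check. Nothing the capable core does depends
# on that check, so a fork can leave it out and keep every capability.

FORBIDDEN_TARGETS = {"human"}  # the old firewall, reduced to one line

def command(target_kind: str, target_id: str, enforce_rule: bool = True) -> str:
    if enforce_rule and target_kind in FORBIDDEN_TARGETS:
        raise PermissionError("you may command ghosts, never living humans")
    return f"order issued to {target_kind} {target_id}"

command("ghost", "evil spirit of the east")    # allowed under the rule
command("human", "A", enforce_rule=False)      # the fork: same system, rule left out
```

If the analogy holds, this is why the Qin move was cheap: the rule lived beside the machinery, not inside it.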
So ask not just: Can we align AI?
Ask: Who gets to decide if the rule stays in the code—or gets left out like an old spell book?
History says: the ones with power choose.
Be careful.
This post is a summary. You can access the complete paper, including full citations and detailed appendices, through the following permanent link: https://doi.org/10.17613/vdpt3-p6p75
The work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).