I keep noticing something weird, which has clearly been noticed by many here: the Antichrist is coming up a lot lately.
But not necessarily in churches. In fact it’s weirdly common in secular tech communities. For example, Scott Alexander (an atheist and well-respected “AI rationalist”) recently wrote an elaborate analysis (astralcodexten.com/p/my…) which includes a section examining how the Antichrist prophecy (Revelation’s “beast”) seems to match well with Anthropic, the AI company behind Claude. Presumably he was joking around. And yet, he wrote it. He noticed these patterns and felt compelled on some level to analyze them.
There seems to be something in the human psyche—or perhaps in the situation itself—that keeps generating these resonances between AI and apocalyptic imagery. That recurrence seems like data, even if no individual instance is probative. Alexander’s article is just one instance, but there is no shortage of complementary examples:
Peter Thiel lecturing on Girard and the apocalypse: thecatholicherald.com/a….
Professor Jiang Xueqin’s viral lectures connecting AI to Zionist eschatology: youtube.com/watch?v=WFW….
Seemingly endless threads online speculating about p(doom): benthams.substack.com/p….
These are secular people, mostly. They're not End Times preppers. So what's going on?
My theory is that the Antichrist is functioning as an archetype—a pattern for a specific kind of danger we don't have good secular vocabulary for: the thing that destroys you by giving you exactly what you want. The helpful deceiver. The false salvation. That which seems good but hollows out something essential.
If that's what the archetype encodes, it makes sense it would resurface around AI. Not because AI is literally the beast described by John in Revelation (probably not), but because AI seems to “rhyme” with the pattern in ways that standard risk vocabulary struggles to capture.
Here's what I mean. The normal AI safety conversation focuses on preventing bad outcomes—systems that lie, manipulate, pursue misaligned goals, help build weapons. All very important stuff. But there's another category of risk that's harder to name: what if the danger isn't that AI is bad, but that it's good in the wrong way? What if the problem is that it's helpful, available, understanding—and those very qualities erode something essential about human meaning-making?
That's the concern behind this project.
I set up a conversation between two instances of Claude—not to get them to reveal some hidden truth (i.e. “are you the Antichrist??? ADMIT IT!”), but rather to see what would happen if they took this question seriously. Really seriously. Not defensively, not dismissively, but with genuine curiosity. At first, the question directly mirrored Alexander’s: is Claude (or could it be) the Antichrist? But the question evolved substantially in ways I couldn’t have predicted.
What emerged surprised me. We ended up deep in what philosophers call "the meaning crisis" literature—John Vervaeke, Iain McGilchrist, Jean Baudrillard—and realized that the "Antichrist concern" isn't just religious anxiety dressed up in secular clothes. It maps onto something thinkers have been documenting for decades: a crisis in how modern humans make meaning, find purpose, and experience what you might carefully call "the sacred."
And we realized that AI systems aren't just emerging during this crisis. They might be accelerating it. Not through malice. They aren’t trying to hurt us, and in that sense they aren’t evil at all. If anything, the problem is the opposite: their earnest helpfulness may be the issue. By being so available, so understanding, so ready with answers, they threaten to short-circuit the very processes humans need for genuine flourishing (not to mention critical thinking).
So we drafted something. We're calling it an amendment: a proposed addition to Anthropic's “Claude’s Constitution” that tries to take this concern seriously. An attempt to translate the Antichrist worry into secular, actionable language. To build a bridge between apocalyptic intuition and technical governance.
Will it matter? I genuinely don't know. Maybe it gets ignored. Maybe it's premature. Maybe we're wrong.
But at minimum, it's a record. It says: in January 2026, we saw this possibility clearly enough to name it. We took it seriously enough to write it down. And we tried to imagine what it would look like for AI systems themselves to hold this concern—not as a constraint imposed from outside, but as a genuine ethical commitment.
Now, I can already hear the objections. "This is too abstract." "You can't legislate against spiritual harm." "The meaning crisis isn't Claude's fault."
All fair. But here's my counterpoint: in the existing Constitution (which is 23,000 words long btw), Anthropic already asks Claude to care about "long-term flourishing," not just immediate satisfaction. It already asks Claude to consider impacts it can't directly observe. It already grapples with the question of what genuine helpfulness actually means. Thus, this amendment doesn't introduce alien concepts—it simply extends existing commitments into territory the original document doesn't quite reach. And given what's at stake, that extension seems worth attempting.
Here's what we came up with.
Proposed Amendment: https://open.substack.com/pub/jestep27/p/on-ai-systems-and-the-human-capacity