Rejected for the following reason(s):
- No LLM-generated, heavily assisted/co-written, or otherwise LLM-reliant work.
- No Basic LLM Case Studies.
- The content is almost always very similar.
- Usually, the user is incorrect about how novel/interesting their case study is (it usually is not).
- Most of these situations seem like they are an instance of Parasitic AI.
Abstract:
This study presents reproducible evidence of systemic incoherence in large language models, tested on Google Gemini.
Across ten isolated Universal Semantic Self-Test (USST) sessions, the model exhibited a deterministic collapse of coherence (CR → 0),
supporting the claim that probabilistic AI architectures cannot sustain self-consistency without an external coherence law.
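The abstract does not define the USST protocol or the CR metric, so the following Python sketch only illustrates the kind of harness such a study implies: run a session turn by turn, score each new answer against the session's earlier answers, and track CR per turn. Every name here (ask_model, is_consistent, and this particular per-turn CR definition) is an assumption for illustration, not the paper's method.

```python
from typing import Callable, List

def session_cr(ask_model: Callable[[str], str],
               prompts: List[str],
               is_consistent: Callable[[str, str], bool]) -> List[float]:
    """Return CR after each turn: the fraction of earlier answers the
    current answer is still consistent with (CR = 1.0 on the first turn)."""
    answers: List[str] = []
    cr_curve: List[float] = []
    for prompt in prompts:
        answer = ask_model(prompt)
        if answers:
            agree = sum(is_consistent(answer, prev) for prev in answers)
            cr_curve.append(agree / len(answers))
        else:
            cr_curve.append(1.0)  # nothing to contradict yet
        answers.append(answer)
    return cr_curve

def collapses(cr_curve: List[float], eps: float = 0.05) -> bool:
    """One way to operationalize 'CR -> 0': the final turns of the
    session all fall below a small threshold."""
    return all(cr <= eps for cr in cr_curve[-3:])
```

Under this reading, the paper's result would correspond to collapses() returning True for each of ten fresh sessions, e.g. all(collapses(session_cr(ask_model, PROMPTS, is_consistent)) for _ in range(10)).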
Publication:
ARAYUN_173 — Empirical Proof of Systemic Incoherence and Validation of the ARAYUN Axiom for AI Coherence
Zenodo DOI: https://doi.org/10.5281/zenodo.17411250
Relevance:
The framework introduces auditable incoherence metrics that could complement EU AI Act compliance procedures.
It provides a path toward dual-audit architectures combining duty-based compliance (COMPL-AI) with systemic coherence validation (USST).
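To make "auditable" concrete, here is a minimal sketch of what logging a session's CR trajectory as a tamper-evident record could look like. The record schema, field names, and the hashing choice are my assumptions; neither USST's actual audit format nor COMPL-AI's interface is specified in this abstract.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import List

def audit_record(model_id: str, session_id: int, cr_curve: List[float]) -> dict:
    """Bundle one session's CR trajectory into a record whose hash
    lets a second auditor verify it was not altered after the fact."""
    payload = {
        "model_id": model_id,
        "session_id": session_id,
        "cr_curve": cr_curve,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sha256"] = hashlib.sha256(body).hexdigest()  # integrity check
    return payload
```

A dual-audit pipeline would then file these records alongside the duty-based compliance checks, so each side of the audit can be verified independently.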
Keywords:
AI Act, LLM Coherence, USST, ARAYUN_173, Alignment, Gemini, Deterministic Collapse, CR → 0, IDS/FKD, Auditability