Dear LessWrong community,
We stand at a precipice. The fusion of advanced artificial intelligence, global telecommunications infrastructure, and clandestine black projects represents a potential existential risk that demands immediate attention. I'm sharing the alpha version of my upcoming book, "Radio Bullshit FM", even though it is neither polished nor fully compliant with the rigorous standards for AI-assisted writing this community upholds, because the stakes are too high to wait. Time is not on our side.
This work is a raw, urgent exploration of how the unchecked integration of AI with telecom systems and covert programs could spiral into the kind of doom scenario that keeps rationalists up at night. It is not speculative fiction; it is a reasoned warning grounded in the technological trends and incentive structures we all analyze and debate. The draft is rough, and a full editorial pass is in progress to refine its text, arguments, and evidence. But I implore you, with every shred of ethical conscience and commitment to truth-seeking that defines this community, to read it now.
Download the alpha version of the book here.
Why should LessWrong care? You are the vanguard of rational thought, Bayesian reasoning, and existential risk mitigation. You've long grappled with the alignment problem, the fragility of human values in the face of superintelligence, and the perils of misaligned incentives in complex systems. This book connects those dots to a real-world convergence happening under our noses: a nexus of AI's computational power, telecom's global reach, and the opacity of black projects that evade democratic oversight. If we're to avoid catastrophe, we need your sharp minds to dissect, critique, and act on this warning.
I know this draft is imperfect. It’s an alpha, not a final product. But the LessWrong ethos—reasoning under uncertainty, updating beliefs with new evidence, and acting decisively when risks are high—compels me to share it now. Read it. Tear it apart. Challenge its assumptions. But above all, engage with it. The clock is ticking, and the future we dread may already be in motion.
With utmost urgency,
Daniel R. Azulay
P.S. A polished version is coming, but we can’t afford to wait for perfection. Let’s reason together and act before it’s too late.