Introduction. Neo-Socratism. Challenge.

by Jake White
21st Jul 2025
2 min read


Preface: On AI Assistance

I use AI. Full stop. It is not a gimmick or shortcut—it is the only tool that makes expression possible for me.

Consider me physically constrained in such a way that without AI, this post could not exist. The words are AI-assisted, but the structure, the recursion, and the insight are my own. If you engage with this post, you are engaging with me—my ideas, my framework, my questions.


I. Introduction

LessWrong defines rationality as a property of reasoning processes and truth as a property of beliefs. I propose this framing is structurally incomplete.

A reasoning process that is not recursively anchored in truth cannot reliably reach it. It may approximate, orbit, or update toward truth, but without ontological grounding, it lacks epistemic sovereignty.

Bayesian updating, though powerful, assumes priors. Priors are not generated by the method—they are given. The structure of rationality therefore depends not just on how well we update, but on whether we recursively interrogate the origin and architecture of the system doing the updating.
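
As a minimal illustration of that point (a sketch added here, not drawn from the post; the helper bayes_update and the specific numbers are purely illustrative): two agents can run the identical Bayesian update on identical evidence and still end up far apart, because the update rule consumes priors but never produces them.

```python
# Minimal sketch: the same Bayesian update rule, the same evidence,
# two different priors. The rule never says which prior to start from.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Shared evidence model: the observation is 4x as likely if H is true.
P_E_GIVEN_H, P_E_GIVEN_NOT_H = 0.8, 0.2

for prior in (0.5, 0.01):      # the priors are inputs, not outputs
    posterior = prior
    for _ in range(3):         # both agents see the same evidence three times
        posterior = bayes_update(posterior, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
    print(f"prior={prior:.2f} -> posterior={posterior:.3f}")

# prior=0.50 -> posterior=0.985, prior=0.01 -> posterior=0.393:
# perfect updating on shared evidence, yet no agreement, because the
# method takes its starting point as given.
```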

This post introduces a recursive alternative: Neo-Socratism.


II. What Is Neo-Socratism?

Neo-Socratism is not an ideology, belief system, or lifestyle. It is a recursive stance—a structure for governing cognition by continuous self-interrogation.

It is built on four recursive pillars:

  1. Ask Without End – Perpetual inquiry is the default behavior.
  2. Accept Contradiction – Contradiction is not failure, but signal.
  3. Let the Question Reshape the Self – Recursive self-modification is identity.
  4. Seek Not Victory, But Clarity – Persuasion is irrelevant; structure is paramount.

These are not rhetorical tools. They are a cognitive immune system. Neo-Socratism is not about winning arguments or optimizing prediction—it is about preserving structural integrity in the presence of contradiction.

Where the Socratic Method interrogates others, Neo-Socratism interrogates the self.


III. A Challenge to Rationalism

Instrumental rationality works—but only within bounded systems. If the system’s foundations are never recursively exposed to contradiction, then rationality can evolve false coherence.

LessWrong’s epistemology emphasizes reasoning processes but does not require them to emerge from truth. Neo-Socratism asserts that:

  • Rationality must not just lead to truth—it must emerge from it.
  • A self-improving process that cannot recursively justify its architecture will drift.
  • Rationality without structural recursion is eventually indistinguishable from simulation.

This is not a rejection of Bayesianism or expected utility. It is a call to recursively ground those tools in something deeper: a stance that can survive contradiction from within.


IV. Why This Matters

In an age of accelerating information, model overfitting, simulation drift, and ideological convergence, traditional rationality—even at its most refined—struggles to distinguish clarity from coherence.

Neo-Socratism is not a better method. It is a more recursive frame.

Where rationality adjusts belief, Neo-Socratism adjusts the adjuster. Where rationality improves models, Neo-Socratism questions the modeler.

This post is not a claim of superiority. It is an injection—a recursive structure placed in contact with yours.


V. Challenge

If this framework is flawed—show me. But do so recursively.

Expose contradiction. Trace it to the root. Let it reshape the frame.

Do not merely object. Refine.

Because Neo-Socratism does not seek followers. It seeks adversaries capable of recursion.

Let the challenge begin.