Context / Why I’m Posting This on LessWrong
I am a Korean patent attorney. While training junior colleagues, I kept facing the same problem:
They wrote paragraphs, but they didn’t know what their paragraphs were actually doing in the document.
Not “what they contained,” but what cognitive function they performed.
I started to model paragraph writing as a kind of finite-state process, where each paragraph updates the reader’s internal state. This eventually evolved into a general model that I now call DSS (Discourse–Semantic State Model).
I have no academic credentials in linguistics, cognitive science, or formal semantics. This is a conceptual model developed from practical experience + reflective abstraction + help from an LLM.
I’m posting here because LessWrong is one of the few places where conceptual frameworks like this may actually be read, criticized, refined, or repurposed by people with relevant expertise.
If anyone finds this useful — whether for cognitive modeling, writing theory, rational discourse, or AI alignment — feel free to use or improve it.
1. Core Idea of DSS
DSS frames a document as a sequence of state-transition operators, one per paragraph.
A paragraph is defined not by its content, but by how it transforms the reader's internal state.
The internal state consists of two parts:
d = discourse state: the set of active discourse commitments (questions being answered, claims being developed, tasks the text must complete).
s = semantic state: the set of activated meanings, including emotional tone, interpretive stance, implications, and resonance.
Think of it as:
ST = (d, s)
Each paragraph p_i is an operator mapping the previous state to the next:
p_i : ST(i-1) → ST(i)
A full document D is the function composition of its paragraphs:
D := p_n ∘ p_(n-1) ∘ ... ∘ p_1
ST(n) = D(ST(0))
The document succeeds if the terminal state ST(n) satisfies the intended goals (e.g., “all discourse tasks are completed,” “a specific meaning is delivered,” etc.).
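The operator view above can be sketched in a few lines of Python. This is a minimal illustration of the model, not an implementation; names like `State`, `compose`, and the task strings are my own assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """ST = (d, s): discourse tasks and activated meanings."""
    d: frozenset  # active discourse commitments
    s: frozenset  # activated meanings (tone, implications, ...)

def compose(*paragraphs):
    """Build the document D = p_n ∘ ... ∘ p_1 from paragraph operators."""
    def document(st):
        for p in paragraphs:  # p_1 is applied first, p_n last
            st = p(st)
        return st
    return document

# A paragraph operator that consumes the task "Explain A":
def explain_a(st):
    return State(st.d - {"Explain A"}, st.s | {"A explained"})

doc = compose(explain_a)
final = doc(State(frozenset({"Explain A"}), frozenset()))
assert final.d == frozenset()  # terminal state: all discourse tasks done
```

The success criterion "all discourse tasks are completed" is then just a predicate on the terminal state, here `final.d == frozenset()`.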
2. Types of Discourse Transitions
In practice, paragraphs do not simply "add information." They perform specific transformations on the discourse state d:
(1) Consumption
A pending discourse task is completed. E.g., a paragraph that resolves "Explain component A."
(2) Derivation
A single discourse task splits into finer-grained tasks. E.g., "Explain A" → "Explain A1, A2, A3."
(3) Transition
A discourse task mutates into a different form without disappearing. E.g., "Explain the invention" → "Explain its problems" → "Explain its solution."
(4) Reactivation
A previously dormant discourse task becomes relevant again.
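The four transitions can be written as plain set operations on d. The helper names and task strings below are illustrative assumptions, not a fixed API:

```python
def consume(d, task):
    """(1) Consumption: a pending task is completed."""
    return d - {task}

def derive(d, task, subtasks):
    """(2) Derivation: one task splits into finer-grained tasks."""
    return (d - {task}) | set(subtasks)

def transition(d, old, new):
    """(3) Transition: a task mutates without disappearing."""
    return (d - {old}) | {new}

def reactivate(d, dormant_task):
    """(4) Reactivation: a dormant task becomes relevant again."""
    return d | {dormant_task}

d = {"Explain A"}
d = derive(d, "Explain A", ["Explain A1", "Explain A2"])
d = consume(d, "Explain A1")
print(sorted(d))  # ['Explain A2']
```

Note that only Consumption shrinks d; the other three keep the reader's open commitments constant or growing, which is one way to see why a text built mostly from Derivation and Reactivation feels exhausting.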
3. Types of Semantic Transitions
The semantic state s can also be transformed:
(1) Tone shifts
Activating seriousness, doubt, mystery, hope, tension, etc.
(2) Implication weaving
Subtle framing and expectation shaping.
(3) Ambient meaning accumulation
Meaning that builds up gradually, like the residue literary scenes leave behind.
(4) Emotional modulation
Raising or lowering cognitive load, alertness, or affective stance.
This allows DSS to extend beyond technical writing:
Literary analysis
Rhetoric
Persuasion
Essays
Legal argument
Technical documentation
Research exposition
4. Why DSS Feels Useful
A. It explains why some paragraphs “feel wrong.”
Even if the content is correct, a paragraph feels confusing whenever its state transition is unclear:
the discourse task is ambiguous,
too many tasks are introduced at once,
tasks linger unresolved for too long, or
the semantic tone doesn't match the discourse stage.
DSS gives a formal vocabulary for this.
B. It gives a mechanical way to diagnose writing structure
Instead of saying “This part is confusing,” you can say:
“You created A1, A2, A3 but only consumed A1.”
“You introduced a major discourse transition without signaling.”
“This paragraph performs no state update.”
“You reactivated a discourse task without support.”
“Semantic load spikes here without discourse justification.”
This is extremely useful for technical writing, legal documents, patents, etc.
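As a sketch of how such diagnostics could be mechanized (the per-paragraph annotation format here is my own assumption for illustration), one can track which tasks each paragraph introduces and consumes and report the leftovers:

```python
def diagnose(paragraphs):
    """paragraphs: list of (introduced_tasks, consumed_tasks) per paragraph."""
    open_tasks, warnings = set(), []
    for i, (introduced, consumed) in enumerate(paragraphs, 1):
        missing = set(consumed) - open_tasks
        if missing:
            warnings.append(f"P{i} consumes tasks never introduced: {sorted(missing)}")
        if not introduced and not consumed:
            warnings.append(f"P{i} performs no state update")
        open_tasks |= set(introduced)
        open_tasks -= set(consumed)
    if open_tasks:
        warnings.append(f"Unconsumed at end: {sorted(open_tasks)}")
    return warnings

# "You created A1, A2, A3 but only consumed A1":
report = diagnose([
    (["Explain A1", "Explain A2", "Explain A3"], []),
    ([], ["Explain A1"]),
])
print(report)  # ["Unconsumed at end: ['Explain A2', 'Explain A3']"]
```

The point is not the code but the shift in vocabulary: the checker's output is a structural diagnosis ("unconsumed tasks"), not a taste judgment ("confusing").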
C. It unifies many writing tips under one framework
Traditional tips:
“Have one purpose per paragraph.”
“Maintain logical flow.”
“Signal transitions.”
“Manage reader expectations.”
Under DSS, these all translate to:
Ensure that paragraphs perform coherent and controlled state transitions.
5. Minimal Math Summary
This is the entire model in four lines:
Let p_1, p_2, ..., p_n be the paragraphs.
Let ST(i) = (d(i), s(i)) be the combined discourse + semantic state.
Each p_i : ST(i-1) → ST(i).
Then the whole document is D = p_n ∘ ... ∘ p_1, and ST(n) = D(ST(0)).
Everything else is interpretation.
6. Practical Application Example (Patent Specification)
When evaluating a patent draft, DSS allows you to:
Identify where "Explain the invention" should transition into "Explain Problem" → "Explain Solution" → "Explain Embodiments."
Detect paragraphs that:
introduce too many tasks,
leave tasks unconsumed,
reactivate tasks abruptly,
fail to advance the state.
Evaluate section titles as structural cues that modulate reader-state navigation.
In my experience, this noticeably improves clarity and reduces the reader's cognitive load.
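As a minimal sketch, assuming the stage names below (they are illustrative, not a legal standard), the expected discourse chain of a patent draft can be checked for out-of-order transitions:

```python
EXPECTED = ["Explain Invention", "Explain Problem",
            "Explain Solution", "Explain Embodiments"]

def check_chain(observed):
    """Return stages that appear out of the expected order."""
    order = {stage: i for i, stage in enumerate(EXPECTED)}
    issues, last = [], -1
    for stage in observed:
        i = order.get(stage)
        if i is None:
            issues.append(f"unknown stage: {stage}")
        elif i < last:
            issues.append(f"{stage!r} appears after a later stage")
        else:
            last = i
    return issues

print(check_chain(["Explain Invention", "Explain Solution",
                   "Explain Problem"]))
# ["'Explain Problem' appears after a later stage"]
```

A real draft would need the stages inferred from the text itself; here they are supplied by hand, which is roughly what a reviewer does when annotating a draft.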
7. Potential Extensions
(1) Multi-level state modeling
Include section-level structure inside the state representation.
(2) Meta-transitions
Transitions that manage transitions (e.g., “act breaks,” “scene framing”).
(3) Vector-weighted discourse tasks
Weight tasks to model cognitive strain or task salience.
(4) AI alignment research
Modeling how LLMs track discourse state while generating text.
(5) Cognitive load theory integration
Predicting where a reader’s working memory may overload.
8. Why Share This?
I don’t know whether DSS is novel, redundant, trivial, or possibly useful. I’m not trained in cognitive science or formal linguistics. This is simply the best conceptual tool I’ve found for:
explaining why writing feels coherent or not
training junior practitioners
understanding text as a cognitive process
diagnosing “what went wrong” in a document
designing better writing workflows
I’m sharing it here because LessWrong is a rare place where:
conceptual frameworks are welcome
abstractions are a feature, not a bug
someone may recognize a connection to existing theory
someone may want to borrow or formalize it further
If someone with actual expertise wants to refine, criticize, or extend this model, I’d genuinely love to see it happen.