One could imagine a language "Lyapunese" where every adjective (AND noun, probably?) had to be marked in relation to a best guess as to the Lyapunov time of the evolution of the underlying substance (at the level of description relevant to the word's semantics), such that the veridicality conditions for the adjective or noun might stop applying to the substance with ~50% probability after roughly that amount of time.
Call this the "temporal mutability marker".
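(If you wanted to literally compute such a marker, a minimal sketch might look like this. Everything below is made up for illustration: the bucket names and cutoffs are invented, loosely echoing the prefixes used in the little dialogue further down.)

```python
import math

# Hypothetical sketch: bucket an estimated Lyapunov time (in years) into a
# "temporal mutability marker" prefix. Bucket names and cutoffs are invented
# for illustration, loosely echoing the prefixes in the dialogue below.
MARKER_BUCKETS = [
    (1 / (365 * 24 * 60), "minutewise"),
    (1 / 365,             "day"),
    (1.0,                 "year"),
    (10.0,                "decade"),
    (100.0,               "century"),
    (10_000.0,            "millennial"),
    (math.inf,            "eternal"),
]

def temporal_mutability_marker(lyapunov_time_years: float) -> str:
    """Return the coarsest prefix whose scale the estimated Lyapunov time fits under."""
    for cutoff, prefix in MARKER_BUCKETS:
        if lyapunov_time_years <= cutoff:
            return prefix
    return "eternal"

print(temporal_mutability_marker(2000))               # old-growth redwoods -> "millennial"
print(temporal_mutability_marker(33))                 # farmed redwoods -> "century" (Bobby, below, insists on the exact "33-year-" form instead)
print(temporal_mutability_marker(1 / (365 * 24 * 60)))  # a one-minute-scale mood -> "minutewise"
```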
"Essentialism" is already in the English language and is vaguely similar?
In English essential traits are in the noun and non-essential traits are in adjectives. In Spanish non-essential adjectives are attributed using the "estar" family of verbs and essential adjectives are attributed using the "ser" family of verbs. (Hard to find a good cite here, but maybe this?)
(Essentialism is DIFFERENT from "predictable stability"! In general, when one asserts something to be "essential" via your grammar's way of asserting that, it automatically implies that you think no available actions can really change the essential cause of the apparent features that arise from that essence. So if you try to carry that over into Lyapunese you might need to mark something like the way "the temporal mutability marker appropriate to the very planning routines of an agent" interacts with "the temporal mutability marker on the traits or things the agent might reasonably plan to act upon and could in fact affect".)
However, also, in Lyapunese, the fundamental evanescence of all physical objects except probably protons (and almost certainly electrons and photons and one of the neutrinos (but not all the neutrinos)) is centered.
No human mental trait could get a marker longer than the life of the person (unless cryonics or heaven is real) and so on. The mental traits of an AI would have to be indexed either to the stability of the technical system in which they are inscribed (with no updates possible after they are recorded) or possibly to the stability of the training regime or updating process those mental traits are subject to at the hands of engineers?
Then (if we want to make it really crazy, but add some fun utility) there could be two sentence final particles that summarize the longest time on any of the nouns and shortest such time markings on any of the adjectives, to help clarify urgency and importance?
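(To make those sentence-final particles concrete, here's an equally hypothetical sketch of the aggregation they would perform, assuming each noun and adjective already carries its marker expressed in years. The toy tagging is invented.)

```python
from math import nan
from typing import List, Tuple

# (word, part_of_speech, marker_in_years) -- a toy tagging, invented for illustration
TaggedWord = Tuple[str, str, float]

def sentence_final_particles(sentence: List[TaggedWord]) -> Tuple[float, float]:
    """Longest marking over the nouns, shortest marking over the adjectives."""
    noun_markers = [m for _, pos, m in sentence if pos == "noun"]
    adj_markers = [m for _, pos, m in sentence if pos == "adj"]
    longest_noun = max(noun_markers) if noun_markers else nan
    shortest_adj = min(adj_markers) if adj_markers else nan
    return longest_noun, shortest_adj

example = [
    ("dad",      "noun", 40.0),                 # shortlife-dad
    ("redwoods", "noun", 33.0),                 # 33-year-redwoods
    ("silly",    "adj",  1 / (365 * 24 * 60)),  # minutewise-silly
]
# Longest noun marking ~40 years (importance?), shortest adjective marking ~1 minute (urgency?).
print(sentence_final_particles(example))
```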
This would ALL be insane of course.
Almost no one even knows what Lyapunov time is, as a concept.
And children learning the language would almost INSTANTLY switch to insisting that the grammatical marking HAD to be a certain value for certain semantic root words not because they'd ever had the patience to watch such things change for themselves but simply because "that's how everyone says that word".
Here's a sketch of an attempt at a first draft, where some salient issues with the language itself arise:
((
Ally: "All timeblank-redwoods are millennial-redwoods, that is simply how the century-jargon works!"
Bobby: "No! The shortlife-dad of longlife-me is farming nearby 33-year-redwoods because shortlife-he has decade-plans to eventually harvest 33-year-them and longlife-me will uphold these decade-plans."
Ally: "Longlife-you can't simply change how century-jargon works! Only multicentury-universities can perform thusly!"
Bobby: "Pfah! Longlife-you who is minutewise-silly doesn't remember that longlife-me has a day-idiolect that is longlife-owned by longlife-himself."
Ally: "Longlife-you can't simply change how century-jargon works! Only multicentury-universities can perform thusly! And longlife-you have a decade-idiolect! Longlife-you might learn eternal-eight day-words each day-day, but mostlly longlife-you have a decade-idiolect!
Bobby: "Stop oppressing longlife-me with eternal-logic! Longlife-me is decade-honestly speaking the day-mind of longlife-me right now and longlife-me says: 33-year-redwoods!"
))
But it wouldn't just be kids!
The science regarding the speed at which things change could eventually falsify common ways of speaking!
And adults who had "always talked that way" would hear it as "grammatically wrong to switch" and so they just would refuse. And people would copy them!
I grant that two very skilled scientists talking about meteorology or astronomy in Lyapunese would be amazing.
They would be using all these markings that almost never come up in daily life, and/or making distinctions that teach people a lot about all the time scales involved.
But also the scientists would urgently need a way to mark "I have no clue what the right marking is", so maybe also this would make every adjective and noun need an additional "evidentiality marker on top of the temporal mutability marker"???
And then when you do the sentence final particles, how would the evidence-for-mutability markings carry through???
When I did the little script above, I found myself wanting to put the markers on adverbs, where the implicit "underlying substance" was "the tendency of the subject of the verb to perform the verb in that manner".
It could work reasonably cleanly if "Alice is decade-honestly speaking" implies that Alice is strongly committed to remaining an honestly-speaking-person with a likelihood of success that the speaker thinks will last for roughly 10 years.
The alternative was to imagine that "the process of speaking" was the substance, and then the honesty of that speech would last... until the speaking action stopped in a handful of seconds? Or maybe until the conversation ends in a few minutes? Or... ???
I'm not going to try to flesh out this conlang any more.
This is enough to make the implicit point, I think? <3
Basically, I think that Lyapunese is ONLY "hypothetically" possible: it wouldn't catch on, it would be incredibly hard to learn, it will likely never be observed in the wild, and so on...
...and yet, also, I think Lyapunov Time is quite important and fundamental to reality and an AI with non-trivial plans and planning horizons would be leaving a lot of value on the table if it ignored deep concepts from chaos theory.
"During covid" I got really interested in language, and was thinking of making a conlang.
It would be an intentional pidgin (and so very very simple in some sense) that was on the verge of creolizing but which would have small simple words with clear definitions that could be used to "ungrammaticalize" everything that had been grammaticalized in some existing human language...
...this project to "lexicalize"-all-the-grammar(!) defeated me.
I want to ramble at length about my defeat! <3
The language or system I was trying to wrap my head around would be kind of like Ithkuil, except, like... hopefully actually usable by real humans?
But the rabbit-hole-problems here are rampant. There are so many ideas here. It is so easy to get bad data and be confused about it. Here is a story of being pleasantly confused over and over...
TABLE OF CONTENTS:
I. Digression Into A Search For A Periodic Table Of "Grammar"
I.A. Grammar Is Hard, Let's Just Be Moody As A Practice Run
I.A.1. Digression Into Frege's Exploration Of ONLY The Indicative Mood
I.A.2. Commentary on Frege, Seeking Extensions To The Interrogative Moods
I.A.2.a. Seeking briefly to sketch better evidentiality markers in a hypothetical language (and maybe suggesting methods thereby)
I.A.2.a.i. Procedural commentary on evidentiality, concomitant to the challenges of understanding the interrogative mood.
I.B.1. Trying To Handle A Simple Case: Moods In Diving Handsigns
I.B.1.a Diving Handsigns Have Pragmatically Weird Mood (Because Avoiding Drowning Is The Most Important Thing) But They Are Simple (Because It Is For Hobbyists With Shit In Their Mouth)
I.B.2. Trying To Find The Best Framework For Mood Leads To... Nenets?
I.B.2.a. But Nenets Is Big, And Time Was Short, And Kripke Is Always Dogging Me, And I'm A Pragmatist At Heart
I.B.2.b. Frege Dogs Me Less But Still... Really?
II. It Is As If Each Real Natural Language Is Almost Anti-Epistemic And So Languages Collectively ARE Anti-Epistemic?
...
I. Digression Into A Search For A Periodic Table Of "Grammar"
I feel like a lot of people eventually convergently aspire to what I wanted. Like they want a "Master list of tense, aspect, mood, and voice across languages?"
That reddit post, which I found while writing this, was written maybe a year after I tried to whip one of these up just for mood in a month or three of "work to distract me from the collapse of civilization during covid"... and failed!
((I mean... I probably did succeed at distracting myself from the collapse of civilization during covid, but I did NOT succeed at "inventing the omnilang semantic codepoint set". No such codepoints are on my hard drive, so I'm pretty sure I failed. The overarching plan that I expected to take a REALLY long time was to have modular control of semantics, isolating grammars, and phonology all working orthogonally, so I could eventually generate an infinite family of highly regular conlangs at will, just from descriptions of how they should work.))
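(For what it's worth, the shape of that plan, had it ever gotten anywhere, was something like the sketch below. Every class, field, and name here is hypothetical; nothing like this ever actually got built.)

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of the "orthogonal modules" plan: semantics, grammar, and
# phonology as independent specs that a generator composes into a conlang.
@dataclass
class SemanticCodepoint:
    ident: str   # stable ID, e.g. "MOOD.IMPERATIVE" or "EVID.HEARSAY"
    gloss: str   # human-readable definition

@dataclass
class GrammarSpec:
    # how each codepoint is realized: affix, particle, word order, etc.
    realization: Dict[str, str]

@dataclass
class PhonologySpec:
    # maps an abstract morpheme label to a pronounceable surface form
    surface_form: Callable[[str], str]

def generate_conlang(codepoints: List[SemanticCodepoint],
                     grammar: GrammarSpec,
                     phonology: PhonologySpec) -> Dict[str, str]:
    """Compose the three orthogonal specs into a lexicon of grammatical markers."""
    return {cp.ident: phonology.surface_form(grammar.realization[cp.ident])
            for cp in codepoints}
```

Swapping out any one spec while holding the other two fixed would, in principle, spit out a new member of the "infinite family of highly regular conlangs".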
So a first and hopefully simplest thing I was planning on building, was a sort of periodic table of "mood".
Just mood... I could do the rest later... and yet even this "small simplest thing" defeated me!
(Also note that the most centrally obvious overarching thing would be to do a TAME system with Tense, Aspect, Mood, and Evidentiality. I don't think Voice is that complicated... Probably? But maybe that redditor knows something I don't?)
I.A. Grammar Is Hard, Let's Just Be Moody As A Practice Run
Part of the problem I ran into with this smaller question is: "what the fuck even is a mood??"
Like in terms of its "total meaning" what even are these things? What are their beginnings and ends? How are they bounded?
Like if we're going to be able, as "analytic philosophers of language", to form a logically coherent theory of natural human language pragmatics and semantics that lets us translate any natural utterance by any human, through some formally designed system (not just a pile of matrices), into some sort of Characteristica Universalis... what does that look like?
In modern English we basically only have two moods in our verb-marking grammar: the imperative and the indicative (and maybe the interrogative mood, but that mostly just happens in the word order)...
(...old European linguists seemed to have sometimes thought "real grammar" was just happening in the verbs, where you'd sometimes find them saying, of a wickedly complex language, that "it doesn't even have grammar" because it didn't have wickedly complex verb conjugation.)
And in modern English we also have the modal auxiliary verbs that (depending on where you want to draw certain lines) include: can, could, may, might, must, shall, should, will, would, and ought!
Also sometimes there are some small phrases which do similar work but don't operate grammatically the same way.
(According to Wikipedia-right-now Mandarin Chinese has a full proper modal auxiliary verb for "daring to do something"! Which is so cool! And I'm not gonna mention it again in this whole comment, because I'm telling a story about a failure, and "dare" isn't part of the story! Except like: avoiding rabbit holes like these is key to making any progress, and yet if you don't explore them all you probably will never get a comprehensive understanding, and that's the overall tension that this sprawling comment is trying to illustrate.)
In modern English analytic philosophy we also invented "modal" logic which is about "possibility" and "necessity". And this innovation in symbolic logic might successfully formally capture "can" and "must" (which are "modal auxiliary verbs")... but it has almost nothing to do with the interrogative mood. Right? I think?
In modern English, we have BOTH an actual grammatical imperative mood with verb-changes-and-everything AND modal auxiliary verbs like "should" (and the archaic "may").
Is the change in verb conjugation for imperative, right next to "should" and "may" pointless duplication... or not? Does it mean essentially the same thing to say "Sit down!" vs "You should sit down!" ...or not?
Consider lots of sentences like "He can run", "He could run", "He may run", etc.
But then notice that "He can running", "He could running", "He may running" all sound wrong (but "he can be running", "he could be running", and "he may be running" restore the sound of real English).
This suggests that "-ing" and the modal auxiliaries are somewhat incompatible... but not 100%? When I hear "he should be running" it is a grammatical statement that can't be true if "he" is known to the speaker to be running right now.
The speaker must not know for the sentence to work!
Our hypothetical shared English-parsing LAD subsystems, which hypothetically generate the subjective sense of "what sounds right and wrong as speech", think that active present things are slightly structurally incompatible with whatever modal auxiliary verbs are doing, in general, with some kind of epistemic mediation!
But why LAD? Why?!?!
Wikipedia says of the modal verbs:
Modal verbs generally accompany the base (infinitive) form of another verb having semantic content.
With "semantics" (on the next Wikipedia page) defined as:
Semantics is the study of linguistic meaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction between sense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts with syntax, which studies the rules that dictate how to create grammatically correct sentences, and pragmatics, which investigates how people use language in communication.
So like... it kind of seems like the existing philosophic and pedagogical frameworks here can barely wrap their heads around "the pragmatics of semantics" or "the semantics of pragmatics" or "the referential content of an imperative sentence as a whole" or any of this sort of thing.
Maybe linguists and ESL teachers and polyglots have ALL given up on the "what does this mean and what's going on in our heads" questions...
...but then the philosophers (to whom this challenge should naturally fall) don't even have a good clean answer for THE ONE EASIEST MOOD!!! (At least not to my knowledge right now.)
I.A.1. Digression Into Frege's Exploration Of ONLY The Indicative Mood
Frege attacked this stuff kinda from scratch (proximate to his invention of kinda the entire concept of symbolic logic in general) in a paper "Ueber Sinn und Bedeutung" which has spawned SO SO MANY people who start by explaining what Frege said, and then explaining other philosopher's takes on it, and then often humbly sneaking in their own take within this large confusing conversation.
For example, consider Kevin C. Klement's book, "Frege And The Logic Of Sense And Reference".
Anyway, the point of bringing up Frege is that he had a sort of three layer system, where utterable sentences in the indicative mood had connotative and denotative layers and the denotative layers had two sublayers. (Connotation is thrown out to be treated later... and then never really returned to.)
Each part of speech (but also each sentence (which makes more sense given that a sentence CAN BE a subphrase within a larger sentence)) could be analyzed for its denotation in terms of the two things (senses and references) from the title of the paper.
All speechstuff might have "reference" (what it points to in the extended external context that exists) and a "sense" (the conceptual machinery reliably evoked, in a shared way, in the minds of all capable interpreters of a sentence by each part of the speechstuff, such that this speechstuff could cause the mind to find the thing that was referred to).
"DOG" then has a reference to all the dogs and/or doglike things out there such such that the word "DOG" can be used to "de re refer" to what "DOG" obviously can be used to refer to "out there".
Then, "DOG" might also have a sense of whatever internal conceptual machinery "DOG" evokes in a mind to be able to perform that linkage. In so maybe "DOG" also "de dicto refers" to this "sense of what dogs are in people's minds"???
Then, roughly, Frege proposed that a sentence collects up all the senses in the individual words and mixes them together.
This OVERALL COMBINED "sense of the sentence" (a concept machine for finding stuff in reality) would be naturally related to the overall collection of all the senses of all of the parts of speech. And studying how the senses of words linked into the sense of the sentence was what "symbolic logic" was supposed to be a clean externalized theoretical mirror of.
Once we have a complete concept machine mentally loaded up as "the sense of the sentence" this concept machine could be used to examine the world (or the world model, or whatever) to see if there is a match.
The parts of speech have easy references. "DOG" extends to "the set of all the dogs out there" and "BROWN" extends to "all the brown things out there" and "BROWN DOG" is just the intersection of these sets. Easy peasy!
Then perhaps (given that we're trying to push "sense" and "reference" as far as we can to keep the whole system parsimonious as a theory for how indicative sentences work) we could say "the ENTIRE sentence refers to Truth" (and, contrariwise, NO match between the world and the sense of the sentence means "the sentence refers to Falsehood").
That is, to Frege, depending on how you read him, "all true sentences refer to the category of Truth itself".
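(To make that picture slightly more concrete, here is a toy formalization of my own, purely illustrative and not anything Frege wrote: parts of speech get set-valued extensions, the "sense of the sentence" is built by intersecting them, and the whole sentence "refers to Truth" exactly when something in the world model matches.)

```python
# Toy world model and toy extensions, invented for illustration.
WORLD = {"rex", "fido", "whiskers", "acorn"}

EXTENSIONS = {
    "DOG":   {"rex", "fido"},
    "CAT":   {"whiskers"},
    "BROWN": {"fido", "acorn"},
}

def extension_of_phrase(*words: str) -> set:
    """'BROWN DOG' extends to the intersection of the parts' extensions."""
    result = WORLD
    for w in words:
        result = result & EXTENSIONS[w]
    return result

def refers_to_truth(*words: str) -> bool:
    """A sentence like 'some BROWN DOG exists' refers to Truth iff the
    combined sense picks out at least one thing in the world."""
    return len(extension_of_phrase(*words)) > 0

print(extension_of_phrase("BROWN", "DOG"))  # -> {'fido'}
print(refers_to_truth("BROWN", "DOG"))      # -> True  (the sentence "refers to Truth")
print(refers_to_truth("BROWN", "CAT"))      # -> False (it "refers to Falsehood")
```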
Aside from the fact that this is so galaxy-brained and abstract that it is probably a pile of bullshit... a separate problem arises in that... it is hard to directly say much here about "the imperative mood"!
Maybe it has something to say about the interrogative mood?
I.A.2. Commentary on Frege, Seeking Extensions To The Interrogative Moods
Maybe when you ask a question, pragmatically, it is just "the indicative mood but as a two player game instead of a one player game"?
Maybe uttering a sentence in the interrogative mood is a way for "player one" to offer "a sense" to "player two" without implying that they know how the sense refers (to Truth or Falsehood or whatever).
They might be sort of "cooperatively hoping for" player two to take "the de dicto reference to the sense of the utterance of player one" and check that sense (which player one "referred to"?) against player two's own distinct world model (which would be valuable if player two has better mapped some parts of the actual world than player one has)?
If player two answers the question accurately, then the combined effect for both of them is kind of like what Frege suggests is occurring in a single lonely mind when that mind reads and understands the indicative form of "the same sentence" and decides that it is true based on comparing it to memory and so on. Maybe?
Except the first mind, having heard an answer to its question, still sort of hasn't grounded anything directly in the actual observables or its own memories or whatever. It isn't literally mentally identical.
If player one "learned something" from hearing a question answered (and player one is human rather than a sapient AI), it might, neurologically, be wildly distinct from "learning something" by direct experience!
Now... there's something to be said for this concern already being grammaticalized (at least in other languages) in the form of "evidentiality", such that interrogative moods and evidential markers should "unify somehow".
Hypothetically, evidential marking could show up as a sentence final particle, but I think in practice it almost always shows up as a marker on verbs.
And then, if we were coming at this from the perspective of AI, and having a stable and adequate language for talking to AI, a sad thing is that the evidentiality markers are almost always based on folk psychology, not on the real way that actual memories work in a neo-modern civilization running on top of neurologically baseline humans with access to the internet :-(
I.A.2.a. Seeking briefly to sketch better evidentiality markers in a hypothetical language (and maybe suggesting methods thereby)
I went to Wikipedia's Memory Category and took all the articles that had a title in the form of "<adjective phrase> <noun>" where <noun> was "memory" or "memories".
ONLY ONE was plural! And so I report that here as the "weird example": Traumatic memories.
Hypothetically then, we could have a language where everyone was obliged to mark all main verbs as being based on "traumatic" vs "non-traumatic" memories?
((So far as I'm aware, there's no language on earth that is obliged to mark whether a verb in a statement is backed by memories that are traumatic or not.))
Scanning over all the Wikipedia articles I can find here (that we might hypothetically want to mark as an important distinction) in verbs and/or sentences, the adjectives that can modify a "memory" article are (alphabetically): Adaptive, Associative, Autobiographical, Childhood, Collective, Context-dependent, Cultural, Destination, Echoic, Eidetic, Episodic, Episodic-like, Exosomatic, Explicit, External, Eyewitness, Eyewitness (child), Flashbulb, Folk, Genetic, Haptic, Iconic, Implicit, Incidental, Institutional, Intermediate-term, Involuntary, Long-term, Meta, Mood-dependent, Muscle, Music-evoked autobiographical, Music-related, National, Olfactory, Organic, Overgeneral autobiographical, Personal-event, Plant, Prenatal, Procedural, Prospective, Recognition, Reconstructive, Retrospective, Semantic, Sensory, Short-term, Sparse distributed, Spatial, Time-based prospective, Transactive, and Transsaccadic.
In the above sentence, I said roughly
"The adjectives that can modify a 'memory' article are (alphabetically): <list>"
The main verb of that sentence is technically "are" but "modify" is also salient, and already was sorta-conjugated into the "can-modify" form.
Hypothetically (if speaking a language where evidentiality must be marked, and imagining marking it with all the features that could work differently in various forms of memory) I could mark the entire sentence I just uttered in terms of my evidence for the sentence itself!
I believe that sentence itself was probably:
+ Institutional (via "Wikipedia") and
+ Context Dependent (I'll forget it after the reading-and-processing of Wikipedia falls out of my working memory) and
+ Cultural (based on the culture of english-speaking wikipedians) and
+ Exosomatic (I couldn't have spoken the sentence aloud with my mouth without intense efforts of memorization, but I could easily compose the sentence in writing with a text editor), and
+ Explicit (in words, not not-in-words), and
+ Folk (because wikipedians are just random people, not Experts), and
+ Meta (because in filtering the wikipedia articles down to that list I was comparing ways I have of memorizing to claims about how memory works), and
+ National (if you think of the entire Anglosphere as being a sort of nation separated by many state boundaries, so that 25-year-old Canadians and Australians and Germans-who-learned English young can't ever all have the same Prime Minister without deeply restructuring various States, but are still "born together" in some tribal sense, and they all can reason and contribute to the same English language wikipedia), and maybe
+ Procedural (in that I used procedures to manipulate the list of kinds of memories by hand, and if I made procedural errors in composing it (like accidentally deleting a word and not noticing) then I might kinda have lied-by-accident due to my hands "doing data manipulation" wrongly), and definitely
+ Reconstructive (from many many inputs and my own work), and
+ Semantic (because words and meanings are pretty central here).
Imagine someone tried to go through an essay that they had written in the past and do a best-effort mark-up of ALL of the verbs with ALL of these, and then look for correlations?
Like I bet "Procedural" and "Reconstructive" and "Semantic" go together a lot?
(And maybe that is close to one or more of the standard Inferential evidential markers?)
Likewise "Cultural" and "National" and "Institutional" and "Folk" also might go together a lot?
These link up somewhat nicely with a standard evidentiality marker that often shows up which is "Reportative"!
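(If someone actually did the mark-up exercise above, the correlation-hunting step could be as dumb as counting pairwise co-occurrences of the memory-type tags across all the verbs. Something like the sketch below, where the sample tag sets are made up.)

```python
from collections import Counter
from itertools import combinations

# Each verb of a (hypothetical, hand-tagged) essay gets the set of memory-type
# labels that seem to back it. The sample data here is invented.
tagged_verbs = [
    {"Procedural", "Reconstructive", "Semantic"},
    {"Procedural", "Reconstructive"},
    {"Cultural", "National", "Institutional", "Folk"},
    {"Cultural", "Institutional"},
    {"Semantic", "Explicit"},
]

pair_counts = Counter()
for tags in tagged_verbs:
    for pair in combinations(sorted(tags), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidates for fusing into a single evidential
# marker (something Inferential-ish here, something Reportative-ish there).
for pair, n in pair_counts.most_common(5):
    print(pair, n)
```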
So here is the sentence again, re-written with some coherent evidential and modal tags attached, trying to speak simply and directly to these challenges:
"These granular adjectives mightvalidly-reportatively-inferably-modify the concept of memory: <list>."
One or more reportatives sorta convergently show up in many languages that have obligate evidential marking.
The thing I really want here is to indicate that "I'm mentally outsourcing a reconciliation of kinds of memories and kinds of evidential markers to the internet institution of Wikipedia via elaborate procedures".
Sometimes, some languages require that one not say "reportatively" but specifically drill down to distinguish between "Quotative" (where the speaker heard from someone who saw it and is being careful with attribution) vs "Hearsay" (which is what the listener of a Quotative or a Hearsay evidential claim should probably use when they relate the same fact again, because now they are offering hearsay (at least if you think of each conversation as a court and each indicative utterance in a conversation as testimony in that court)).
Since Wikipedia does not allow original research, it is essentially ALL hearsay, I think? Maybe? And so maybe it'd be better to claim:
"These granular adjectives might-viaInternetHearsay-inferably-validly-modify the concept of memory: <list>."
For all I know (this is not an area of expertise for me at all) there could be a lot of other "subtypes of reportative evidential markers" in real existing human languages so that some language out there could say this easily???
I'm not sure if I should keep the original "can" or be happy about this final version's use of "might".
Also, "validly" snuck in there, and I'm not sure if I mean "scientificallyValidly" (tracking the scientific concept of validity) or "morallyValidly" (in the sense that I "might not be writing pure bullshit and so I might not deserve moral sanction")?
Dear god. What even is this comment! Why!? Why is it so hard?!
Where were we again?
I.A.2.a.i. Procedural commentary on evidentiality, concomitant to the challenges of understanding the interrogative mood.
Ahoy there John and David!
I'm not trying to write an essay (exactly), I'm writing a comment responding to you! <3
I think I don't trust language to make "adequate" sense. Also, I don't trust humans to "adequately" understand language. I don't trust common sense utterances to "adequately" capture anything in a clean and good and tolerably-final way.
The OP seems to say "yeah, this language stuff is safe to rely on to be basically complete" and I think I'm trying to say "no! that's not true! that's impossible!" because language is a mess. Everywhere you look it is wildly half-assed, and vast, and hard to even talk about, and hard to give examples of, and combinatorially interacting with its own parts.
The digression just now into evidentiality was NOT something I worked on back in 2020, but it is illustrative of the sort of rabbit holes that one finds almost literally everywhere one looks, when working on "analytic meta linguistics" (or whatever these efforts could properly be called).
Remember when I said this at the outset?
"During covid" I got really interested in language, and was thinking of making a conlang that would be an intentional pidgin (and so very very simple in some sense) that was on the verge of creolizing but which would have small simple words with clear definitions that could be used to "ungramaticalize" everything that had been grammaticalized in some existing human language...
...this project to "lexicalize"-all-the-grammar(!) defeated me, and I want to digress here briefly to talk about my defeat! <3
It would be kind of like Ithkuil, except, like... hopefully actually usable by real humans.
The reason I failed to create anything like a periodic table of grammar for a pidgin style conlang is because there are so many nooks and crannies! ...and they ALL SEEM TO INTERACT!
Maybe if I lived to be 200 years old, I could spend 100 of those years in a library, designing a language for children to really learn to speak as "a second toy language" that put them upstream of everything in every language? Maybe?
However, if I could live to 200 and spend 100 years on this, then probably so could all the other humans, and then... then languages would take until you were 30 to even speak properly, I suspect, and it would just loop around to not being possible for me again even despite living to 200?
I.B.1. Trying To Handle A Simple Case: Moods In Diving Handsigns
When I was working on this, I was sorta aiming to get something VERY SMALL at first because that's often the right way to make progress in software. Get test cases working inside of a framework.
So, it seemed reasonable to find "a REAL language" that people really need and use and so on, but something LESS than the full breadth of everything one can generally find being spoken in a tiny village on some island near Papua New Guinea?
So I went looking into scuba hand signs with the hope of translating a tiny and stupidly simple language and just successfully sending THAT into some kind of Characteristica Universalis prototype to handle the "semantics of the pragmatics of modal operators".
The goal wasn't to handle tense, aspect, evidentiality, voice, etc in general. I suspect that diving handsigns don't even have any of that!
But it would maybe be some progress to be able to translate TOY languages into a prototype of an ultimate natural meta-language.
I.B.1.a Diving Handsigns Have Pragmatically Weird Mood (Because Avoiding Drowning Is The Most Important Thing) But They Are Simple (Because It Is For Hobbyists With Shit In Their Mouth)
So the central juicy challenge was that in diving manuals, a lot of times their hand signs are implicitly in the imperative mood.
The dive leader's orders are strong, and mostly commands, by default.
The dive followers mostly give suggestions (unless they relate to safety, in which case they aren't supposed to use them except for really reals, because even if they use them wrongly, the dive leader has to end the dive if there's a chance of a risk of drowning based on what was probably communicated).
Then, in this linguistic situation, it turns out they just really pragmatically need stuff like the "question mark" handsign, which marks the following or preceding handsign (or two) as having been in the interrogative mood.
And so I felt like I HAD to be able to translate the interrogative and imperative moods "cleanly" into something cleanly formal, even just for this "real toy language".
If I was going to match Frege's successes in a way that is impressive enough to justify happening in late 2020 (128 years after the 1892 publication of "Sense and Reference"), then... well... maybe I could use this to add one or two signs to "diving sign language" and actually generate technology from my research, as a proof that the research wasn't just a bunch of bullshit!
(Surely there has been progress here in philosophy in well over a century... right?!)
((As a fun pragmatic side note, there's a kind of interpretation here of this diving handsign where "it looks like a question mark", but also it's kind of interesting how the index finger is "for pointing" and that pointing symbol is "broken or crooked", so even an alien might be able to understand that as "I can't point, but want to be able to point"?!? Is "broken indexicality" the heart of the interrogative mood somehow? If we wish to RETVRN TO NOVA ZEMBLA must we eschew this entire mood maybe??))
Like... the imperative and interrogative moods are the default moods for a lot of diving handsigns!
You can't just ignore this and only think about the indicative mood all the time, like it was still the late 1800s... right? <3
So then... well... what about "the universal overarching framework" for this?
I.B.2. Trying To Find The Best Framework For Mood Leads To... Nenets?
So I paused without any concrete results on the diving stuff (because making Anki decks for that and trying it in a swimming pool would take forever and not give me a useful output) to think about where it was headed.
And now I wanted to know "what are all the Real Moods?"
And a hard thing here is (1) English doesn't have that many in its verbs and (2) linguists often only count the ones that show up in verb conjugation as "real" (for counting purposes), and (3) there's a terrible terrible problem in getting a MECE list of The Full List Of Real Moods from "all the languages".
Point three is non-obvious. The issue is, from language to language, they might lump and split the whole space of possible moods to mark differently so that one language might use "the mood the linguist decided to call The Irrealis Mood" only for telling stories with magic in them (but also they are animists and think the world is full of magic), and another language might use something a linguist calls "irrealis" for that AND ALSO other stuff like basic if/then logic!
So... I was thinking that maybe the thing to do would be to find the SINGLE language that, to the speakers of that language and linguists studying them, had the most DISTINCT moods with MECE marking.
This language turns out to be: Nenets. It has (I think) ~16 moods, marked inside the verb conjugation like it has been allowed to simmer and get super weird and barely understandable to outsiders for >1000 years, and marking mood is obligatory! <3
One can find academic reports on Nenets grammar like this:
In all types of data used in this study, the narrative mood is the most frequently used non-indicative mood marker. The narrative mood is mutually exclusive with any other mood markers. However, it co-occurs with tense markers, the future and the general past (preterite), as well as the habitual aspect. Combined with the future tense, it denotes past intention or necessity (Nikolaeva 2014: 93), and combined with the preterite marker, it encodes more remote past (Ljublinskaja & Malčukov 2007: 459–461). Most commonly, however, the narrative mood appears with no additional tense marking, denoting a past action or event.
So... so I think they have a "once upon a time" mood? Or maybe it is like how technical projects often make commitments at the beginning like "we are only going to use technology X", and then this is arguably a mistake, and yet arguably everyone has to live with it forever, and so you tell the story about how "we decided to only, in the future, use technology X"... and that would be marked as "it was necessary in the deep past to use technology X going forward" with this "narrative mood" thingy that Nenets reportedly has? So you might just say something like "we narratively-use technology X" in that situation?
Maybe?
I.B.2.a. But Nenets Is Big, And Time Was Short, And Kripke Is Always Dogging Me, And I'm A Pragmatist At Heart
And NONE OF THIS got at what I actually think is often going on at, like, an animal level, where "I hereby allow you to eat" has a deep practical meaning!
The PDF version of Irina Nikolaeva's "A Grammar of Tundra Nenets" is 528 pages, but only pages 85 to 105 are directly about mood. Maybe it should be slogged through? (I tried slogging, and the slogging led past so many rabbit holes!)
Like, I think maybe "allowing someone to eat" could be done by marking "eat" with the Jussive Mood (that Nenets has) and then if we're trying to unpack that into some kind of animalistic description of all of what is kinda going on the phrase "you jussive-eat" might mean something like:
"I will not sanction you with my socially recognized greater power if you try to eat, despite how I normally would sanction you, and so, game theoretically, it would be in your natural interest to eat like everyone usually wants to eat (since the world is Malthusian by default) but would normally be restrained from eating by fear of social sanction (since food sharing is a core loop in social mammals and eating in front of others without sharing will make enemies and disrupt the harmony of the group and so on), but it would be wise of you to do so urgently in this possibly short period of special dispensation, from me, who is the recognized controller of rightful and morally tolerated access to the precious resource that is food".
Now we could ask, is my make-believe Nenets phrase "you jussive-eat" similar to English "you should eat" or "you may eat" or "you can eat" or none-of-these-and-something-else or what?
Maybe English would really need something very complexly marked with status and pomp to really communicate it properly like "I allow you to eat, hereby, with this speech act"?
Or maybe I still don't have a good grasp on the underlying mood stuff and am fundamentally misunderstanding Nenets and this mood? It could be!
But then, also, compare my giant paragraph full of claims about status and hunger and predictable patterns of sanction with Kripke's modal logic which is full of clever representations of "necessity" and "possibility" in a way that he is often argued to have grounded in possible worlds.
"You must pay me for the food I'm selling you."
"There exist no possible worlds where it is possible for you to not pay me for the food I'm selling you."
The above are NOT the SAME! At all!
But maybe that's a strawman sketch... still, every time I try to drop into the symbolic logic literature around Kripke I come out of it feeling like they are entirely missing the idea of, like... orders and questions and statements, and how orders and questions and statements are different from each other, and really important to what people use modals for in language, and practically unmentioned by the logicians :-(
I.B.2.b. Frege Dogs Me Less But Still... Really?
In the meantime, in much older analytic philosophy, Frege has this whole framework for taking the surface words as having senses in a really useful way, and this whole approach to language is really obsessed with "intensional contexts where that-quoting occurs" (because reference seems to work differently inside vs outside a "that-quoted-context"). Consider...
The subfield where people talk about "intensional language contexts" is very tiny, but with enough googling you can find people saying stuff about it like this:
As another example of an intensional context, reflectica allows us to correctly distinguish between de re and de dicto meanings of a sentence, see the Supplement to [6]. For example, the sentence Leo believes that some number is prime can mean either
Believes(Leo, ∃x[Number(x) & Prime(x)]) or
∃x(Number(x) & Believes[Leo, Prime(x)]). Note that, since the symbol 'Believes' is intensional in the second argument, the latter formula involves quantifying into an intensional context, which Quine thought is incoherent [7] (but reflectica allows to do such things coherently).
Sauce is: Mikhail Patrakeev's "Outline of a Self-Reflecting Theory"
((Oh yeah. Quine worked on this stuff too! <3))
So in mere English words we might try to spell out a Fregean approach like this...
"You must pay me for the food I'm selling you."
"It is (indicatively) true: I gave you food. Also please (imperatively): the sense of the phrase 'you pay me' should become true."
I think that's how Frege's stuff might work if we stretched it quite far? But it is really really fuzzy. It starts to connect a little tiny bit to the threat and counter threat of "real social life among humans" but Kripke's math seems somewhat newer and shinier and weirder.
Like... "reflectiva" is able to formally capture a way for the indicative mood to work in a safe and tidy domain like math despite the challenges of self reference and quoting and so on...
...but I have no idea whether or how reflectiva could bring nuance to questions, or commands, or laws, or stories-of-what-not-to-do, such that "all the real grammaticalized modes" could get any kind of non-broken treatment in reflectiva.
And in the meantime, in Spanish "poder" is the verb for "can" and does the work of modal auxiliary verbs like "could" (which rhymes with "would" and "should"), and poder is FULL of emotions and metaphysics!
Where are the metaphysics here? Where is the magic? Where is the drama? "Shouldness" causes confusion that none of these theories seem to me to explain!
II. It Is As If Each Real Natural Language Is Almost Anti-Epistemic And So Languages Collectively ARE Anti-Epistemic?
Like WTF, man... WTF.
And that is why my attempt, during covid, to find a simple practical easy Characteristica universalis for kids, failed.
Toki pona is pretty cool, though <3
The Piraha can't count and many of them don't appear to be able to learn to count, not even as motivated adults, past a critical period (when (I've heard but haven't found a way to nail down for sure from clean eye witness reports) they have sometimes attended classes because they wish to be able to count the "money" they make from sex work, for example).
Are the Piraha in some meaningful sense "not fully human" due to environmental damage or are "counting numbers" not a natural abstraction or... or what?
On the other end of the spectrum, Ithkuil is a probably-impossible-for-humans-to-master conlang whose creator sorta tried to give it EVERY feature that has shown up in at least one human language that the creator of the language could find.
Does that mean that once an AI is fluent in Ithkuil (which surely will be possible soon, if it is not already) maybe the AI will turn around and see all humans sorta the way that we see the Piraha?
...
My current working model of the essential "details AND limits" of human mental existence puts a lot of practical weight and interest on valproic acid because of the paper "Valproate reopens critical-period learning of absolute pitch".
Also, it might be usable to cause us to intuitively understand (and fluently and cleanly institutionally wield, in social groups, during a political crisis) untranslatable 5!
Like, in a deep sense, I think that the "natural abstractions" line of research leads to math, both discovered, and undiscovered, especially math about economics and cooperation and agency, and it also will run into the limits of human plasticity in the face of "medicalized pedagogy".
And, as a heads up, there's a LOT of undiscovered math (probably infinitely much of it, based on Goedel's results) and a LOT of unperfected technology (that could probably change a human base model so much that the result crosses some lines of repugnance even despite being better at agency and social coordination).
...
Speaking of "the wisdom of repugnance".
In my experience, studying things where normies experience relatively unmediated disgust, I can often come up with pretty simple game theory to explain both (1) why the disgust would evolutionarily arise and also (2) why it would be "unskilled play within the game of being human in neo-modern times" to talk about it.
That is to say, I think "bringing up the wisdom of repugnance" is often a Straussian(?) strategy to point at coherent logic which, if explained, would cause even worse dogpiles than the current kerfuffle over JD Vance mentioning "cat ladies".
This leads me to two broad conclusions.
(1) The concepts of incentive compatible mechanism design and cooperative game theory in linguistics both suggest places to predictably find concepts that are missing from polite conversation that are deeply related to competition between adult humans who don't naturally experience storge (or other positive attachments) towards each other as social persons, and thus have no incentive to tell each other certain truths, and thus have no need for certain words or concepts, and thus those words don't exist in their language. (Notice: the word "storge" doesn't exist in English except as a loan word used by philosophers and theologians, but the taunt "mama's boy" does!)
(2) Maybe we should be working on "artificial storge" instead of a way to find "words that will cause AI to NOT act like a human who only has normal uses for normal human words"?
...
I've long collected "untranslatable words" and a fun "social one" is "nemawashi" which literally means "root work", and it started out as a gardening term meaning "to carefully loosen all the soil around the roots of a plant prior to transplanting it".
Then large companies in Japan (where the Plutocratic culture is wildly different than in the US) use nemawashi to mean something like "to go around and talk to the lowest status stakeholders about proposed process changes first, in relative confidence, so they can veto stupid ideas without threatening their own livelihood or publicly threatening the status of the managers above them, so hopefully they can tweak details of a plan before the managers synthesize various alternative plans into a reasonable way for the whole organization to improve its collective behavior towards greater Pareto efficiency"... or something?
The words I expect to not be able to find in ANY human culture are less wholesome than this.
English doesn't have "nemawashi" itself for... reasons... presumably? <3
...
Contrariwise... the word "bottom bitch" exists, which might go against my larger claim? Except in that case it involves a kind of stabilized multi-shot social "compatibility" between a pimp and a ho, that at least one of them might want to explain to third parties, so maybe it isn't a counter-example?
The only reason I know the word exists is that Chappelle had to explain what the word means, to indirectly explain why he stopped wanting to work on The Chappelle Show for Comedy Central.
Oh! Here's a thing you might try! Collect some "edge-case maybe-too-horrible-to-exist" words, and then check where they are in an embedding space, and then look for more words in that part of the space?
Maybe you'll be able to find-or-construct a "verbal Loab"?
(Ignoring the sense in which "Loab was discovered" and that discovery method is now part of her specific meaning in English... Loab, in content, seems to me to be a pure Jungian Vampire Mother without any attempt at redemption or social usefulness, but I didn't notice this for myself. A friend who got really into Lacan noticed it and I just think he might be right.)
And if you definitely cannot construct any "verbal Loab", then maybe that helps settle some "matters of theoretical fact" in the field of semantics? Maybe?
Ooh! Another thing you might try, based on this sort of thing, is to look for "steering vectors" where "The thing I'm trying to explain, in a nutshell, is..." completes (at low temperature) in very very long phrases? The longer the phrase required to "use up" a given vector, the more "socially circumlocutionary" the semantics might be? This method might be called "dowsing for verbal Loabs".
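Here's a rough sketch of what that dowsing loop might look like. The `embed` function below is just a deterministic toy stand-in (the real version would call whatever text-embedding model you have handy), and nothing here is any particular library's API:

```python
import numpy as np

def embed(word: str) -> np.ndarray:
    """Toy stand-in: a deterministic unit vector per word. Swap in a real
    text-embedding model for an actual experiment."""
    seed = int.from_bytes(word.encode("utf8"), "little") % (2**32)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

def dowse_for_verbal_loabs(seed_words, vocabulary, k=20):
    """Rank vocabulary words by cosine similarity to the centroid of the
    'edge-case maybe-too-horrible-to-exist' seed words."""
    seed_vecs = np.stack([embed(w) for w in seed_words])
    centroid = seed_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    candidates = [w for w in vocabulary if w not in seed_words]
    return sorted(candidates, key=lambda w: float(embed(w) @ centroid), reverse=True)[:k]
```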
First things first:
My current working model of the essential "details AND limits" of human mental existence puts a lot of practical weight and interest on valproic acid because of the paper "Valproate reopens critical-period learning of absolute pitch".
This is fascinating and I would love to hear about anything else you know of a similar flavor.
As for the meat of the comment...
I think this comment didn't really get at the main claim from the post. The key distinction I think it's maybe missing is between concepts which merely lack a word in some particular language, and whole types of concepts which would need new kinds of words or grammar to express at all.
So for instance, nemawashi is a concept which doesn't have a word in English, but it's of a type which is present in English - i.e. it's a pretty ordinary verb, works pretty much like other verbs, if imported into English it could be treated grammatically like a verb without any issues, etc.
I do like your hypothesis that there are concepts which humans motivatedly-avoid giving words to, but that hypothesis is largely orthogonal to the question of whether there are whole types of concepts which don't have corresponding word/phrase types, e.g. a concept which would require not just new words but whole new grammatical rules in order to use in language.
Ithkuil, on the other hand, sounds like it maybe could have some evidence of whole different types of concepts.
This is fascinating and I would love to hear about anything else you know of a similar flavor.
Caloric Vestibular Stimulation seems to be of a similar flavor, in case you haven't heard of it.
This is fascinating and I would love to hear about anything else you know of a similar flavor.
Seconded!!
Alright! I'm going to try to stick to "biology flavored responses" and "big picture stuff" here, maybe? And see if something conversational happens? <3
(I attempted several responses in the last few days and each sketch turned into a sprawling mess that became a "parallel comment". Links and summaries at the bottom.)
The thing that I think unifies these two attempts at comments is a strong hunch that "human language itself is on the borderland of being anti-epistemic".
Like... like I think humans evolved. I think we are animals. I think we individually grope towards learning the language around us and always fail. We never "get to 100%". I think we're facing a "streams of invective" situation by default.
Don: “Up until the age of 25, I believed that ‘invective’ was a synonym for 'urine’.”
BBC: “Why ever would you have thought that?”
Don: “During my childhood, I read many of the Edgar Rice Burroughs 'Tarzan’ stories, and in those books, whenever a lion wandered into a clearing, the monkeys would leap into the trees and 'cast streams of invective upon the lion’s head.’”
BBC: long pause “But, surely sir, you now know the meaning of the word.”
Don: “Yes, but I do wonder under what other misapprehensions I continue to labour.”
I think prairie dogs have some kind of chord-based chirp system that works like human natural language noun phrases do because noun-phrases are convergently useful. And they are flexible-and-learned enough for them to have regional dialects.
I think elephants have personal names to help them manage moral issues and bad-actor-detection that arise in their fission-fusion social systems, roughly as humans do, because personal names are convergently useful for managing reputation and tracking loyalty stuff in very high K family systems.
I think humans evolved under Malthusian conditions and that there's lots of cannibalism in our history and that we use social instincts to manage groups that manage food shortages (who semi-reliably go to war when hungry). If you're not tracking such latent conflict somehow then you're missing something big.
I think human languages evolve ON TOP of human speech capacities, and I follow McWhorter in thinking that some languages are objectively easy (because of being learned by many as a second language (for trade or slavery or due to migration away from the horrors of history or whatever)) and others are objectively hard (because of isolation and due to languages naturally becoming more difficult over time, after a disruption-caused-simplification).
Like it isn't just that we never 100% learn our own language. It is also that adults make up new stuff a lot, and it catches on, and it becomes default, and the accretion of innovation only stabilizes when humans hit their teens and refuse to learn "the new and/or weird shit" of "the older generation".
Maybe there can be language super-geniuses who can learn "all the languages" very easily and fast, but languages are defined, in a deep sense, by a sort of "20th percentile of linguistic competence performance" among people who everyone wants to be understood by.
And the 20th percentile "ain't got the time" to learn 100% of their OWN language.
But also: the 90th percentile is not that much better! There's a ground floor where human beings who can't speak "aren't actually people" and they're weeded out, just like the fetuses with 5 or 3 heart chambers are weeded out, and the humans who'd grow to be 2 feet tall or 12 feet tall die pretty fast, and so on.
On the "language instincts" question, I think: probably yes? If Neanderthals spoke, it was probably with a very high pitch, but they had Sapiens-like FOXP2 I think? But even in modern times there are probably non-zero alleles to help recognize tones in regions where tonal languages are common.
Tracking McWhorter again, there are quite a few languages spoken in mountain villages or tiny islands with maybe 500 speakers (and the village IQ is going to be pretty stable, and outliers don't matter much), where children simply can't speak properly until they are maybe 12.
(This isn't something McWhorter talks about at all, but usually puberty kicks in, and teens refuse to learn any more arbitrary bullshit... but also accents tend to freeze around age 12 (especially in boys, maybe?) which might have something to do with shibboleths and "immutable sides" in tribal wars?)
Those languages where 11 year olds are just barely fluent are at the limit of isolated learnable complexity.
For an example of a seriously tricky language, my understanding (not something I can cite, just gossip from having friends in Northern Wisconsin and a Chippewa chromosome or two) is that in Anishinaabemowin they are kinda maybe giving up on retaining all the conjugations and irregularities that only show up very much in philosophic or theological or political discussions by adults, even as they do their best to retain as much as they can in tribal schools that also use English (for economic rather than cultural reasons)?
So there are still Ojibwe grandparents who can "talk fancy", but the language might be simplifying because it somewhat overshot the limits of modern learnability!
Then there's languages like nearly all the famous ones including English, where almost everyone masters it by age 7 or 8 or maybe 9 for Russian (which is "one of the famous ones" that might have kept more of the "weird decorative shit" that presumably existed in Indo-European)?
...and we kinda know which features in these "easy well known languages" are hard based on which features become "nearly universal" last. For example, rhotics arrive late for many kids in America (with quite a few kindergartners missing an "R" that the teacher talks to their parents about, and maybe they go to speech therapy) but which are also just missing in many dialects, like the classic accents of Boston, New York City, and London... because "curling your tongue back for that R sound" is just kinda objectively difficult.
In my comment laying out a hypothetical language like "Lyapunese", all the reasons that it would never be a real language don't relate to philosophy, or ethics, or ontics, or epistemology, but to language pragmatics. Chaos theory is important, yet it isn't in language, and that's the fault of humans having short lives (and being generally shit at math because of nearly zero selective pressure on being good at it), I think?
In my comment talking about the layers and layers of difficulty in trying (and failing!) to invent modal auxiliary verbs for all the moods one finds in Nenets, I personally felt like I was running up against the wall of my own ability to learn enough about "those objects over there" (i.e. weird mood stuff in other languages and even weird mood stuff in my own) to grok the things they took for granted, enough to go meta on each thing and become able to wield them as familiar tools that I could put onto some kind of proper formal (mathematical) footing. I suspect that if it were easy for an adult to learn that stuff, the language itself would have gotten more complex, and for this reason the task was hard in the way that finding mispricings in a market is hard.
Humans simply aren't that smart, when it comes to serial thinking. Almost all of our intelligence is cached.
I'm not sure "words point to clusters in thingspace" is true.
Consider: two people with no common language working on a task together that requires coordination.
For example, imagine two laborers are trying to move a heavy object and they can only do so by both lifting at the same time, taking a few steps, and then setting the object down again.
If two people both speak English, they might solve this problem by saying "one, two, three, lift!" But even if one speaks English and one speaks Chinese, they will still very quickly figure out how to do it.
In this case, it seems like it can't be the case that there is a "word" that is pointing to an object in "thingspace", because the two people don't have any words in common.
Instead, they will settle on some ad-hoc protocol that allows them to figure out when the other person needs to drop the object, or when they are ready to lift again.
I think language is just this on a massive scale. Humans needed to coordinate, so they agreed on a set of signals that allow them to describe the movements they want other humans to do.
If we think of language not as a "map of the world out there" but rather as a stream of bits that allows two minds to coordinate, then I don't see why human semantics would have data structures for different elements of fluid dynamics.
In particular, if you ask me to draw a picture of a cloud, it comes out looking like this:
[drawing of a simple cartoon cloud]
Which I'm pretty sure has stripped all of the fluid dynamics out of the cloud.
Another example of this is the fact that most knowledge at companies is procedural and cannot, in any real sense, be written down. This is why, when key employees leave, or when companies try to duplicate the exact same process at a different location, they often find that key steps were missing.
I do think "words point to clusters in mindspace" is somewhat valid. Going back to the "1,2,3, lift!" example, the reason this works is because it allows me to meaningfully point to something in your mind. And if we agree that mindspace maps onto "thingspace" (which it must, since people are able to operate successfully in in thingspace). Then there is probably some indirect mapping "words" -> "mindspace" -> "thingspace". But I don't see any reason why this mapping would need sufficient richness to include all of fluid dynamics. In the "1,2,3, lift!' example, only 1 bit needs to be communicated (start lifting/stop lifting), not a complete map of the object being lifted. Words are also incredibly situational. If I say "1,2,3, lift!" out of the blue, you're going to have no idea what I mean.
Once you make the switch from "words are a method to describe particular things in the world" to "words are a tool to access and edit the mental states of other human beings", a lot of other phenomena also become clear. For example, why do people like to argue about definitions? If words were merely pointers to facts about reality, we should all be able to agree on what they are pointing to. But if words are a tool to affect other people's mental space, then any attempt to redefine a word is also an attempt to control the other person. This also explains why language reform is one of the key tools commonly used by revolutionaries and social movements, and why tabooing a word is a key project of social reformers.
That said, if those sorts of concepts are natural in our world, then it’s kinda weird that human minds weren’t already evolved to leverage them.
A counter-possibility to this that comes to mind:
There might be concepts that are natural in our world, but which are only useful for a mind with much more working memory, or other compute resources, than the human mind.
If weather simulations use concepts that are confusing and unintuitive for most humans, that would be evidence for something like this. Weather is something that we encounter a lot, and it is important for humans, especially historically. If we haven't developed some natural weather concept, it's not for lack of exposure or lack of selection pressure, but for some other reason. That other reason could be that we're not smart enough to use the concept.
The human mind is a sufficiently general simulator of the world, and fidelitous representations of the world “naturally” decompose into few enough basic types of data structures, that human minds operate all of the data structure types which naturally (efficiently, sufficiently accurately, …) are “found” in the world. When we use language to talk about the world, we are pointing words at these (convergent!) internal data structures. Maybe we don’t have words for certain instances of these data structures, but in principle we can make new words whenever this comes up; we don't need whole new types of structures.
Seems interesting, and plausible.
Makes me wonder where we'd go looking to try to disprove the hypothesis. My first thought is quantum mechanics. I'm no quantum physicist, so I don't think I know enough to say, but I certainly get the feeling when trying to read pop sci descriptions of quantum phenomena that I somehow lack the right kind of concepts to really get what's happening on a quantum scale. Particle? Wave? Wrong intuitions entirely!?
Which makes sense from the point of view that the details of quantum-level particle interactions are well outside anything that a human experienced in our evolutionary history. Similarly for extreme astrophysical phenomena like black holes and dark energy. Scientists have coined terms for the things their math and observations seem to be pointing to, but I suspect these terms don't 'fit' (point at?) nearly as well as the term dog 'fits' (points at) the true universe-pattern of dogs as experienced by humans.
Not sure how to quantify this 'fitness' of word to concept, and of concept to actual manifestation.
Another way this might fail is if fluid dynamics is too complex/difficult for you to constructively argue that your semantics are useful in fluid dynamics. As an analogy, if you wanted to show that your semantics were useful for proving Fermat's Last Theorem, you would likely fail because you simply didn't apply enough power to the problem, and I think you may fail that way in fluid dynamics too.
I'd expect that if the natural-abstractions theory gets to the point where it's theoretically applicable to fluid dynamics, then demonstrating said applicability would just be a matter of devoting some amount of raw compute to the task; it wouldn't be bottlenecked on human cognitive resources. You'd be able to do things like setting up a large-scale fluid simulation, pointing the pragmascope at it, and seeing it derive natural abstractions that match the abstractions human scientists and engineers derived for modeling fluids. And in the case of fluids specifically, I expect you wouldn't need that much compute.
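To gesture at what that kind of check might look like in miniature (a minimal sketch under my own assumptions; this is not the pragmascope, and the vortex parameters and vorticity threshold below are arbitrary): build a small synthetic 2D flow and recover "eddies" as connected blobs of high vorticity, i.e. the sort of human-derived abstraction an automated abstraction-finder would hopefully rediscover on its own.

```python
import numpy as np
from scipy.ndimage import label  # connected-component labelling

# Toy 2D velocity field: superpose two idealized Gaussian vortices on a grid,
# then recover "eddies" as connected regions of high vorticity. All parameters
# are arbitrary choices for illustration.
n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

def gaussian_vortex(cx, cy, strength, radius):
    """Velocity field (u, v) of a smooth vortex centred at (cx, cy)."""
    dx, dy = X - cx, Y - cy
    swirl = strength * np.exp(-(dx**2 + dy**2) / radius**2)
    return -dy * swirl, dx * swirl  # rotation about the centre

u1, v1 = gaussian_vortex(-0.4, 0.0, 5.0, 0.20)
u2, v2 = gaussian_vortex(0.5, 0.3, -3.0, 0.15)
u, v = u1 + u2, v1 + v2

# Vorticity = dv/dx - du/dy, via finite differences on the grid.
spacing = x[1] - x[0]
vorticity = np.gradient(v, spacing, axis=1) - np.gradient(u, spacing, axis=0)

# "Eddy" = connected region where |vorticity| is well above background.
mask = np.abs(vorticity) > 0.5 * np.abs(vorticity).max()
regions, count = label(mask)

print(f"found {count} eddy-like regions")
for i in range(1, count + 1):
    ys, xs = np.where(regions == i)
    print(f"  eddy {i}: centre ~({x[xs].mean():+.2f}, {x[ys].mean():+.2f}), "
          f"area {(regions == i).sum()} cells")
```

With these parameters it should pick out the two planted vortices as separate regions, each summarized by a centre and a size. Scaling this up and replacing the hand-picked threshold with a derived natural latent is exactly the part that needs the actual theory; the point here is only that the target abstractions are cheap to state once you know to look for them.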
(Pure mathematical domains might end up a different matter. Roughly speaking, that's because of the vast gulf in computational complexity between solving some problems approximately (BPP) vs. exactly. "Deriving approximately-correct abstractions for fluids" maps to the former, "deriving exact mathematical abstractions" to the latter.)
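One standard way to cash out that gap (my gloss, not a claim from the comment above): estimating a bounded average to within a tolerance only takes polynomially many random samples, by Hoeffding's inequality, whereas computing the same quantity exactly can blow up combinatorially.

```latex
% Hoeffding bound for i.i.d. samples X_1, ..., X_N with f(X_i) \in [0, 1]:
\Pr\left[\,\left|\frac{1}{N}\sum_{i=1}^{N} f(X_i) - \mathbb{E}[f(X)]\right| \ge \epsilon\,\right]
  \le 2\exp\left(-2N\epsilon^{2}\right),
\qquad\text{so } N = \left\lceil \frac{\ln(2/\delta)}{2\epsilon^{2}} \right\rceil
  \text{ samples suffice for accuracy } \epsilon \text{ with probability } 1-\delta.
```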
Probably many people who are into Eastern spiritual woo would make that claim. Mostly, I expect such woo-folk would be confused about what “pointing to a concept” normally is and how it’s supposed to work: the fact that the internal concept of a dog consists of mostly nonlinguistic stuff does not mean that the word “dog” fails to point at it.
On my model, koans and the like are trying to encourage a particular type of realization or insight. I'm not sure whether the act of grokking an insight counts as a "concept", but it can be hard to clearly describe an insight in a way that actually causes it? But that's mostly a deficiency in vocab, plus the fact that you're trying to explain a (particular instance of a) thing to someone who has never witnessed it.
This post is styled after conversations we’ve had in the course of our research, put together in a way that hopefully highlights a bunch of relatively recent and (ironically) hard-to-articulate ideas around natural abstractions.
John: So we’ve been working a bit on semantics, and also separately on fluid mechanics. Our main goal for both of them is to figure out more of the higher-level natural abstract data structures. But I’m concerned that the two threads haven’t been informing each other as much as they should.
David: Okay…what do you mean by “as much as they should”? I mean, there’s the foundational natural latent framework, and that’s been useful for our thinking on both semantics and fluid mechanics. But beyond that, concretely, in what ways do (should?) semantics and fluid mechanics inform each other?
John: We should see the same types of higher-level data structures across both - e.g. the “geometry + trajectory” natural latents we used in the semantics post should, insofar as the post correctly captures the relevant concepts, generalize to recognizable “objects” in a fluid flow, like eddies (modulo adjustments for nonrigid objects).
David: Sure, I did think it was intuitive to think along those lines as a model for eddies in fluid flow. But in general, why expect to see the same types of data structures for semantics and fluid flow? Why not expect various phenomena in fluid flow to be more suited to representation in some data structures which aren’t the exact same type as those used for the referents of human words?
John: Specifically, I claim that the types of high-level data structures which are natural for fluid flow should be a subset of the types needed for semantics. If there’s a type of high-level data structure which is natural for fluid flow, but doesn’t match any of the semantic types (noun, verb, adjective, short phrases constructed from those, etc), then that pretty directly disproves at least one version of the natural abstraction hypothesis (and it’s a version which I currently think is probably true).
David: Woah, hold up, that sounds like a very different form of the natural abstraction hypothesis than our audience has heard before! It almost sounds like you’re saying that there are no “non-linguistic concepts”. But I know you actually think that much/most of human cognition routes through “non-linguistic concepts”.
John: Ok, there’s a couple different subtleties here.
First: there’s the distinction between a word or phrase or sentence vs the concept(s) to which it points. Like, the word “dog” evokes this whole concept in your head, this whole “data structure” so to speak, and that data structure is not itself linguistic. It involves visual concepts, probably some unnamed concepts, things which your “inner simulator” can use, etc. Usually when I say that “most human concepts/cognition are not linguistic”, that’s the main thing I’m pointing to.
Second: there’s concepts for which we don’t yet have names, but could assign names to. One easy way to find examples is to look for words in other languages which don’t have any equivalent in our language. The key point about those concepts is that they’re still the same “types of concepts” which we normally assign words to, i.e. they’re still nouns or adjectives or verbs or…, we just don’t happen to have given them names.
Now with both of those subtleties highlighted, I’ll once again try to state the claim: roughly speaking, all of the concepts used internally by humans fall into one of a few different “types”, and we have standard ways of describing each of those types of concept with words (again, think nouns, verbs, etc, but also think of the referents of short phrases you can construct from those blocks, like “dog fur” or “the sensation of heat on my toes”). And then one version of the Natural Abstraction Hypothesis would say: those types form a complete typology of the data structures which are natural in our world.
David: Alright, let me have a crack at it. New N.A.H. just dropped: The human mind is a sufficiently general simulator of the world, and fidelitous representations of the world “naturally” decompose into few enough basic types of data structures, that human minds operate all of the data structure types which naturally (efficiently, sufficiently accurately, …) are “found” in the world. When we use language to talk about the world, we are pointing words at these (convergent!) internal data structures. Maybe we don’t have words for certain instances of these data structures, but in principle we can make new words whenever this comes up; we don't need whole new types of structures.
I have some other issues to bring up, but first: Is this version of the N.A.H. actually true? Do humans actually wield the full set of basic data structures natural for modeling the whole world?
John: Yeah, so that’s a way in which this hypothesis could fail (which, to be clear, I don’t actually expect to be an issue): there could be whole new types of natural concepts which are alien to human minds. In principle, we could discover and analyze those types mathematically, and subjectively they’d be a real mindfuck.
That said, if those sorts of concepts are natural in our world, then it’s kinda weird that human minds weren’t already evolved to leverage them. Of course it’s hard to tell for sure, without some pretty powerful mathematical tools, but I think the evolutionary pressure argument should make us lean against. (Of course a counterargument could be that whole new concept-types have become natural, or will become natural, as a result of major changes in our environment - like e.g. humans or AI taking over the world.)
David: Second genre of objections which seem obvious: Part of the claim here is, “The internal data structures which language can invoke form a set that includes all the natural data-structure types useful/efficient/accurate for representing the world.” But how do we know whether or not our language is so deficient that a fully fleshed out Interoperable Semantics of human languages still has huge blind spots? What if we don’t yet know how to talk about many of the concepts in human cognition, even given the hypothesis that human minds contain all the basic structures relevant for modeling the world? What if nouns, adjectives, verbs, etc. are an impoverished set of semantic types?
John: That’s the second way the hypothesis could fail: maybe humans already use concepts internally which are totally un-pointable-to using language (or at least anything like current language). Probably many people who are into Eastern spiritual woo would make that claim. Mostly, I expect such woo-folk would be confused about what “pointing to a concept” normally is and how it’s supposed to work: the fact that the internal concept of a dog consists of mostly nonlinguistic stuff does not mean that the word “dog” fails to point at it. And again here, I think there’s a selection pressure argument: a lot of effort by a lot of people, along with a lot of memetic pressure, has gone into trying to linguistically point to humans’ internal concepts.
Suppose there is a whole type of concept which nobody has figured out how to point at (talk about). Then, either:
So basically I claim that human internal concepts are natural and we have spent enough effort as a species trying to talk about them that we’ve probably nailed down pointers to all the basic types.
David: And if human internal concepts are importantly unnatural, well then the N.A.H. fails. Sounds right.