Epistemic Status: Speculative but technically grounded.
The UniversalKnowledgeTensor is a prototype of a Trustworthy Collective Intelligence. The main algorithms have been prototyped. But because this is a new paradigm, I am seeking critical feedback both on its limitations in addressing Civilizational Epistemics and on my attempts to explain it.
You have important knowledge that you feel must be shared to make the world a better place. This could be about AI alignment, climate change, economics, or global health. But when you try to communicate it by speaking or posting, your message doesn’t seem to get through. You may reach a small subset of your audience, or you end up talking past each other. Yet to solve these challenging global issues, we need to reach a significant percentage of the human population.
You may blame your failure to communicate on ignorance, apathy, or ideology. In this post, I will claim that this task is impossible to achieve with human languages, but that if we change the substrate of knowledge, it becomes trivial.
Reasons
1. Limited Attention Span: Human thinking is fundamentally limited and must be rationed across many other cultural, economic, and political issues. Herbert Simon argued that our rationality is constrained more by our finite capacity to absorb information than by the amount of information available in society (Simon 1957)[1].
Human languages are at the root of the problem, since every sentence must be processed sequentially and requires a non-zero amount of effort. In our over-saturated information environment, every idea must compete for our limited attention. Kahneman observes that because attention is a limited resource, what captures it isn’t always what deserves it (Kahneman 2011)[2].
2. The Understanding Gap: But let’s assume someone heard what you said. There is no guarantee that they understood it the way you intended. If they don’t have shared background knowledge, they may not incorporate your claims into their mental model.
Chomsky (1965)[3] and Lakoff (1987)[4] claimed that understanding language requires mapping symbols onto existing structures. But when these differ, communication can just descend into noise. Simple and easy to understand narratives easily fit these mental models and require little effort to process. But technically accurate explanations require a significant amount of cognitive effort. Unsurprisingly, clear falsehoods often spread faster and stick better than complex truths.
3. The Impression Filter: Given our limited attention span, our minds must filter the exponentially growing amount of knowledge our civilization produces. But our brains never evolved for this task. As a result, ideas that are shocking or novel make more of an impression and “puncture” our filter. Sensationalism dominates our media because it exploits mental shortcuts that we evolved for survival, not for truth.
Slovic (1987)[5] argues that emotional salience, or the “affect heuristic,” often overrides statistical reasoning; Tversky and Kahneman (1974)[6] document systematic biases in judgment under uncertainty. Unsurprisingly, communication optimized for emotional impact consistently outperforms communication optimized for accuracy.
4. The Retention Problem: Even when a message is properly received and understood, its retention fades in the receiver’s mind. Information must be continually reinforced to remain active, and this approach is very inefficient.
In our fast-moving information environment, this problem is severe regardless of an idea’s importance or truth. Huberman (2008)[7] showed that “out of sight, out of mind” is a fundamental property of human memory, not just laziness.
5. The Trust Deficit: Even when what you say is understood and remembered, you may not be believed. Trust depends heavily on perceived motivation, integrity, and identity. For example, AI researchers warning of existential risk are distrusted as “self-interested,” and climate scientists are accused of political bias. Your audience filters messages through social heuristics rather than epistemic evaluation, and languages provide no built-in integrity mechanisms for trust. Hardin (2002)[8] and Cialdini (2007)[9] show that trust is socially constructed and therefore socially fragile.
6. Epistemic Fragmentation: Even when your knowledge makes it into the receiver’s mind, it may not influence their thinking or actions. For example, people may know facts about health, economics, or climate but rarely reason with them.
This is not a shortage of information but a failure of epistemic integration. Stanovich and West (2000)[10] show that rational competence does not reliably translate into rational performance. People can possess knowledge yet fail to apply it effectively.
7. The Action Problem: Knowing the situation is bad is not enough; the value of knowledge is to guide the best corrective action. Experts have domain-specific solutions in health, economics, and climate policy, but effective action requires integrating all of their perspectives, and those perspectives lack a common framework for comparison.
Why These Problems Are Inherent
These problems are not just user mistakes. They are baked into the foundation of language processing. Human language evolved for social coordination, not for truth. Pinker (2007)[11] and Sperber & Wilson (1986)[12] show that human language is great for persuasion, expressiveness, and ambiguity, but lousy for precision and rigorous reasoning.
When one person doesn’t listen, that’s an epistemic failure. When our entire society doesn’t listen to each other, that’s a civilizational epistemic failure. Narratives have been used as the backbone of epistemics since they are very expressive and can easily communicate stories and metrics. But they are also vulnerable to the issues listed above. These problems disappear with KnowledgeTensors. Although they are not as expressive as narratives, they can communicate quantitative knowledge optimally in a civilization.
The solution comes from structure, not persuasion. Tetlock & Gardner (2015)[13] show in their forecasting research that ensemble models reliably outperform even the best individual forecasters because aggregating independent judgments cancels out random errors and amplifies shared signal. KnowledgeTensors are the infrastructure that can scale this from small forecasting groups to a civilization-wide collective intelligence.
The UniversalKnowledgeTensor
At its simplest level, a KnowledgeTensor can be thought of as a Spreadsheet that has been scaled from 2 Dimensions to a Varying Number of Dimensions.
The KnowledgeCell represents the atomic unit of knowledge. It is a uniquely addressable, integrity-verified unit that encodes a single claim, metric, or causal dependency.
Aggregating many such cells yields a KnowledgeTensor. This is a structured, multidimensional representation of knowledge within a domain by an expert. Relationships now emerge based on explicit input coordinates rather than verbal inference.
Integrating many experts’ KnowledgeTensors across many domains produces the UniversalKnowledgeTensor: a civilization-scale graph of quantified, verifiable claims. Knowledge is now routed based on relevance; the saliency of a narrative does not affect its routing. This is the best substrate on which to build a Civilizational Collective Trustworthy Intelligence[14].
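To make this hierarchy concrete, here is a minimal sketch in Python of how a KnowledgeCell and a KnowledgeTensor might be represented. All field names (`coordinate`, `value`, `inputs`, `author`) and the hash-based integrity check are illustrative assumptions on my part, not the prototype's actual implementation.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch of a KnowledgeCell as described above: a uniquely
# addressable, integrity-verified unit encoding a single quantified claim.
@dataclass(frozen=True)
class KnowledgeCell:
    coordinate: tuple        # multidimensional address of this claim
    value: float             # the quantified metric or claim
    inputs: tuple = ()       # coordinates of cells this cell depends on
    author: str = "unknown"  # provenance: who contributed the cell

    def digest(self) -> str:
        """Content hash, so any tampering with the cell is detectable."""
        payload = repr((self.coordinate, self.value, self.inputs, self.author))
        return hashlib.sha256(payload.encode()).hexdigest()

# A KnowledgeTensor is then just a mapping from coordinates to cells.
cell = KnowledgeCell(("climate", "us", "flood_risk", "2030"), 0.12,
                     author="expert_a")
tensor = {cell.coordinate: cell}
```

Under this sketch, "aggregating many cells into a tensor" is nothing more exotic than populating the dictionary, which is what makes the later merging and routing steps mechanical.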
This knowledge isn’t for show. Every piece of knowledge in a civilization now feeds into a comprehensive decision-making framework (called LifeScore Simulations) that can be used by individuals, organizations, and society. Metrics are the best mechanism for Effective Altruism, and this framework solves many classes of problems in the communication of metrics.
Influencing collective action has historically required sensationalism, simple models, and charisma. Under a UniversalKnowledgeTensor, an expert contributes simply by publishing their KnowledgeTensor to the blockchain. That’s it! No debates, no audience-pleasing, no spectacle!
This is how it fixes the problems above:
1. Attention Span is solved by Computability and Routing: Since the UniversalKnowledgeTensor is a civilization-scale graph, if a single expert determines your knowledge is relevant, then it will be incorporated into the collective intelligence. When a user pulls on one node of the graph, it recursively pulls on the other nodes it is connected to. This routing is based on the multi-dimensional coordinate system. Because each KnowledgeCell is computable, attention span becomes effectively infinite rather than a limited biological resource.
2. Understanding is solved by Computability: In language, understanding requires shared background models. But since KnowledgeCells are computable, the user can “process” this knowledge with the same fidelity as the expert who created it. This removes the requirement for shared natural language comprehension.
3. Impression is solved by Neutral Salience: All KnowledgeTensors have neutral salience. Their routing depends solely on logical and causal linkage. You cannot make a KnowledgeTensor more visible by adding sensationalism or clickbait to it: every KnowledgeTensor is incorporated in proportion to its weighting, and no amount of sensationalism can change that weighting.
4. Retention is solved by Epistemic Persistence: Knowledge encoded into KnowledgeCells persists in the UniversalKnowledgeTensor until updated. The recall of KnowledgeCells is now completely independent of any human mind being aware of them. Unlike processing of human languages, knowledge no longer decays as attention wanes.
5. Trust is solved by Built-in Integrity Mechanisms: In the UniversalKnowledgeTensor, trust is achieved via epistemic integrity mechanisms. This is a very technical topic, and explaining it here would be like explaining website security to someone who doesn’t know what a website is. But the rough intuition is that KnowledgeTensors are very easy to integrate through a weighted-average scheme. This reduces corruption and increases the overall signal-to-noise ratio.
6. Epistemic Fragmentation is solved by the Civilizational Knowledge Graph: Once all knowledge is connected via the UniversalKnowledgeTensor, epistemic fragmentation becomes impossible. All relevant knowledge will be processed, since pulling on one node (KnowledgeCell) in the graph pulls on all of its relevant input nodes (KnowledgeCells).
7. The Action Problem is solved by the Decision-Making Framework: The entirety of the UniversalKnowledgeTensor is processed to assess the user’s current and predicted state of the world. The UniversalKnowledgeTensor then identifies all interventions (actions) and assesses their impact. By encoding their knowledge in KnowledgeTensors, experts can communicate the consequences of every intervention across every domain of knowledge.
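The "pulling on one node" routing in points 1 and 6 can be sketched as a recursive evaluation over the dependency graph. This is my own illustrative model, not the prototype's code: each cell holds a base value plus weighted inputs, and querying one coordinate automatically routes through every cell it depends on.

```python
# Illustrative sketch (assumed, not from the prototype) of recursive routing:
# each cell is (base_value, [(input_coordinate, weight), ...]), and a cell's
# result is its base value plus the weighted results of its inputs.
def evaluate(tensor, coordinate, _seen=None):
    """Recursively pull a cell and everything it depends on."""
    _seen = _seen if _seen is not None else set()
    if coordinate in _seen:  # guard against circular dependencies
        return 0.0
    _seen.add(coordinate)
    value, inputs = tensor[coordinate]
    return value + sum(w * evaluate(tensor, c, _seen) for c, w in inputs)

tensor = {
    ("health", "life_expectancy"): (70.0, [(("health", "air_quality"), 2.0)]),
    ("health", "air_quality"):     (1.5,  [(("climate", "emissions"), -0.5)]),
    ("climate", "emissions"):      (3.0,  []),
}
# Pulling one node routes through the whole dependency chain:
result = evaluate(tensor, ("health", "life_expectancy"))
# 70.0 + 2.0 * (1.5 + (-0.5) * 3.0) = 70.0
```

The point of the sketch is that no human attention is involved anywhere in the traversal: relevance is determined by the declared input coordinates, not by how memorable or sensational any individual cell is.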
LLMs vs. the UniversalKnowledgeTensor
Large language models are now the most widely used “supermind.” Therefore the introduction of the UniversalKnowledgeTensor as a supermind requires a comparison to LLMs to identify its strengths and weaknesses.
| Dimension | LLMs | UniversalKnowledgeTensor |
|---|---|---|
| Scope of Knowledge | All: everything from doing your homework to coding. | Quantitative, decision-relevant knowledge (policies, climate, health, economics), with the highest-importance knowledge encoded first. |
| Epistemic Reliability | Medium: outputs vary by prompt, training data, and hallucination risk. | Maximal verifiability: every claim is anchored, quantified, and traceable. |
| Trustworthiness | Low: relies on models we do not understand and cannot predict. | Major leap forward: trust is structural, not social. Powerful redundant integrity mechanisms make it impossible to corrupt. |
| Alignment Properties | Weak: half of the posts on LessWrong are about alignment. | Large improvement: alignment is baked in. The civilization-wide graph of knowledge will never turn on humanity. |
| Energy Requirements | Massive: training + inference. | Minimal: quantitative formulas are not computationally intensive. |
| Human Labor Requirements | Low: trained on massive datasets with minimal human labor. | Medium: structured knowledge entry requires upfront cognitive labor, which improves trust and reliability. |
| Privacy Model | Essentially zero: your data is sent to the LLM provider. | Highest possible: data never leaves the user’s system. |
| Resistance to Enshittification | Low: degradation is inevitable; high operating costs will force monetization-driven compromises. | Major improvement: changes to the protocol are constrained by the community. Integrity, transparency, and provenance cannot quietly degrade. |
| Resistance to Censorship Pressure | Low: outputs can be suppressed or shaped. | Highest possible: powerful integrity mechanisms make censorship impossible. |
| Platform Survivability | Low: authorities can disable the platform if it threatens a regime. | High: knowledge persists on the blockchain, independent of any single organization or regime. |
The UniversalKnowledgeTensor is not a total replacement of LLMs. The main use of the UniversalKnowledgeTensor is for individual, organizational and societal assessment of current state and optimal intervention. LLMs will still be used to write emotional essays, do your homework and instruct you on how to decorate a cake.
I Still Don’t See It! How Do “Spreadsheets” Revolutionize Civilizational Epistemics?
At first impression (pun intended), a spreadsheet does not have any special properties that affect epistemics. Spreadsheets suffer from the same epistemic fragmentation and integrity issues as everything else.
A single KnowledgeTensor will not revolutionize anything. This is similar to how a single person’s narrative will not change anything. The revolution occurs by merging all our minds into one supermind. Below are the features that make KnowledgeTensors the simplest substrate that can support this:
1. Expressiveness: At first glance, a spreadsheet seems to lack the expressiveness a narrative has. It doesn’t capture emotions, stories, or experiences. But optimal decision-making doesn’t depend on narratives; it depends on metrics. Narratives can mislead in many ways (e.g. overly simple models, selective beliefs, etc.).
For example, a story about a family losing their home in a hurricane is emotionally powerful, but what you need for optimal decision-making is statistics: the number of people displaced, the severity of infrastructure damage, the projected cost of repairs, the forecasted risk of future storms, etc.
2. Simplicity: Any overly complicated knowledge substrate will fail when ordinary people try to use it. Since everyone knows what a spreadsheet is, they should have no trouble understanding how a KnowledgeTensor differs from one.
3. Multidimensional Addressing: A spreadsheet is stuck in 2D. The key innovation behind KnowledgeTensors is that each unit of knowledge is assigned its own well-defined coordinate in multidimensional space. This is what makes all the other properties possible.
4. Integration via Ensemble Epistemics: If we both encoded our knowledge into differing narratives, it would be difficult and error-prone to merge them. But if we both encoded our knowledge into spreadsheets that used the same coordinates, merging them would be easy. The same applies to integrating KnowledgeTensors in a multidimensional coordinate system.
5. Knowledge Dependence and the Civilizational Graph : Each KnowledgeCell encodes what it depends on and what it predicts. When many people encode knowledge this way, you automatically get a civilizational knowledge graph where nodes are KnowledgeCells and edges are dependencies. This is not to be confused with traditional Knowledge Graphs that encode semantics. This a graph of metrics and their dependencies. Querying any node propagates through its dependencies, pulling in everyone’s contributed models.
6. Computability produces an Infinite Attention Span: A KnowledgeCell is computable, unlike a narrative. The user can run an expert’s knowledge with the same fidelity as the expert who encoded it. Since computations are cheap, we can easily run billions of KnowledgeCells. This effectively gives civilization an infinite attention span by perfectly computing everyone’s model.
7. Integrity: Every design decision in the construction of the UniversalKnowledgeTensor was made to ensure MAXIMUM INTEGRITY; usability and expressiveness are distant second priorities. This platform of maximum integrity has the potential to fix the breakdown of trust in society. The UniversalKnowledgeTensor cannot represent all knowledge in a society, but whatever knowledge it does represent can be trusted, thanks to its integrity mechanisms.
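The ensemble integration in point 4 (and the weighted-average scheme mentioned earlier for trust) can be sketched concretely. This is a minimal illustration under my own assumptions: each expert publishes a flat `{coordinate: value}` tensor, experts carry weights, and merging is a per-coordinate normalized weighted average.

```python
# Hedged sketch of weighted-average integration over a shared coordinate
# system. The weights, coordinates, and values here are invented examples.
def merge(tensors_with_weights):
    """Merge {coordinate: value} tensors by normalized weighted average."""
    sums, totals = {}, {}
    for tensor, weight in tensors_with_weights:
        for coord, value in tensor.items():
            sums[coord] = sums.get(coord, 0.0) + weight * value
            totals[coord] = totals.get(coord, 0.0) + weight
    return {c: sums[c] / totals[c] for c in sums}

expert_a = {("econ", "gdp_growth"): 0.02, ("econ", "inflation"): 0.03}
expert_b = {("econ", "gdp_growth"): 0.04}

combined = merge([(expert_a, 1.0), (expert_b, 3.0)])
# gdp_growth: (1.0*0.02 + 3.0*0.04) / 4.0 = 0.035; inflation stays 0.03
```

Because every contribution lands at an explicit coordinate, merging two experts' tensors needs no interpretation step, and a single corrupted contribution is diluted in proportion to its weight rather than dominating the result.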
[1] Herbert A. Simon, Models of Man: Social and Rational (Mathematical Essays on Rational Human Behavior in a Social Setting), John Wiley & Sons, New York, 1957.
[7] Huberman, Bernardo A., Lada A. Adamic, and Joshua R. Glance. “Social Networks and Information Diffusion.” Physica A: Statistical Mechanics and its Applications, 2008.
[10] Stanovich, Keith E., and Richard F. West. “Individual Differences in Reasoning: Implications for the Rationality Debate?” Behavioral and Brain Sciences, 23(5), 2000.