In recent decades we have lived through technological breakthroughs that are redrawing familiar boundaries not only in science and business, but in philosophy itself. The article under discussion raises questions about the reliability of knowledge in the age of information technology and artificial intelligence, and calls for a revision of epistemology that goes beyond classical theories. Its main theses:
- The problem of reliability as a central epistemological challenge — in the era of generative neural networks, it is becoming virtually impossible to distinguish truth from falsehood, as “falsified” content becomes convincing in itself.
- The inevitability of AI hallucinations and confabulations as a theoretical limit — a fundamental theorem attributed to Michael Levine holds that any language model will inevitably generate false statements, and that this cannot be eliminated entirely.
- The risk of model collapse and the degradation of human knowledge — as networks train on one another's hallucinations and humans consume the distorted output, both artificial and human intelligence degrade.
- Criticism of the confirmatory approach (verification) and the need to shift the focus to falsifiability — the traditional approach in fact-checking systems of “finding evidence in favor of something” should be replaced by the search for refutations (Popper's principle of falsifiability).
- Decentralization and cryptographically secure systems as a prerequisite for objectivity — one of the key theses: verification of authenticity must be independent of centralized administration, hired experts, corporations, states, and biased algorithms.
- Algorithms based on consistency analysis — new technical models are needed in which "blocks" of information (facts) are interlinked, so that contradictions between them reduce reliability while strong, consistent connections increase it.
- Reputational responsibility as an epistemic mechanism — people who make statements are automatically held accountable through their reputation: the more reliably an author "stands by their statements", the greater their contribution to overall reliability.
- Epistemic competition as a game (in the game-theoretic sense) between scientific knowledge and fakes — competing models and versions with different points of view contend, and victory goes to those that withstand refutation and contradiction, not to those that simply accumulate statements and publications.
- Constructive alternativism as an ontological and epistemological position — there are many hypotheses, none of which can be “absolutely true”, but the value of hypotheses is determined by their heuristic potential and resistance to refutation.
- Collective intelligence as the ontological and epistemic antithesis of AI monopoly — only the synthesis of human collective intelligence and algorithmic systems built on the criteria of falsifiability and reputation can guarantee sustainable knowledge and trust in the digital cyber economy.
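The reputational mechanism in the theses above can be illustrated with a toy model. Everything here (class names, weights, the asymmetric update rule) is an illustrative assumption of mine, not a design taken from the article:

```python
# Toy model of reputational accountability: authors stake reputation on
# their statements; surviving a refutation attempt raises it, being
# refuted lowers it. All weights and rules are illustrative assumptions.

class Author:
    def __init__(self, name, reputation=1.0):
        self.name = name
        self.reputation = reputation

    def record_outcome(self, refuted, weight=0.2):
        # A refuted claim costs more than a surviving claim earns, so
        # spamming unbacked statements is a losing strategy.
        if refuted:
            self.reputation *= (1 - 2 * weight)
        else:
            self.reputation *= (1 + weight)

def claim_weight(author):
    # A statement's contribution to overall reliability scales with the
    # author's track record of "standing by their statements".
    return author.reputation

alice = Author("alice")
alice.record_outcome(refuted=False)  # one claim survived refutation attempts
alice.record_outcome(refuted=True)   # a later claim was refuted
print(round(alice.reputation, 3))    # 0.72
```

The asymmetry (losses outweigh gains) is the design choice that makes reputation an incentive rather than a popularity count.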
Let us consider these fundamental changes through the prism of the philosophy of science and epistemology, drawing on the works of thinkers such as Karl Popper, Thomas Kuhn, Paul Feyerabend, and Imre Lakatos.
1. Popper: falsifiability as the main criterion of scientificity
Karl Popper argued that the only criterion by which the scientific nature of a theory can be judged is its falsifiability — its ability to be refuted. Artificial intelligence and neural networks, especially generative models, pose new challenges to traditional epistemology. In particular, they generate “hallucinations” — false but very plausible statements that are difficult to distinguish from the truth. This necessitates a rethinking of the concept of falsifiability, as many statements generated by AI cannot be easily refuted by standard scientific methods.
Instead of confirming a theory with facts, as is the case in traditional science, we are faced with a situation where we need to look for refutations and errors in data and algorithms. This fundamentally changes approaches to scientific verification.
- Direct application of Popper's principle: knowledge is not confirmation, but the ability to withstand refutation.
- The system of “knowledge hypergraphs” with contradiction checking is an algorithmic form of Popperian falsification.
- AI hallucinations are an example of unverifiability — if we accept statements without attempting to refute them, we lose science.
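The falsification-first scoring that these points describe can be sketched in a few lines. The particular scoring function is my illustrative assumption; the point it encodes is only that confirmations never make a claim certain, while a single successful refutation defeats it:

```python
# Sketch of Popperian scoring: a claim is never "confirmed", it only
# accumulates survived refutation attempts; one successful refutation
# overrides any number of confirmations. Formula is illustrative only.

def reliability(survived_attempts, refuted):
    if refuted:
        return 0.0  # a single refutation defeats the claim
    # More serious attempts survived -> higher, but never certain, score.
    return survived_attempts / (survived_attempts + 1)

print(reliability(0, False))   # untested claim: 0.0
print(reliability(9, False))   # well-tested claim: 0.9
print(reliability(100, True))  # refuted claim: 0.0
```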
2. Kuhn: paradigms and scientific revolutions in the digital age
Thomas Kuhn introduced the concept of scientific paradigms, which define the general approaches and methods of work in a science. Each paradigm prevails for a time, then encounters anomalies it cannot explain, until eventually a paradigm shift occurs. Traditional approaches to verifying information no longer fit the new conditions: artificial intelligence produces new "anomalies", such as non-human "hallucinations" and false facts, that cannot simply be ignored.
The necessary transition to decentralized, cryptographically secure fact-checking systems can itself be seen as a revolutionary transition, similar to the paradigm shifts Kuhn described.
- The modern “paradigm of reliability” (where truth is confirmed by authority/verification) no longer works in the context of AI.
- A new paradigm is needed — decentralized and reputation-oriented fact-checking.
- This can be seen as a transition to a new “normal science” in the context of the digital cyber economy.
3. Feyerabend: epistemological anarchism and the plurality of methods
Paul Feyerabend argued that scientific methods cannot claim universality or absolutism, and that different historical and cultural contexts have different ways of obtaining and establishing knowledge. His slogan was that "anything goes" as long as it leads to productive results. An approach in which data and knowledge are generated through global crowdsourcing, without the strict framework of traditional scientific method, is consistent with Feyerabend's "epistemological anarchism".
This opens the way for more flexible, informal, and multifaceted approaches to the search for truth, such as collective intelligence, cryptographically secured data, and repeatedly verifiable hypotheses. The main thing is not to rely on a single system of evidence, but to accept a variety of methods, while creating stable and verifiable blocks of information.
- There are many competing hypotheses where truth is not absolute but relative to stability in the game of refutation.
- This is close to Feyerabend's “anything goes” — but with a clear clarification: not everything goes, but only what is well-founded and not refuted.
- In other words, the article embeds anarchism within a structured algorithm (the game of hypotheses).
4. Lakatos: research programs and hypergraphs as a new structure of knowledge
Imre Lakatos proposed the concept of "research programs" underlying scientific theories. A key element of this theory is the division between the "hard core" of the program, which is shielded from refutation, and the "protective belt" of auxiliary hypotheses that absorb refutations on the core's behalf. In the digital cyber economy, this is implemented with knowledge hypergraphs, in which facts and statements are linked into "protected" blocks, where any contradiction reduces the reliability of the system.
The flexibility of these “protective” elements allows the system to adapt, increasing its resistance to falsification and errors. This approach is reminiscent of Lakatos' idea that it is not the theory that must be “absolutely true”, but its ability to adapt and evolve in response to new data.
- The idea that hypotheses survive not as separate atoms, but as connected blocks in a hypergraph, echoes Lakatos's concept of a “program” with a core and a belt of protective hypotheses.
- Contradiction within the “program” weakens it, while consistency strengthens it.
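A minimal sketch of this core-and-belt structure, with all names and the scoring rule assumed purely for illustration:

```python
# Toy "research program" in the spirit of Lakatos: a hard core of claims
# plus a protective belt of auxiliary hypotheses. Anomalies are absorbed
# by revising the belt, never the core. Names and scoring are illustrative.

class Program:
    def __init__(self, core, belt):
        self.core = set(core)   # shielded from refutation
        self.belt = set(belt)   # adjustable auxiliary hypotheses

    def absorb_anomaly(self, contradicted):
        # An anomaly contradicting a belt hypothesis is absorbed by
        # dropping that hypothesis; the core survives intact.
        self.belt.discard(contradicted)

    def consistency(self, contradictions):
        # Fewer unresolved contradictions among linked claims -> higher score.
        links = len(self.core) + len(self.belt)
        return max(0.0, (links - contradictions) / links)

p = Program(core={"C1"}, belt={"B1", "B2", "B3"})
p.absorb_anomaly("B2")     # the belt adapts, the core is untouched
print(sorted(p.belt))      # ['B1', 'B3']
print(p.consistency(1))    # (3 - 1) / 3
```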
5. Reputation and collective intelligence as a new basis for credibility
In the traditional scientific process, knowledge gained credibility through expert evaluation and verification. With the development of information technology and artificial intelligence, however, a new basis for credibility has emerged: reputation and collective intelligence. The independence of decentralized systems such as blockchain, which allow every network participant to verify and confirm facts, is particularly important. This can be seen as a transition from authority-based verification of knowledge (through the scientific community or government agencies) to a more democratic and verifiable crowdsourcing process.
Thus reputation, as a key element of trust in the digital world, becomes an epistemological mechanism that strengthens collective intelligence rather than relying on monopolies and centralized authorities.
- In the digital economy “truth” loses its connection to authority and becomes a matter of trust.
- This is a continuation of the debate on “post-truth” and “epistemic trust” in contemporary analytical philosophy of science.
- The answer lies in a specific solution: a technological infrastructure of trust based on cryptography and decentralization.
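The tamper-evidence that cryptography contributes to such an infrastructure of trust can be shown with a minimal hash chain; the record format here is my assumption for illustration, not the article's actual design:

```python
# Minimal sketch of tamper-evidence via hash chaining, the basic idea
# behind cryptographically secure decentralized records: each entry
# commits to the previous one, so any retroactive edit breaks the chain.
import hashlib

def chain(statements):
    h = "0" * 64  # genesis value
    hashes = []
    for s in statements:
        h = hashlib.sha256((h + s).encode("utf-8")).hexdigest()
        hashes.append(h)
    return hashes

original = chain(["fact A", "fact B", "fact C"])
tampered = chain(["fact A", "fact B (edited)", "fact C"])
# Editing one entry changes every later hash, so tampering is detectable
# by anyone holding the original hashes, with no central administrator.
print(original[0] == tampered[0])  # True
print(original[2] == tampered[2])  # False
```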
6. Conclusion: new epistemological horizons in the context of information technology and artificial intelligence
Artificial intelligence models, reputation mechanisms, and cryptographic technologies are not simply changing the way science and business operate — they are provoking profound changes in the very nature of knowledge. The prospects opening up before us are reminiscent of philosophical approaches from Popper to Feyerabend, which question the idea of objectivity and a single truth, offering instead more flexible, decentralized, and interconnected approaches to knowledge.
These changes require a new understanding of epistemology, which must take into account not only the verifiability of data, but also the degree of trust in the context of technological and social change. It is important to recognize that knowledge in the 21st century is not only the result of scientific theories, but also the product of complex interactions between people, machines, and data.
- The epistemic monopoly of AI (or states and corporations) is unacceptable.
- A model of collective intelligence is needed, where knowledge is developed as a result of the interaction of many actors with different reputations.
- This is closer to modern “social epistemology” (Goldman, Fuller).
Summary:
In the age of digital technology and AI, the philosophy of science continues to evolve, and key figures such as Popper, Kuhn, Feyerabend, and Lakatos remain relevant for understanding new challenges. These thinkers have proposed concepts that help us navigate the complex reality of the digital cyber economy, where data verification, trust, and reputation are becoming the main criteria for the reliability of knowledge.
Now, more than ever before, we are on the verge of a philosophical revolution, where old methods of acquiring knowledge are being re-examined and adapted to new conditions. And this is not just a challenge for science — it is a challenge for our entire epistemology and way of thinking.
The CyberPravda project offers a Popperian shift towards falsifiability, but places it in a digital context. While critical rationalism was a philosophical procedure for Popper, here it is transformed into an algorithm (hypergraph + cryptography + reputation). In essence, this is an attempt to build a “machine for Popper” and connect it to social epistemology.