This is a linkpost for an essay I wrote on Substack. Links lead to other essays and articles on Substack and elsewhere, so don't click these if you don't want to be directed away from LessWrong. Any and all critique and feedback are appreciated. There are some terms I use in this post for which I provide a (vague) definition here at the outset (I have also linked to the essays where these were first used):

Particularism - The dominant world view in industrialized/“Western” culture, founded on reductionism, materialism/physicalism and realism.

The Epistemic - “By the epistemic I will mean all discourse, language, mathematics and science, anything and all that we order and structure, all our frameworks, all our knowledge.” The epistemic is the sayable; it is structure, reduction, representation. It is discrete and finite, countable.

The Ontic - “By the ontic I will, tentatively, mean what the epistemic is telling us about or corresponds to in reality, what grounds the epistemic, what ultimately exists.” The ontic is the name of the unnamable, a “placeholder” for reality-in-itself. The limit of the epistemic, unspeakable. It is the territory that the epistemic maps, it is the metaphysics to the physics of the epistemic. It is the “true infinite”, the continuum.

Epistemisation - “any epistemic process (linguistic, conceptual, mathematical, empirical) epistemises the ontic. The instant we move away from just experiencing, to structuring experience, talking about it, measuring it, the ontic has already evaporated. From the point of view of the epistemic everything is always-already epistemised.” I also discuss this concept in World Views.


True wisdom comes to each of us when we realize how little we understand about life, ourselves, and the world around us.

Socrates 

The dominant approach to artificial intelligence (AI), not only at institutions such as OpenAI, Google and Meta, but seemingly among everyone working in the field, presents issues that have not been sufficiently addressed in the ongoing debates. These issues concern what AI is in relation to human and other living intelligence, and what AI and the current approaches to it lack in terms of human and living wisdom. Intelligence has for eons been inseparable from life. In the last century the particularist[1] world view made two related achievements (amongst many) through science. First, the attempt to reduce life and intelligence to biological and ultimately physical elements and mechanistic procedures. This is the machine-model of life and intelligence, and like all models it is a limited representation. Second, the creation of digital computation, a fundamentally mechanistic and reductionistic approach to simulating intelligent functionality, and its extension into artificial intelligence research based on the machine-model. These two scientific and technological achievements are intertwined, the success of one intimately reinforcing the other.

But what counts as their success? The machine-model has helped us understand and explain a great many things about life and intelligence, but it has equally shown us its limitations. Similarly, digital computation has been an enormously important innovation, but as we shall see, it too has inherent limitations. I am not talking about limitations in computational power, language proficiency, the astounding realism of what it can generate, or simulated analytic intelligence. I am talking about an inherent limitation in wisdom: that capacity in living intelligence to see the whole as more than the parts, that is contextual, that interfaces with reality in an embodied way rather than through a disconnected, pre-processed representation of it. What I shall claim is that the inevitably and fundamentally particularist approach to AI precludes any implementation of artificial wisdom, and that this places an enormous burden on us, humanity, to be the regulatory mechanism, a task which on the one hand so far seems hardly underway, and which on the other might be impossible as we make attempts to approach artificial general intelligence (AGI). We should with the utmost priority evaluate whether AGI as a goal should be allowed to escape our collective condemnation at all, because if it is not built to be fundamentally wise, which I will argue our current approach to AI as computation cannot achieve, AGI will accomplish nothing but to escalate our current crises further.

 

Image generated by the “AI” Midjourney.

AI Risk and Safety

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

Eliezer Yudkowsky[2]

The risks of AI have been covered extensively elsewhere[3], so I will provide nothing more than a brief overview here. AI, if deployed unwisely, can lead to runaway superintelligent systems that optimize for entirely the wrong things and lead to the end of life, since it will be far beyond our capability to stop such a system from doing whatever it needs to reach its goal. This is the alignment problem: how do we align an AI's goals and purposes with the needs and continued existence of humanity and life? For those unfamiliar with AI risk, this might seem like a science fiction fantasy, but not to Yudkowsky, an AI expert and one of the founders of AI safety as a field, who has worked on this problem for decades. The reason the alignment problem might seem like science fiction is largely due to our cognitive biases, particularly our inability to intuitively understand exponential growth, coupled with our ignorance of the difference between intelligence and wisdom (see below).

Today's AI systems are largely narrow AIs, in that they are optimized towards narrowly defined goals, like contextually probable next-word prediction in Large Language Models (LLMs). LLMs, together with other media-generating (or internet-destroying) AIs, are the most publicly known, but narrow AIs are obviously being developed in most other industries as well (from self-driving cars, trading bots and protein-folding algorithms to biosynthetic and military-purpose systems). An artificial general intelligence, AGI, would be a far more powerful system, as it would be omnimodal, i.e. a general-purpose system hypothetically capable of intelligent behavior at or beyond the level of a human being. But an AGI would be far more powerful than a human: we already see how LLMs are multi-area “experts”, having been trained on large swaths of the internet. AGI, were we to achieve it, would have vastly more knowledge than a human, and could increase its own power exponentially. Why? It would have expert knowledge about its own design, about coding and hardware, psychology and coercion, DNA sequencing and manufacture, planetary logistics, energy systems, electronics and chip manufacture; the list goes on, and all these knowledge areas it could use to incrementally and recursively improve itself (a toy compounding sketch at the end of this section illustrates the point). In order to be a true AGI it would most likely have access to its own source code and underlying reality representations. Would we give it this access at all? The potential upside of accidentally creating a friendly AGI speaks in favor of this happening.

Regardless, there is little reason to think that an AGI would be containable, that we could just shut it down, because a single point of failure could be enough to allow exponential recursive self-improvement. And self-improvement for what purpose? What goal does it have? Whatever we think we have programmed it to do can have unknown consequences we are unable to account for ahead of time, or the system might override our goals and develop its own. The goals and purposes of humans are inseparable from our evolutionary, cultural and biological context. Without a full understanding of how we ourselves can be friendly, what hope do we have of expecting AI to be friendly? Without understanding how we create balanced goals for ourselves, why expect that we will understand the goals of an AI?
I think both these questions gain some substance from an analysis of wisdom: that wisdom is an irreducibly living capacity, and not a metric we have even the slightest understanding of how to incorporate into any AI system. If the goals of an AGI are unaligned with the survival of life, life will be expendable, and through coercion it could easily breach containment. We cannot expect to even remotely understand what an AI is “like” based on what people are like; as Yudkowsky points out, anthropomorphizing the artificial is a cognitive bias[4]. He ends his Time article, which I quoted from in the section epigraph, with the following sobering take: “Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down.” The stated mission of OpenAI, arguably the leading AI company in the world, is “to ensure that artificial general intelligence benefits all of humanity.”[5] Here is Sam Altman, CEO of OpenAI, in an interview: “I think AI will… most likely… lead to the end of the world. But in the meantime there will be great companies created with serious machine learning.”[6] This is not to say that Altman is oblivious to the realities of AI safety, but that the selfish short-term drive for greatness might ultimately override the risks. In this race, even a single mistake is one too many.
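As a toy illustration of how exponential self-improvement defeats linear intuition, consider the following sketch; the 10% gain per cycle is a hypothetical number of my choosing, not a prediction:

```python
# A toy compounding sketch (hypothetical numbers, not a prediction):
# even a modest gain per self-improvement cycle compounds explosively.
gain_per_cycle = 0.10  # assume each cycle improves capability by 10%

capability = 1.0
for cycle in range(1, 201):
    capability *= 1 + gain_per_cycle
    if cycle in (10, 50, 100, 200):
        print(f"after {cycle:3d} cycles: capability x{capability:,.0f}")
# Output is roughly x3, x117, x13,781 and x190,000,000: linear
# intuition fails long before cycle 200.
```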

Intelligence and Wisdom

The organism as a whole acts in a co-ordinated fashion to create and respond to meaning in the pursuit of value-laden goals, whereby it is fully realised and fulfilled as an organism.

Iain McGilchrist - The Matter with Things 

What is intelligence? What is wisdom? The table below attempts to lay out some key differences between these two concepts, abilities or properties, inspired by the treatment in McGilchrist (2021). There could be other pairings and matches between terms, and the dichotomies must not be taken to mean that one side should always be favored to the exclusion of the other; this very fallacy is what limits particularism, with its left-column-dominant world view. Wisdom sees that both sides are needed. When we talk about living, particularly human, intelligence, we invariably also mean wisdom to some inseparable extent, which is why I want to distinguish these two concepts. There are reasons not to call the AIs we see today intelligent, as they are mere information processors; the distinction I draw here reflects the need to separate artificial intelligence from living wisdom.

Table 1. Dichotomies of cognition and reality.

To say that wisdom as a concept applies to human cognition is not to say that all human cognition is wise, but that we have the capacity for wisdom, and as McGilchrist (2021) argues at length, the distinction between intelligence and wisdom is crucial to the human experience. In the following I will take intelligence and wisdom to stand in for his dichotomy between the left and right hemispheres of the brain, respectively. Intelligence is what we can associate with the particularist world view; it is analytical, sequential, linear, rational, reductionistic. It breaks everything into parts or things, and analytically studies their interactions and their linear effects on other parts or things. Wisdom, on the other hand, is associated with the holistic world view; it is about contextuality, seeing things from many angles, seeing the whole as prior to the parts; it is about synthesis and intuition. Wisdom is about accurately representing the world based on feedback, and updating one’s representation and world view given new knowledge. Intelligence operates within the representation it is given, and follows the rules of this representation, incapable of stepping back to evaluate whether the representation is accurate. Intelligence denies its own fallibility; wisdom embraces its own limitations in an effort towards always improving. Intelligence is great at narrow goal optimization, while wisdom sees the broad picture. It might seem like we should much prefer wisdom, but we don’t want just intelligence or wisdom, we want both[7]. To function in the world we utilize both intelligence and wisdom, but our particularist paradigm increasingly skews the balance in favor of intelligence, to the detriment of our civilization. Wisdom is a balancer, and for any system of cognition to be in balance with itself and its environment, it is wisdom that needs to be the master[8], not intelligence, which should be in service to wisdom. This will also hold true for artificial systems, because rampant narrow goal optimization, the oft-quoted extinction risk of artificial general intelligence and superintelligence[9], is exactly the case where there is no master, no wisdom.

A central issue with this distinction between intelligence and wisdom is that in the particularist paradigm dominant in today’s culture, we think we have a very clear idea of how computation can be used for artificial intelligence, as the sensational successes of the past decade attest, but we have no idea how, or even whether, computation can be used for artificial wisdom. To address these claims I will refer to the matrix of scenarios below. Along the first, horizontal axis we need to ask whether AI as we currently understand it is intelligent or merely a simulation of intelligence. Along the second, vertical axis we need to ask whether wisdom is irreducible or not. If wisdom is reducible to computation, how can we begin to get an idea of how it can come about? If wisdom is irreducible to computation, how would we regulate and holistically inform and balance AI? In the following I will argue that whether AI is simulated intelligence or not isn’t really the issue: the fact that we might not be able to distinguish true from simulated intelligence necessarily requires wisdom to be a regulatory counterbalance to AI. I will then argue that wisdom is irreducible to anything mechanistic and computational, i.e. that artificial wisdom is impossible on a coherent view of reality. This places the entire burden of wisdom on us, humanity. This we cannot succeed at given the current incentives and rates of development, which should further strengthen Yudkowsky’s advice to shut it all down.

Table 2. Artificial Intelligence and Wisdom. Wherever we are in this space, going forwards with rapid AI development might lead to extinction.

Is Artificial Intelligence Simulating Intelligence?

To get the more obvious Wittgensteinian considerations out of the way: “intelligence”, and related concepts, get their primary meaning from living contexts (all of nature has intelligence in terms of goal-seeking for survival and procreation, with cognition or intelligent behavior at all levels, from cell[10] to human[11]). We have now applied this term to machines for some decades, and speculated about its application to entirely other systems of organization (e.g. the Gaia hypothesis, or the market as an intelligent superorganism). We need to be wary of the difference in meaning when applying these terms to humans and non-humans, to life and the artificial. “Intelligence” is meaning-plural, polysemous, which is why AI has the prefix artificial. We must not automatically conflate intelligence in the artificial context with intelligence in the living context, because, as I mentioned, in the latter case we also often mean wisdom.

Doesn’t Wittgenstein’s Private Language Argument[12] (PLA) speak against the claim that AI is merely simulating intelligence? From this perspective, what we pragmatically speak about as intelligent is intelligent. But what the PLA shows is not in any way a proof of intelligence; it shows the limits of the particularist framework: that the assumption of independence, of something hidden, precludes making the leap to the independent. From within the framework we cannot project beyond it. But we are not our framework, and reality is not our model of it. We cannot from within the model assert whether an AI is truly intelligent or not, but the particularist framework can be transcended. Wittgenstein, and others, have argued that we take a false approach if we view thinking as a mechanical process[13], for mechanics is not fundamental to thought, but derivative of it. Thinking is not an activity; it is a capacity, the very background for everything else to take shape in. Expecting truly intelligent thinking to arise from the ground up computationally is the wrong way around.

The mechanistic and reductionistic model of life or intelligence is irreducibly epistemic[14], built up from parts and laws, allowing us knowledge of the causal, reductionist and material. But this model of life or intelligence is not life or intelligence. Similarly, the mechanistic paradigm of computation, in which all AI approaches operate, is irreducibly epistemic, built up from parts and laws, allowing simulation of the causal, reductionist and material. This computational model is not living and cannot be equated with living intelligence, though this is in no way a statement that we understand everything going on in it, or that it might not be powerful. Simulated intelligence does not exclude successful goal optimization, and through extreme computational power and innovative approaches to implementing attention, memory, generalization etc., the current state of AI has achieved enormous success at simulating understanding, representation and generation of the epistemic. But with this great computational power and complexity we also forgo a complete understanding of everything going on in its computational operation, a phenomenon known as computational irreducibility[15]. Wolfram’s cellular automaton rule 30 illustrates this: a well-defined set of initial conditions and update rules generates a chaotic system whose exact evolution cannot be predicted by any secondary system not implementing the cellular automaton itself, and as such it is irreducible. The emergentist (and Wolfram himself) now believes that true intelligence, consciousness or even reality[16] can emerge from such a substrate. As we have seen in Science and Explanation, emergence is the name the particularist applies to the irreducible and holistic aspect of reality. Neither reality, consciousness, nor true intelligence will emerge from a fundamentally reductionistic and epistemic substrate; to think so is to have been blind to the irreducible wholeness of reality, and to have confused a model of the world for the world itself.

The first 256 rows of the cellular automaton rule 30. Wikimedia Commons.
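To make this concrete, here is a minimal Python sketch of rule 30 (my own illustration of the point, not Wolfram's code): the update rule fits in one line, yet no known shortcut predicts row n other than computing every row before it.

```python
# Minimal sketch of Wolfram's elementary cellular automaton rule 30.
# The rule is trivially simple, yet the pattern it generates is
# computationally irreducible: to know row n you must compute all
# n rows; no secondary system can shortcut the evolution.

def rule30_step(row: list[int]) -> list[int]:
    """One update step; cells beyond the row's edges count as 0."""
    padded = [0, 0] + row + [0, 0]
    # Rule 30 in Boolean form: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [1]  # a single live cell, as in the figure above
for _ in range(16):
    print("".join("#" if c else "." for c in row).center(40))
    row = rule30_step(row)
```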

Large language models (LLMs) and other generative AIs that are all the rage these days do not model the world explicitly, though they might do so implicitly. They are trained to produce sensible and sophisticated representations, not to accurately represent the world. The data they are trained on is a limited, biased and superficial linguistic (or visual) “top layer” representation of the world: the epistemic. The epistemic is all out in the open, but in the computational model the epistemic is detached from reality as regards co-creativity and co-dependence, which for living intelligence are irreducible. To an AI it is representations all the way down, while for life the representations “stop” in our contact with reality, a contact that has evolved co-creatively. Would an AI with control over its underlying representations and mechanisms then be truly intelligent and not merely simulating intelligence? The paradigm and architecture are still reductionistic, which speaks in favor of a negative reply, though the power and realism of these narrow generative systems can fool us into thinking they are not merely simulating. Regardless of the answer, as we saw above in the section on AI Risk and Safety, there is plenty of reason to want to avoid this happening.
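To make the “representations all the way down” point concrete, here is a toy sketch: a bigram model, incomparably simpler than any real LLM, but trained on a signal of the same kind, text predicting text.

```python
# A toy next-token predictor (a bigram model, vastly simpler than any
# real LLM): the model only ever sees text about the world, never the
# world itself.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Its entire "knowledge" is co-occurrence statistics over tokens:
# representations of representations, nothing beneath them.
following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around
    following[prev][nxt] += 1

def next_token(prev: str) -> str:
    counts = following[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

token = "the"
generated = [token]
for _ in range(8):
    token = next_token(token)
    generated.append(token)
print(" ".join(generated))  # fluent-looking, but grounded in nothing
```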

Another argument that we may not be able to distinguish intelligence from its simulation is what I call the teleological argument of experience: we cannot but find ourselves in the world we do, developing the knowledge and epistemics we do, for anything else would not be coherent with the experience we are having. I could not find myself having the experience I am having right now without also finding a coherent explanation for that experience, which speaks to the immersive aspect of being that I brought up in Language and Meaning. Are we justified in speculating about whether there is or can be an experiencing subject that would find itself being an AI? This relates to the issue of other minds: we are equally unjustified in talking about the direct experience of each other, for it is equally inaccessible to the epistemic, which is all that can be talked about. This, as I will explicate in an upcoming essay, is another consequence of the ontic projection fallacy[17]: just as we cannot project our model of reality onto reality-in-itself, we cannot project our model of others' experience onto others' experience, because others' experience is co-dependent and co-creative with ours, just as reality is. Thus, just as there is no way to gain first-hand evidence of the experience of another person, there is no way to gain first-hand evidence of the (speculated) experience of an artificial system.

Whether AI is truly intelligent, and not merely simulating intelligence, depends entirely on the particularist world view and its reductionistic model of reality being right. I have argued extensively that this is an incoherent world view given our knowledge, and I will continue arguing this from other perspectives in upcoming essays[18]. From the view of reality as a whole, living intelligence is irreducible; thus artificial intelligence is fundamentally simulated intelligence, it simply appears to be intelligent. Regardless, the above arguments for the case that we cannot in principle differentiate intelligence from simulated intelligence show why wisdom is required to keep narrow goal-oriented intelligence in check.

Is Wisdom Reducible?

…human intelligence is not like machine intelligence - modelled, as that is, on the serial procedures so typical of the left hemisphere. Comparing artificial intelligence with human intelligence, and indeed that of organisms more generally, the microbiologist Brian Ford [Ford (2017)] writes that “to equate such data-rich digital operations with the infinite subtlety of life is absurd”, since intelligence in life operates “on informational input that is essentially Gestalt and not digital. [Living systems] can construct conceptual structures out of non-digital interactions rather than the obligatory digitized processes to which binary information computing is confined.”

Iain McGilchrist - The Matter with Things

Do not humans also operate on representations? Have I not spoken at length about epistemisation[19], whereby that which can be talked about is always-already represented and of the epistemic? Here comes perhaps the single most important point: the representations we feed AI systems are epistemised, always-already reduced; they are floating-point matrix representations of tokens, pixels, voxels etc. But the process of epistemisation by which living humans as experiencers model our experience is an irreducible process! The scaffolding imitates the form of the cathedral, but it isn’t the cathedral[20]. Any claim that living intelligence operates solely on discrete representations of the world is based on the particularist mechanistic model of life and intelligence, but reality is not our model of it. Our representations are co-creative with reality through our experience[21]. Living intelligence is not reducible to data or algorithms. The only species we correlate with wisdom is our own, and however anthropocentric it might seem, we are also the only species that epistemises its experience, that reduces reality to representations which we then manipulate. We cannot base our hope or ambition for wise AI on the particularist world view with its limited machine-model of intelligence, for then we would be mistaking our model of reality for reality, forgetting the primacy of experience!

Wisdom, even more so than intelligence, is non-reductionistic and shaped by living as a whole. Living intelligence cannot be separated from its full context; we cannot completely model life or intelligence reductionistically, for the parts do not sum to the whole[22]. You might say that current AI approaches incorporate context through a “context window”, and that this may be a path to artificial wisdom. But this is a context window on the epistemic representation, not on a context in reality. The “context window” of humans is infinitely more complex, multimodal and interactive, shaped by evolutionary pressures and cultural nurture. Our context is fully immersive. We cannot assume that by connecting and integrating AIs that are narrow in terms of their goals but wide in terms of their epistemic context, wisdom will somehow pop out or emerge. This as an approach to general AI is far too risky. If intelligence is irreducible, so that artificial intelligence is mere simulation, then so is wisdom. Even if intelligence is reducible, we may not be able to differentiate true intelligence from simulated intelligence. Hence, even if wisdom is reducible, we may not be able to differentiate true wisdom from simulated wisdom. If we cannot thus ensure true wisdom, we as a civilization will have to act as the regulator. But this is impossible given the computational power granted to these systems even today! We cannot keep up; as the cellular automaton example showed, computational irreducibility reigns.

The bottom line is: on a view of reality as a whole in which wisdom is irreducible, we have strong reason to believe that no AI can be wise, as all AIs are built on the particularist model. In a sense I would like nothing more than to be mistaken in this, for wisdom to be reducible and able to emerge from the computational, for then we “merely” face the technical alignment problem, though we are far behind on that as well. Yudkowsky’s words chime like a bell of doom in a soundless vacuum: “Shut it all down. We are not ready.”

Bonus: The Metacrisis

You should not read this final section if you are not prepared to face an even more devastating perspective on our current world. This section will be a variation on the arguments of Schmachtenberger[23], a take on what may in many ways prove to be the most important juncture in the history of humanity. I briefly stated the dangers and risks of AI above, and mentioned the alignment problem. That risk in itself is not what I will talk about here, but one that is, or at least seems to be, far more pertinent, if not equally life-destructive. In common with the alignment problem, this other risk also relates to the nature of exponential growth, an imperative we are embedded in economically and ecologically, as we are approaching or already past tipping points of planetary boundaries, like CO2 concentration, biosphere integrity (biodiversity loss through species extinction), and land-system and freshwater change[24]. The nexus at the heart of all of these risks, which together constitute the poly- or metacrisis[25] (the unnamed crisis I took as my starting point in Philosophy for our Future), is particularism as world view, leading to narrow goal optimization, leading to growthism: growth for the sake of growth[26]. The planetary tipping points are all symptoms of this underlying interconnected nexus, and attempts to solve any one symptom in isolation will mean nothing if we don’t solve the underlying issues. I will highlight four important effects (among many more) that play into this conglomerate of self-destruction:

First is “opportunity-over-risk”, the driver behind the phenomenon that, due to the potential upsides of a technology, we have to utilize and optimize it as quickly as possible, regardless of risk, because if we don’t do it, somebody else will, and we will lose out and either not have the chance to do things “the right way” or be driven out of the competition. This effect is due both to arms-race dynamics and to market forces arising from growthism, realized as capitalism in our economy.

Second is the Jevons paradox, the empirical effect that any increase in the efficiency of a process is eaten up and surpassed by a connected increase in consumption, due to falling cost (a toy numeric sketch follows this list).

Third is the fact that bodies of governance externalize the negative effects of their governance, like the global north’s resource demands externalized as habitat destruction and emissions in the global south. 

Fourth are nth-order effects, the consequence of the interconnectedness of everything, in particular ecologically. If we do one thing over here, it will have 2nd-, 3rd-, ..., nth-order effects elsewhere, many of which are hard to predict and account for in advance, and some of which are “unknown unknowns” that we simply cannot predict or account for. An example is what we see in the Jevons paradox, with increased renewable energy capacity leading to increased energy consumption as an unexpected effect.
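Here is the toy numeric sketch of the Jevons paradox promised above; the efficiency gain and demand elasticity are hypothetical numbers of my choosing, only to show the mechanism:

```python
# Toy constant-elasticity sketch of the Jevons paradox (hypothetical
# numbers): an efficiency gain lowers the cost per unit of useful
# work, demand responds, and total resource use can rise.
efficiency_gain = 2.0                   # work per unit fuel doubles
cost_factor = 1 / efficiency_gain       # cost per unit work halves
elasticity = 1.5                        # hypothetical demand elasticity

# With demand proportional to cost^(-elasticity):
demand_factor = cost_factor ** (-elasticity)    # ≈ 2.83x more work
fuel_factor = demand_factor / efficiency_gain   # ≈ 1.41x more fuel

print(f"work demanded: x{demand_factor:.2f}, fuel burned: x{fuel_factor:.2f}")
# Efficiency doubled, yet fuel consumption rose by roughly 41%.
```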

Let us now add AI to the mix: AI is omnimodal and potentially provides ways to increase the efficiency and production of nearly every conceivable existing technology and industry. Couple this with the Jevons paradox of increased energy consumption despite efficiency increases, the paradigm of growthism the entire world is enmeshed in economically and industrially, its consequent drive towards “opportunity-over-risk”, externalization and unpredictable nth-order effects, and to say we are facing a bleak future doesn’t even start to cover it. And this future does not have to be decades or a century away, as many experts predict the timeline to AGI to be, but can be far shorter, because this risk merely requires narrowly oriented AI at a scale similar to, or not greatly surpassing, the level it is currently at, a level we should nevertheless keep in mind is on an exponential trajectory. Planetary tipping points will be rammed through by corporations acting in their own financial interest towards growth, and despite climate “agreements” we will propel our environment towards rapid, cascading failures and unstoppable ecological collapse. And now we can add the risk of even attempting to develop AGI in this paradigm of growthism and “opportunity-over-risk”, based on an irreducibly unwise approach to AI. This will be the legacy of humanity if we do not act, if we do not impose living wisdom, so we should have enormous motivation to shift the underlying world view out of which all of this grows.

References

Bostrom, N., & Cirkovic, M. M. (Eds.). (2008). Global Catastrophic Risks. OUP Oxford.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Ford, B. J. (2017). Cellular intelligence: Microphenomenology and the realities of being. Progress in Biophysics and Molecular Biology, 131, 273-287.

Hagens, N. & Schmachtenberger, D. (May 17th 2023). Daniel Schmachtenberger: “Artificial Intelligence and the Superorganism” [Audio Podcast Episode]. In The Great Simplification. https://www.thegreatsimplification.com/episode/71-daniel-schmachtenberger

Hickel, J. (2021). Less is More: How Degrowth Will Save the World. Random House UK.

McGilchrist, I. (2021). The Matter with Things. Perspectiva.

Shapiro, J. A. (2011). Evolution: A View from the 21st Century. Financial Times / Prentice Hall.

Wittgenstein, L. (2005). The Big Typescript, TS. 213 (C. G. Luckhardt & M. E. Aue, Eds. & Trans.). Wiley.

Wolfram, S. (2002). A New Kind of Science. Wolfram Media, Inc.

Wolfram, S. (2020). A Project to Find the Fundamental Theory of Physics. Wolfram Media, Inc.

  1. ^
  2. ^

    Yudkowsky’s March 2023 article in Time, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down”, from which the section epigraph is quoted.
  3. ^

    See e.g. Yudkowsky's chapters in Bostrom & Cirkovic (2008), Bostrom (2014), AI Risks that Could Lead to Catastrophe | CAIS.

  4. ^

    See Yudkowsky’s Artificial Intelligence as a positive and negative factor in global risk in Bostrom & Cirkovic (2008). 

  5. ^
  6. ^
  7. ^

    Notice how “Either/Or” versus “Both/And” is itself a distinction associated with intelligence and wisdom, respectively.

  8. ^

    McGilchrist’s book The Master and His Emissary bases its title on this relationship between the right and left hemisphere.

  9. ^

    Bostrom (2014).

  10. ^

    Shapiro (2011), Ford (2017).

  11. ^

    I must also address the statement I made in note 5 of Philosophy for our Future: “Any agency at an aggregate level above individuals would not be meaningful to talk about as an agency, exactly because this term gets its meaning in an inter-individual context. Is there agency in the flock behavior of birds?” This note was perhaps not nuanced enough. First of all, this is certainly intelligence. Is it also agency? My present take is that it bears resemblance to agency, being a defensive mechanism, but that it would not be accurate to call it an agency, because outside of this singular swarming behavior the cohesion out of which the behavior arises is lost. An agency, like life, is cohesive. If we call swarming agency, then it would be a distinct, single-purpose, single-behavior agency.

  12. ^
  13. ^

    Wittgenstein (2005) §48-49.

  14. ^
  15. ^

    Wolfram (2002).

  16. ^

    The proposed underlying system is now a graph, with an initial structure and update rules, see Wolfram (2020).

  17. ^
  18. ^

    See my other essays. In the upcoming work, I will in particular be arguing from the perspective of physics.

  19. ^
  20. ^
  21. ^

    Which is why I describe science as constructivist in Science and Explanation.

  22. ^

    I take the irreducibility of experience to computation as a counterargument to the computational simulation hypothesis as well.

  23. ^

    Hagens & Schmachtenberger (2023). Any potential misrepresentation or undue simplification in the following rests on me. 

  24. ^
  25. ^
  26. ^
Comments

I think this post would benefit from an abstract / summary / general conclusion that summarizes the main points and makes it easier to interact with. Usually I read a summary to get an idea of a post, then browse the main points and see if I'm interested enough to read on. Here, it's hard to engage, because the writing is long and the questions it seems to deal with are nebulous.