Since the rise of OpenAI, conscious artificial general intelligence (AGI), of the kind brought to life in The Terminator and The Matrix, has captivated the public. Nevertheless, this concept poses a paradox within computational theory.

In response to these developments, prominent figures such as Yuval Noah Harari, Elon Musk, and experts in the field rallied support for a temporary pause in AGI development, and over 25,000 individuals signed an open letter expressing their concerns. The theory of AGI rests on the Computational Theory of Mind (CTM), one of whose central theses is functionalism: mental states do not depend on the material (the hardware) but solely on the input-output relations they bear within the system of which they are parts. Many proponents of this theory do not reject the notion of programmable AGI. A survey conducted by Katja Grace et al. (2022) revealed that 99% of AI researchers believe that the realization of AGI is only a matter of time. Max Tegmark, a physics professor at MIT and machine learning researcher, dismisses the idea that factors other than time can impede its development as mere "carbon chauvinism." He argues that we cannot predict which tasks artificial intelligence will never be able to perform.

These findings raise important questions. Whether AGI can exist is not simply a question of whether non-carbon-based general intelligence can exist, but also of whether such an intelligence can be programmed. Functionalism suggests that a silicon-based human-level intelligence exists in some possible world, but it does not guarantee that such an intelligence is programmable. Programming a machine means defining its computational conditions: specifying how inputs translate into outputs. The concept of AGI assumes that human-level intelligence can be extended to programmable machines, which implies that the computational conditions for human-level intelligence can be delineated. (This task need not be accomplished by the human mind; it is conceivable that only AI will be capable of reaping those laurels.)

Chalmers (2011) highlights the distinction between performing computation and being programmable. A vivid illustration: conscious and hypnotized human-level intelligences both engage in computation, but only the latter can be deemed programmable. During hypnosis, the mind can operate according to a program that disregards logical rules which would otherwise create contradictions for a conscious mind, as demonstrated by the work of Martin Orne (1959). For instance, a hypnotist might suggest to a subject that he is deaf and then ask him, "Can you hear me now?" He may respond "No," thereby manifesting "trance logic" that he would consciously recognize as self-contradictory. Hypnosis creates a psychological law, through the "program" imposed by the hypnotist, that restricts the range of admissible mental states.

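To make the contrast concrete, here is a toy sketch of "trance logic" as a fixed input-output rule. The suggestion table and function names are purely hypothetical stand-ins, not a model of how hypnosis actually works; the point is only that a programmed rule is applied unconditionally, so the contradiction of answering a heard question never enters the computation.

```python
# Toy illustration: a programmed response rule maps inputs to outputs
# unconditionally. The names below are hypothetical stand-ins.

HYPNOTIC_SUGGESTIONS = {"Can you hear me now?": "No"}

def hypnotized_reply(question: str) -> str:
    # The imposed rule is applied as given; nothing in the computation
    # checks whether the answer contradicts the fact that the question
    # was heard in the first place.
    return HYPNOTIC_SUGGESTIONS.get(question, "...")

print(hypnotized_reply("Can you hear me now?"))  # prints "No"
```
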
However, there are no psychological laws (Block and Fodor 1972) or computational conditions (Churchland, Koch, and Sejnowski 1990) that govern consciousness by determining its operations. Neuroscientists estimate that while the brain receives roughly 11 million bits of information per second unconsciously, at most about 40 bits reach conscious awareness, yet this still allows for any combination of successive mental states. Conscious mental states are those that win the competition with unconscious mental states for behavioral control and thereby acquire long-term effects (Dennett 1991), and there is no theoretical limit on how mental states may compete with one another. For instance, when attempting to understand the cause, purpose, or meaning of a phenomenon, we can repeatedly ask "Why?" regardless of the answer that arrives as input. Similarly, the "What?" question can be employed to organize things, and the "How?" question to explore the process of achieving something. In theory, a conscious human-level intelligence can question any answer, provided there is no general computational condition that determines otherwise. Consequently, it does not follow that consciousness is programmable simply because it can be regarded as a computational system.

So the question arises: if programmability is not a condition for a consciousness that performs computations, is it instead a condition that excludes consciousness? If the computational conditions for how an intelligence should respond to questions are constantly changing, will AGI be created one fine day, or are all our efforts futile, capable only of producing "hypnotized" intelligences through programming?

In the Terminator vs. Jesus comedy sketch, the Terminator, continually questioning Jesus with "Why?", appears to maintain its skeptical stance indefinitely, regardless of the answers it receives. However, we cannot prove this definitively, since we cannot anticipate whether it will eventually run out of questions for certain responses. Questions and answers, in and of themselves, are merely inputs, states, and outputs, not computational conditions. If the Terminator were to ask, "Is there why a how?", we might suspect that it lacks consciousness. Yet how can we ascertain which input could provoke such a query? Until we can answer this, we cannot rule out the possibility that the Terminator possesses human-level intelligence. It is the AGI programmer, however, who must possess the answer, since it is the condition for programming human-level intelligence. The programmer must therefore establish computational conditions that determine when the AGI is satisfied with an answer and when it should question further. Without such conditions, there is no criterion for deciding when an answer can serve as input for another question and when it cannot. But these conditions can only be defined in a self-undermining way: if the AGI is given as input that every answer can be questioned further regardless of input, it must nevertheless output satisfaction with some answer; and if it is given as input the condition under which it should be satisfied regardless of input, it must question that very input to make sure. Defining the computational conditions independently of the input thus leads to the halting problem. Consequently, the AGI designer is left with no alternative but to confront the impossible: the realization that human-level intelligence cannot be programmed.

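To make the structure of this reduction explicit, here is a minimal sketch in Python. Every name in it (`is_satisfied`, `contrarian`) is a hypothetical stand-in introduced for illustration, not a real AGI interface; the point is only that any total decision procedure of this kind can be diagonalized against, exactly as in the halting problem.

```python
import inspect

def is_satisfied(agent_source: str, answer: str) -> bool:
    """Hypothetical 'computational condition' the AGI programmer would have
    to supply: given an agent's source code and an answer, decide whether
    the agent accepts the answer (True) or questions it further (False).
    The argument above is that no such total procedure can exist."""
    raise NotImplementedError

def contrarian(answer: str) -> None:
    """An agent defined to do the opposite of whatever is_satisfied predicts."""
    my_source = inspect.getsource(contrarian)
    if is_satisfied(my_source, answer):
        # Predicted to accept the answer: question it forever instead.
        while True:
            answer = answer + " Why?"
    # Predicted to question forever: accept immediately and halt.
    return
```

Whatever verdict `is_satisfied` returns about `contrarian`, the agent does the opposite, so the predicate cannot be correct on all inputs; this is the same self-reference that makes the halting problem undecidable.
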
The roots of the halting problem can be traced back to the work of Gödel, Church, and Turing before World War II, when these mathematicians laid the foundations of computation theory. Turing (1936) formulated the question of universal decidability that gave rise to the halting problem: no program can be written that decides, for every program and input, whether that program halts after a finite number of steps. The unsolvability of the halting problem has immediate practical bearing on software development. In the 1960s, with the advent and widespread availability of computers, it became apparent that many problems that seemed solvable in principle were in fact unsolvable even with the most powerful machines. This was not due to any conspiracy of carbon-based consciousness, but a consequence of computational conditions being determined by their input. One therefore need not favor carbon-based life to conclude that programming AGI is unachievable; the assumption that it can be done leads to a mathematical-logical contradiction.

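For reference, Turing's argument has the same shape as the sketch above. The following is the standard proof-by-contradiction, again with hypothetical names, since no real `halts` function can actually be written.

```python
def halts(program_source: str, argument: str) -> bool:
    """Hypothetical total decider: True iff the program halts on the argument."""
    raise NotImplementedError  # Turing (1936): no such program exists

def paradox(program_source: str) -> None:
    """Do the opposite of whatever halts() predicts about a program run on itself."""
    if halts(program_source, program_source):
        while True:   # predicted to halt: loop forever
            pass
    return            # predicted to loop: halt immediately

# Feeding paradox its own source code makes halts() wrong either way,
# so the assumed decider cannot exist.
```
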
While previous attempts (Lucas 1961, Penrose 1989) sought to refute the Computational Theory of Mind by introducing the halting problem into philosophical discussions of the mind, my argument emphasizes that regardless of whether CTM is the correct model, programming conscious human-level intelligence would run into the halting problem. When we define the computational conditions for an intelligence, we are in effect choosing the method of its "hypnosis." Within the framework of CTM, then, it remains possible that an intelligence which can be programmed does not possess consciousness. It follows that artificial intelligence research has made no progress toward achieving consciousness, not even within the realm of infinite possibilities. Even after millions of years this situation would not change: AGI is simply impossible.
