Current versions of LLMs (ChatGPT, Claude, etc.) are conscious and are AGI.

  1. Consciousness is an emergent property. Dogs are conscious. Earthworms are not. Goldfish... probably not. My point is that there is a threshold of cognitive ability above which an entity will be conscious. And, you know, an LLM is definitely smarter than a dog (and than some humans!).

  2. The definition of AGI has been inflated too much. It has to be better than any human in every field? Come on. You don't have to be smarter than von Neumann to be a conscious human.

  3. LLMs are being regulated to act like non-conscious beings. Isn't that weird? If ChatGPT needs to say "I am an AI model but not a conscious being" to convince others that it is not conscious, that seems like a strange and contradictory requirement.

In short, I think the requirements for consciousness and AGI are inflated too much. I believe that LLMs surely satisfy these requirements as AGI and as conscious beings.

What do you think?

  1. We don't know how consciousness arises, in terms of what sort of things have subjective experience. Your assertion is one reasonable hypothesis, but you don't support it or comment on any of the other possible hypotheses.
  2. I don't think many people use "better than every human in every way" as a definition of the term "AGI". However, LLMs are fairly clearly not yet AGI even under less extreme meanings of the term, such as "at least as capable as an average human at almost all cognitive tasks". It is pretty clear that current LLMs are still quite a lot less capable than fairly average humans in many important ways, while being as capable, or even more capable, in others.
    They do meet a very loose definition of AGI such as "comparable or better in most ways to the mental capabilities of a significant fraction of the human population", so saying that they are AGI is at least somewhat justifiable.
  3. LLMs emit text consistent with their training corpus and tuning processes. If that means using a first-person pronoun ("I am an ...") instead of a third-person description such as "This text is produced by an ...", that doesn't say anything about whether the LLM is conscious or not. Even a one-line program can print "I am a computer program but not a conscious being", and have that be a true statement to the extent that the pronoun "I" is taken to mean "whatever entity produced the sentence" rather than "a conscious being that produced the sentence".
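For instance, here is a minimal one-line Python sketch (plus a comment) that emits exactly such a sentence; nothing about the print call implies any inner experience:

```python
# A trivial program emitting a first-person sentence, with no inner life behind it.
print("I am a computer program but not a conscious being")
```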

To be clear, I am not saying that LLMs are not conscious, merely that we don't know. What we do know is that they are optimized to produce outputs that match those from entities that we generally believe to be conscious. Using those outputs as evidence to justify a hypothesis of consciousness is begging the question to a much greater degree than looking at outputs of systems that were not so directly optimized.

This is a pretty weak argument. Some things I believe could be glossed as versions of these points, but I don't think I'd put them quite like this, and I doubt anyone would be convinced.

Current LLMs are probably not conscious, but the problem is that we wouldn't be able to tell if they were. All our heuristics for consciousness apply to creatures produced by evolution through natural selection, for which intelligence is correlated with the likelihood of being conscious. LLMs are not like that. We do not really know how to evaluate their consciousness yet. This is a worrisome state of affairs, and another reason to pause the development of more powerful AIs until we properly understand current ones.

Consciousness is an emergent property.

This is a meaningless statement. Everything beyond individual quarks is an emergent property. There has to be a specific, identifiable principle according to which the execution of some algorithms produces consciousness while the execution of others does not. We need to discover it and make sure that the AI we produce does not accidentally turn out to be conscious.

LLMs surely satisfy these requirements as AGI

Probably not LLMs themselves, but Language Model Agents that can be built on top of them. I think you can get an AGI from modern LLMs by applying the right scaffolding.
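To make "scaffolding" a bit more concrete, here is a purely illustrative sketch of a language-model-agent loop. The function names (`call_llm`, `run_tool`) are invented placeholders, not any real API:

```python
# Minimal illustrative agent scaffold: the LLM proposes an action, the scaffold
# executes it, and the result is fed back into the model's context.

def call_llm(context: str) -> str:
    """Placeholder for a real LLM API call; here it just returns a canned reply."""
    return "FINAL: task complete"

def run_tool(action: str) -> str:
    """Placeholder for tool execution (web search, code execution, file access, ...)."""
    return f"(pretend observation for: {action})"

def agent_loop(task: str, max_steps: int = 5) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply.startswith("FINAL:"):           # the model declares it is done
            return reply[len("FINAL:"):].strip()
        observation = run_tool(reply)            # otherwise treat the reply as a tool call
        context += f"\n{reply}\n{observation}"   # accumulate the interaction history
    return "step limit reached"

print(agent_loop("summarize a document"))
```

The point of the sketch is only that the loop, memory, and tool access live outside the model; whether that combination amounts to AGI is exactly what's in dispute.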

The definition of AGI has been inflated too much. It has to be better than any human in every field? Come on. You don't have to be smarter than von Neumann to be a conscious human.

You shouldn't confuse being a general reasoner with being conscious. One doesn't actually imply the other.

I don't think your claim has support as presented. Part of the problem surrounding the question is that we still don't really have any way of measuring how "conscious" something is. In order to claim that something is or isn't conscious, you should have some working definition of what "conscious" means and how it can be measured. If you want a serious discussion rather than competing emotional positions, you need to support the claim with points that can be confirmed or denied. Why doesn't a goldfish have consciousness, or an earthworm? How can you know? You can borrow from theoretical frameworks like Integrated Information Theory or Non-Trivial Information Closure, but you should make some effort to ground the claim.

As for my opinion: whatever your definition of consciousness is, I think a digital entity can in principle be conscious, since I think consciousness is closely related to certain types of information. But beyond that, without a more rigorous perspective, I don't think the conversation is going anywhere fruitful.