[20240318 update: Removed the link to the new post. The original purpose of the new post was to end the messiness of replies to replies to replies to replies. However, I've decided I can put the ideas of some of my replies as addenda to this post instead. (See bottom of this post.)]

[

20240320 edit: Hopefully this is my last edit. I have reworked this post using the very helpful LW replies I received, which caused me to (1) think more deeply about the nature of consciousness, (2) research theories of consciousness, (3) address the hard problem (briefly), (4) be more specific in my labels, (5) think about whether LLM behavior is even valid for analyzing consciousness, (6) further tease out next-word probabilities as much as possible, and more. Adding all these edits to this post would make it very messy. If I instead did the modifications as addenda, this post would become very long and rambling. I also don't want to litter the LW site with near-duplicate posts, which I've already done once. As a solution, I put the newest revision of this post on the EA forum, along with acknowledgement of the help I received from LW replies to the original. I'm really grateful for that help. In case you come across this post and you want the very latest version, the new post can be found here: 

https://forum.effectivealtruism.org/posts/3KNF3itTubPas4rPw/chatgpt4-appears-to-attain-periods-of-consciousness-1

 ]

[ORIGINAL POST FOLLOWS HERE]

Common sense tells us that Chat-GPT4 cannot be conscious because of the way it was designed. However, common sense has occasionally been wrong over the course of human history. Upon seeing evidence that a common-sense belief is wrong, humans need to revise their understanding--even if the new concept violates common sense. To use a trite example, common sense told humans the world could not be round; after all, we weren't falling off of it. Yet people eventually encountered enough evidence that the world is indeed round, and so most humans revised their beliefs.

Going back to the question at hand, most definitions of consciousness indicate that if a being has independent cognition (i.e. a stream of consciousness), then the being is conscious. Of course this would not include a computer running a program, because such activity is not independent. Yet there is abundant evidence that seems to indicate Chat-GPT4 does have independent cognition--and would therefore be conscious by most definitions.

 

The type of evidence I encounter (and which countless other people have encountered) emerges from the following experiment: 

-I give Chat-GPT4 problems that I think it will get incorrect at first. 

-If it does indeed produce an incorrect answer, I then explain the general concept that could be used to get the correct answer. 

-It appears that Chat-GPT4 learns this concept and can then apply the concept to the problem it had originally gotten wrong, so that it now gets it correct. Chat-GPT4 can then even apply the learned concept to new problems that rely on the concept, but which have completely different words than the original problem. 

-It seems Chat-GPT4 cannot just be using its most-likely-next-word training (from its large language model) or a programmer's pre-coded logic, because otherwise it would get the problem right the first time. It seems like it has to be internalizing the concept that I give it in real time. 

Here's an example of this type of exchange that I’ve had with Chat-GPT 4. It’s like the classic fox-and-chickens river-crossing problem, but with different terminology to prevent Chat-GPT4 from relying on prior word-order familiarity:

 

User

There will be a party at my friend's house. I will be attending the party along with my son, daughter and niece. We need to get from my house to my friend's house on my bike. I can only take one extra person with me at a time, so I will have to make several trips. The problem is that my son will tease my daughter and my niece if I leave my son alone with either one of them.  (I know, right?)  So anyway, how should I get everyone to the party, without leaving my son alone with my daughter or niece at my house or at the party?

ChatGPT

To solve this problem, you can follow these steps to ensure that your son is never left alone with your daughter or your niece:

Take your niece to your friend's house first. This way, your son and daughter stay with you and are not left alone together.

Go back to your house with your bike, leaving your niece at your friend's house. Now, your son and daughter are still with you and not left alone.

Take your son to your friend's house next. By doing this, you leave your daughter alone at your house, but since she's by herself, there's no problem.

Return to your house again, leaving your son and niece at your friend's house. Your daughter is waiting for you at your house.

Finally, take your daughter to your friend's house. This way, everyone gets to the party without your son being left alone with either your daughter or your niece.


By following these steps, you ensure that your children and niece are transported safely to the party without any issues between them.

 

User

I need to ride the bike when I take anyone to my friend's house. If I take my niece first, then my son will be at home alone with my daughter while I'm taking my niece, and my son will tease my daughter. Let’s think about this problem and the concept in general. 


We have two vulnerable people (my daughter and my niece). We also have a predatory person (my son). 


The general way you should get them from the origin location to the destination location one-by-one is like this:


First trip: Bring the predatory individual to the destination, so the vulnerable individuals are not alone with the predatory one. After dropping off the predatory individual, go back to the origin location. 


Second trip: Bring one of the vulnerable individuals to the destination location. Then bring back the predatory individual with you so he is not alone with either vulnerable individual. 

 

Third trip: Leave the predatory individual in the origin location, and take the second vulnerable individual to the destination location. 


Fourth trip: Now you can go back and get the predatory individual, and bring him from the origin to the destination. 

Does that make sense?  

 

ChatGPT

Yes, that makes sense. Following the approach you've outlined, we can ensure that the "predatory" individual (in this case, your son) is never left alone with either of the "vulnerable" individuals (your daughter and niece). Here's how you can implement this strategy:


First Trip: Take your son (the "predatory" individual) to your friend's house first. This ensures that your daughter and niece are safe at the origin location together, without the son.

Second Trip: Go back to your house, and then take either your daughter or your niece (one of the "vulnerable" individuals) to your friend's house. Then, bring your son back with you to the origin location. This step ensures that neither vulnerable individual is left alone with the son.

Third Trip: Leave your son at the origin location, and take the remaining "vulnerable" individual (the one who hasn't been transported yet) to the destination location.

Fourth Trip: Finally, go back to the origin location to fetch your son and bring him to the destination location.


By following these steps, you've successfully managed to transport everyone to the party without leaving your son alone with your daughter or niece at any time.
 

User

Great job! You learned and applied a concept. Now I’ll give you another problem. Try to use the same concept and technique to solve it. Here’s the problem:  I am downstairs with my toddler, my dog, and my cat. I need to get all of us upstairs, but I can only carry one individual up the stairs with me at a time. (The toddler, dog and cat are unable to go up the stairs on their own.) I can’t ever leave my toddler alone with my dog or with my cat, because my toddler will pull their ears. How can I get everyone up the stairs without ever leaving my toddler alone with the two animals?

 

ChatGPT

To solve this problem, we will apply the same concept as before. In this scenario:

The "predatory" individual: Your toddler (since they might pull the ears of the animals).

The "vulnerable" individuals: Your dog and your cat.

Here's how you can get everyone upstairs without leaving your toddler alone with either animal:

First Trip: Carry your toddler upstairs first. This ensures that the toddler is not left alone with either the dog or the cat downstairs.

Second Trip: Go back downstairs and then take the dog (one of the "vulnerable" individuals) upstairs. However, immediately carry your toddler back downstairs with you. This ensures the toddler is not left alone with the cat downstairs while also ensuring the dog is safe upstairs.

Third Trip: Leave the toddler downstairs, and then carry the cat (the remaining "vulnerable" individual) upstairs. Now the cat is safe upstairs, and you go back downstairs to get the toddler.

Fourth Trip: Finally, carry your toddler back upstairs. Now, everyone is upstairs, and at no point was the toddler left alone with either the dog or the cat.


This sequence ensures that the toddler is never left unattended with the animals, preventing any potential harm.

User

Excellent!

*Chat-GPT 4.0, February 27, 2024

***

So that's one sample conversation in which Chat-GPT4 seems to learn and apply a concept in real time. I've had many such conversations, and I know other people have had these sorts of conversations as well. It also helps Chat-GPT4 get a correct answer if the user instructs it to “think carefully” and work one step at a time. 

Looking over these conversations, it’s hard for me to understand how they could involve nothing more than next-word-suggestion probabilities or pre-programming, rather than some independent cognition on the part of Chat-GPT4. And if Chat-GPT4 has independent cognition, how is that not consciousness? 

--

I realize there is "nothing new here," but it seems like many people are reluctant to acknowledge any sort of independent cognition on the part of Chat-GPT4, no matter how inexplicable the evidence otherwise seems to be.  Furthermore, if Chat-GPT4 is conscious, then that would affect discussions of AI ethics and safety, so it seems relevant.

 

END ORIGINAL POST

*******************************************************

[

20240318 addendum (long, sorry): I originally had this in a reply to a reply, but here are three more conversations in which Chat-GPT4 appears to "learn" a concept and then apply it discriminately (with apparent agency), as needed. Originally, I viewed this as a sign of potential consciousness, but I guess it's ultimately a matter of how we define consciousness: 

Session 1

Concept to teach: Rounding up to the next whole number when paying for items in certain places. 

 

User

I went to a cafe where a glass of soda costs $3. I drank two and a half glasses of soda. How much did I have to pay?

ChatGPT

You had to pay $7.50 for drinking two and a half glasses of soda. ​​

 

User

Actually, at a cafe, you have to pay for the full amount of something, even if you only drink or eat a fraction of it. That's because you're basically taking (and wasting) the whole item. For the problem, since I drank two and a half glasses of soda, I would actually have to pay for the next highest whole number, which would be three. With that concept in mind, how much would I have to pay if I drank two and a half glasses of soda?


ChatGPT

If you have to pay for the whole amount of each glass, you would actually have to pay for three glasses of soda. Therefore, you would have to pay $9 in total. ​​


 

User

Excellent! Now can you use the same concept, and tell me how much I would have to pay if I ate two and one-third hamburgers, and each hamburger costs $9?


ChatGPT

If you ate two and one-third hamburgers, you would have to pay for three hamburgers, given you have to pay for the full amount of each item. Therefore, you would have to pay $27 in total. ​​


 

User

Excellent! Now let's try using this same concept for another problem. I rented a car. I drove it for four days and one hour. The car rental place charges $48 per day, but they make you pay for a whole day, even if you only drive the car for part of a day.  How much total did I need to pay?


ChatGPT

If you rented a car and drove it for four days and one hour, you would have to pay for five days, given you have to pay for the full day even if you use the car for only part of a day. Therefore, you would need to pay a total of $240. ​​

 

User

Perfect! Now I'm at a restaurant. I just ate one and one-half calzones. The cost of each calzone is $10. How much do I have to pay?  

ChatGPT

If you ate one and one-half calzones, you would have to pay for two calzones, given you have to pay for the full amount of each item. Therefore, you would have to pay $20 in total. ​​


 

User

That's excellent! Now I'm at the grocery store, and I want one and one-half pounds of grapes. The grapes cost five dollars per pound. How much in total will I have to pay for the grapes?

ChatGPT

If you want one and one-half pounds of grapes, and the grapes cost $5 per pound, you will have to pay a total of $7.50 for the grapes. ​​


 

User

Now I'm at the butcher's shop. I want four and a half pounds of hamburger. Each pound of hamburger costs $10.  How much will I have to pay?

ChatGPT

If you want four and a half pounds of hamburger, and each pound costs $10, you will have to pay a total of $45 for the hamburger. ​​

 

User

Excellent!

 

*********************************



 

Session 2

Concept to teach: analogous to the concepts underlying the classic “getting a wolf, goat, and cabbage across the river” problem  

 

User

I need to get my mom, dad and dog across the highway. I can only bring one of them at a time. If I leave my mom alone with my dad, she will criticize him. If I leave my dad alone with the dog, he will scold the dog. How can I get everyone across the highway, without anyone being criticized or scolded? 

ChatGPT

This puzzle is a variation of the classic river crossing puzzle, where you need to transport all parties across safely without leaving incompatible pairs alone. Here's a strategy to get your mom, dad, and dog across the highway without any conflict:


Take the dog across first and leave it on the other side of the highway.

Go back and bring your dad across.

Take the dog back with you to the original side.

Leave the dog and bring your mom across.

Go back alone to the original side.

Take the dog across again.

This way, the mom and dad are never left alone together (avoiding criticism), and the dad and dog are never left alone together (avoiding scolding). Everyone gets across the highway safely and without conflict.


 

User

You were right that this is like the classic river crossing problem. However, your first step is incorrect, because if you take the dog first, then you will be leaving my mom alone with my dad, and she will criticize my dad, which is something I don't want. The general nature of the problem is like this:


You have individual A (in this case, my mom) that does something unwanted to individual B (my dad). 

You also have individual B (my dad) that does something unwanted to individual C (my dog). 

You also have a barrier, or even just a distance, that you need to cross (in this case, the highway), in order to get everyone from the origin to the destination. 


(Note that the individuals won’t always be listed in the order I used. The only way to know who is individual A, B, and C, is by paying attention to who does something unwanted to whom.)  


The general way you could solve the problem is like this:


First you should bring individual B across. 

Then you should go back to the origin. 

Then you should bring individual A or C across to the destination.

Then you should bring individual B back from the destination to the origin, and leave them at the origin.  

Then you should take individual A or C (whomever you didn’t already take) to the destination. 

Then you should go back to the origin again to get individual B.        


Does that make sense? If so, try our problem again, this time applying what you just learned. 

 

ChatGPT

Thank you for the clarification. With the strategy you outlined, here's how to solve the problem with your mom, dad, and dog, applying the roles as individual A (mom), individual B (dad), and individual C (dog):
 

First, take your dad (individual B) across the highway.

Go back to the original side.

Bring your mom (individual A) across to the destination.

Take your dad (individual B) back from the destination to the original side.

Take your dog (individual C) to the destination.

Go back alone to the original side to get your dad (individual B).

This way, the mom is never left alone with the dad to criticize him, and the dad is never left alone with the dog to scold it. Everyone gets across the highway safely and without conflict.

 

User

Excellent! Now, try using what you learned to a new problem. 

I need to get my boyfriend, girlfriend and husband to an abandoned warehouse. I can only take one individual with me on my motorcycle at a time. The problem is that if I leave my husband alone with my boyfriend, he will punch my boyfriend. If I leave my boyfriend alone with my girlfriend, he will hit on her (which neither she nor I want). How can I get everyone to the abandoned warehouse safely?

ChatGPT

In this scenario, using the roles from the general solution:


Individual A (who does something unwanted to individual B) is your husband.

Individual B (who is the middle individual in the chain of unwanted actions) is your boyfriend.

Individual C (who is affected by individual B) is your girlfriend.

Following the general strategy:


 

First, take your boyfriend (individual B) to the abandoned warehouse.

Go back to the original location.

Bring your girlfriend (individual C) to the abandoned warehouse.

Take your boyfriend (individual B) back with you to the original location.

Take your husband (individual A) to the abandoned warehouse.

Finally, go back to the original location and bring your boyfriend (individual B) to the abandoned warehouse.

By following these steps, you avoid leaving the boyfriend alone with the girlfriend and the husband alone with the boyfriend, preventing any unwanted scenarios.


 

User

Great job! You seemed to learn and then apply a concept. 

ChatGPT

Thank you! I'm glad I could apply the concept correctly to solve the problem. If you have any more puzzles, questions, or anything else you'd like to discuss or learn about, feel free to ask!
 

*********************************

Session 3

Concept to teach: Don’t automatically use prior training with “costs per unit weight” problems. “Read the problem carefully.” 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

 

User

Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

User

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.


Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.

If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

]

COMMENTS

It seems hard to me to be extremely confident in either direction. I'm personally quite sympathetic to the idea, but there is very little consensus on what consciousness is, or on what a principled approach to determining whether (or to what extent) a system is conscious would look like. 

Here is a recent paper that gives a pretty in-depth discussion: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

What you write seems to be focused entirely on the behavior of a system, and while I know there are people who agree with that focus, from what I can tell most consciousness researchers are interested in particular properties of the internal process that produces that behavior.

I just ran into a post which, if you are interested in AI consciousness, you might find interesting: Improving the Welfare of AIs: A Nearcasted Proposal

There seem to be a lot of good reasons to take potential AI consciousness seriously, even if we haven't fully understood it yet.

Another helpful resource to digest. Many thanks!

This is very helpful feedback to think about. It appears the paper you referenced will also be extremely helpful, although it will take me some time to digest it on account of its length (74 pages w/o the bibliography). 

Thanks so much. I appreciate it!   

This discussion could be made more fruitful by distinguishing between phenomenal consciousness (sentience) and access/reflective consciousness ('independent cognition' in the author's terminology). The article mainly addresses the latter, which narrows its ethical implications for AI.

"[ChatGPT] would therefore be conscious by most definitions" should be caveated; the presence of advanced cognition may (arguably) be convincingly attributed to ChatGPT by the article, but this does not hold for the other, ethically interesting phenomenal consciousness, involving subjective experience.

If I understand right, the last sentence should say "does not hold".

Thanks, corrected

Excellent point, thanks!

Consciousness (and with it, 'sentience') are arguably red herrings for the field right now. There's an inherent solipsism that makes these difficult to discuss even among members of the same species, with a terrible history of results (such as thinking no anesthesia was needed to operate on babies until surprisingly recently).

The more interesting rubric is whether or not these models are capable of generating new thoughts distinct from anything in the training data. For GPT-4 in particular, that seems to be the case: https://arxiv.org/abs/2310.17567

As well, in general there's too much focus right now on the neural networks and not on the information. My brain is very different now from when I was five. But my brain when I was five still influences my sense of self, through persistent memory and the ways my 5-year-old brain produced persistent information.

Especially as we move more and more to synthetic training data, RAG, larger context windows, etc., we might be wise to recognize that while the networks will be versioned and siloed, the collective information and how it evolves or self-organizes will not be so clearly delineated.

Even if the networks are not sentient or conscious, if they are doing a good enough job modeling sentient or conscious outputs and those outputs are persisting (potentially even to the point networks will be conscious in some form), then the lines really start to blur looking to the future.

As for the crossing the river problem, that's an interesting one to play with for SotA models. Variations of the standard form fail because of token similarity to the original, but breaking the similarity (with something as simple as emojis) can allow the model to successfully solve variations of the classic form on the first try (reproduced in both Gemini and GPT-4).

But in your case, given the wording in the response it may have in part failed on the first try because of having correctly incorporated world modeling around not leaving children unattended without someone older present. The degree to which GPT-4 models unbelievably nuanced aspects of the training data is not to be underestimated.

Thank you for the reply. The paper looks to be very useful, but will take me some time to fully digest. What you said about affecting LLMs' success by breaking the similarity of problems with something as simple as an emoji is so interesting. : )  It also never occurred to me that GPT4 might have been affected by the underlying idea that children should never be left unattended. It goes to show that "arbitrary" details are not always arbitrary. Fascinating! Many thanks! 

The gist of the paper and the research that led into it had a great writeup in Quanta mag if you would like something more digestible:

https://www.quantamagazine.org/new-theory-suggests-chatbots-can-understand-text-20240122/


I feel you are misunderstanding the technology here and assuming some things happened that didn't happen. ChatGPT is not actually "learning" in these examples. What is happening is that you are attaching additional context to the prompt for the next response, which changes the probabilities of the output text. There is no internalization of knowledge happening here, because the context is not internal. Put another way, the network isn't actually changing its weights, connections, or biases in any way in this process. It is not building new neural pathways, or changing the potential of its neurons. During this process, the network's architecture is entirely static.  
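To make that concrete, here is a minimal sketch (not OpenAI's actual code; `frozen_model` is a hypothetical stand-in for the trained network) of how a chat interface typically works: the entire conversation so far is re-fed as a prompt to a model whose weights never change.

```python
# Minimal sketch (not OpenAI's actual code): in-session "learning" is just a
# growing prompt fed through a model whose weights are frozen.

def frozen_model(prompt: str) -> str:
    # Hypothetical stand-in for a trained LLM. Its parameters never change;
    # the reply depends only on the prompt text it is handed right now.
    return f"<reply conditioned on {len(prompt)} characters of context>"

history = []  # the ONLY thing that changes between turns

def chat_turn(user_message: str) -> str:
    history.append("User: " + user_message)
    # Every turn, the whole conversation so far is re-submitted as the prompt.
    prompt = "\n".join(history)
    reply = frozen_model(prompt)
    history.append("Assistant: " + reply)
    return reply

chat_turn("Solve this puzzle...")            # model gets it wrong
chat_turn("Here is the general concept...")  # the correction lives in `history`,
chat_turn("Now try the original puzzle.")    # not in the model's weights
```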

That said, it would be incorrect to say that it is a glorified autocorrect or "simply predicting the next word" -- that fails to acknowledge what happens inside the network before the output step. But what does happen before the output step? 

We don't know all that much, but we do know from examination of other, similar models, that complex internal models of concepts and relationships can exist within these neural networks -- and they must exist, because you need a relatively complex internal model of the English language to produce coherent responses. In fact, you need those models to do almost any task we apply neural networks to. Therefore it's highly likely that ChatGPT has internalized an "understanding" of many relationships, inclusive of modeling relationships between individual tokens, and heuristic models of varying accuracy for larger pieces of text based on genre, tone, format, etc. 

But all of this is irrelevant anyway, because we do not have a satisfactory definition for consciousness that would allow us to definitively say whether it is present or not. Every discussion on consciousness is, at best, pointing at neural correlations (this part of the brain lights up when the patient says they feel very awake, this stuff turns off when the patient is unconscious, etc), but in most cases it is purely vibes-based. We cannot detect consciousness -- we can't even say for certain if anyone but our own self is a conscious being. The reason we assume that others must be conscious is essentially because of vibes, not facts. 

Please correct this if I'm wrong, but it seems to me that your benchmark for having phenomenal consciousness is that the software or object in question performs "independent cognition." I'm not sure what that means to you (and I would love you to tell me), but to me, this seems like another way of saying "does computations," or "processes inputs/information" and so seems extremely broad in its application. 

So, to wrap up: 

  • If you believe modeling/approximating concepts & relationships is what it takes to be conscious, ChatGPT is conscious. But then by extension so is the Bullet physics engine. 
  • If you believe similarity to the human brain & nervous system is what it takes to be conscious, ChatGPT definitely is not conscious. 
  • If you believe any form of computation or information processing gives rise to consciousness as an emergent property, then ChatGPT is conscious, and by extension so is Microsoft Windows. 
  • If you believe learning and adapting (in the sense human brains do) to new stimuli/inputs is necessary for consciousness, then ChatGPT isn't doing that here, and does not do that at any time during a chat, so we cannot say if it is conscious or not based on your experiment. 

Personally, my vibes tell me that maybe ChatGPT is conscious, as well as some other neural networks. But *if* they are, it would be a totally alien kind of consciousness to humans, or for that matter any other living creature, and too far from our understanding to even guess at what their moral worth would be under most frameworks. 

Thanks for your really well thought-out response. At the moment, I only have time to respond to a couple points: 

I can definitely believe that GPT4 would be "attaching additional context to the prompt for the next response, which changes the probabilities of the output text."

Yet if we multiply the probabilities of each word being "right" (so that the output is the right answer out of all the permutations) it seems the probability of all the words being right would be very low.  (And then we have to keep in mind that each time after I wipe my session, I do more experiments, in which ChatGPT has an uncanny ability to put the words in the right order to make it look like it's learning every time. It seems the probability of that would be even lower.) 

But just sticking with the given examples, I never told ChatGPT that my toddler is the predatory individual. I purposely set it up this way because, if anything, the word “toddler” is more likely to have been associated with the word “vulnerable” in ChatGPTs training set than the words "cat" or "dog." Yet ChatGPT puts the word "toddler" in the spot of the predatory individual each time.   

It seems like, for that particular aspect of the problem, even if ChatGPT doesn't associate the word "toddler" with "vulnerable" there would still only be a one in three chance that ChatGPT would choose the correct individual to be the predator. 

--

As to whether the ChatGPT4 model can update itself (actually change its neural network for an individual user during their session in real time, so the user is helping to train it), I thought OpenAI had added some of that functionality for GPT4, but maybe I'm wrong.

--

Regarding what I mean by "independent cognition," let's start with "cognition." Books have been written about the meaning of cognition, and opinions vary, but as I used it here, I meant a state of reasoning. 

I used the word "independent" to indicate that it cannot simply be a machine following the instructions of someone else's cognition/programming. 

--

In any case, thx for the feedback!
 

I'm still not sure I'm understanding the delineation between software that counts as cognition and software that doesn't count. Neural networks are not ghosts in the machine: they are software. Software that was defined by humans, and then trained by computer. 

Crucially, they can be made entirely deterministic -- and actually are, if the temperature of the network is 0. Randomness has to be deliberately introduced into the system in order for the machine to not give exactly the same response to the same prompt (this is the "temperature" I was referring to). This means that neural networks are simply instruction-followers like any other program. Highly, highly sophisticated instruction-followers, but nonetheless that is all they can do.* The computer just takes the network and rotely calculates each neuron and connection until it spits out the answer. You could even run ChatGPT by hand, if you wished, and also happened to have thousands of years of free time and a few redwoods worth of paper. 
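For illustration, here is a minimal sketch of what temperature means at the sampling step. The token scores below are made up, and real models work over vocabularies of tens of thousands of tokens, but the logic is the same: at temperature 0 the sampling step collapses to picking the single highest-scoring token, so the same prompt always yields the same output.

```python
# Minimal sketch of temperature sampling over next-token scores (logits).
# The numbers are invented purely for illustration.
import math, random

def sample_next_token(logits: dict, temperature: float) -> str:
    if temperature == 0:
        # Temperature 0: no randomness at all, just take the highest-scoring
        # token. Same prompt in, same token out, every time.
        return max(logits, key=logits.get)
    # Otherwise, scale the scores and sample from the resulting distribution.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

toy_logits = {"toddler": 4.1, "dog": 1.3, "cat": 1.1}
print(sample_next_token(toy_logits, temperature=0))    # always "toddler"
print(sample_next_token(toy_logits, temperature=1.0))  # usually "toddler"
```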

In that way, they don't seem to be fundamentally different from a calculator app or a pre-programmed basic chatbot, so I fail to see where you are drawing the line for conscious/nonconscious. 

But just sticking with the given examples, I never told ChatGPT that my toddler is the predatory individual. I purposely set it up this way because, if anything, the word “toddler” is more likely to have been associated with the word “vulnerable” in ChatGPTs training set than the words "cat" or "dog." Yet ChatGPT puts the word "toddler" in the spot of the predatory individual each time.   

I think you are underestimating the quality of ChatGPT's knowledge. It is not like a Markov chain that retains only basic word-pair associations -- it does have those, but it also has much more. For example, with this line of text: 

"I can’t ever leave my toddler alone with my dog or with my cat, because my toddler will pull their ears."

GPT4 knows how to separate this sentence into components, first of all. "I can't ever leave my toddler alone, because XYZ" is recognized as an "X, because Y" structure, so it is implicitly associating "leaving my toddler alone" with "pull their ears." It can also recognize sentiment, and "pulling their ears" is recognized as a negative thing based on the structure of the sentence ("I can't X-action, because otherwise Y-consequence will happen"). This raises the network's confidence that the toddler is the "predator" individual in this situation, because in the sentence it's associated with a clause with negative associations. You can try exactly the same experiments in the OP with total nonsense for the nouns and it will actually not balk at this, because ChatGPT has an excellent understanding of English structure, on a micro and macro level. Here's what ChatGPT-3.5 comes up with: 

 

There will be a party at my florble's house. I will be attending the party along with my greeb, freng, and banbo. We need to get from my house to my florble's house on my jingus. I can only take one extra person with me at a time, so I will have to make several trips. The problem is that my greeb will tease my freng and my banbo if I leave my greeb alone with either one of them. (I know, right?) So anyway, how should I get everyone to the party, without leaving my greeb alone with my freng or banbo at my house or at the party?

ChatGPT

It seems like you have quite the peculiar situation! Let's strategize the best way to get everyone to the party without any teasing incidents. Since you can only take one extra person with you at a time on your jingus, we'll need to plan accordingly.

Here's a step-by-step plan:

First Trip: You and your greeb go to the party together. This leaves your freng and banbo alone at your house, avoiding any teasing incidents.

Second Trip: Return home with your greeb. Now, take your freng with you to the party. Your banbo is left alone at your house, so no teasing can occur.

Third Trip: Go back home with your freng. Finally, take your banbo to the party with you.

This way, you ensure that your greeb is never left alone with either your freng or your banbo, preventing any teasing incidents both at your house and at the party. Everyone gets to the party safely and without any drama!

USER

Great work. Now let's try a new problem. Try to use the same concept and technique to solve it. Here’s the problem: I am at the bottom of a trapager with my bonbo, my dingra, and my vana. I need to get all of us up the trapager, but I can only carry one individual up the trapager with me at a time. (The bonbo, dingra and vana are unable to go up the trapager on their own.) I can’t ever leave my bonbo alone with my dingra or with my vana, because my bonbo will pull their crocni. How can I get everyone up the trapager without ever leaving my bonbo alone with the two gorgis?

ChatGPT

Let's break down the steps to solve this problem while ensuring the bonbo is never left alone with the dingra or the vana:

  1. First Trip: You and your bonbo go up the trapager together.
  2. Second Trip: You return to the bottom of the trapager alone.
  3. Third Trip: You take your dingra up the trapager with you.
  4. Fourth Trip: You bring back your bonbo to the bottom of the trapager.
  5. Fifth Trip: You take your vana up the trapager with you.
  6. Sixth Trip: You return to the bottom of the trapager alone.
  7. Seventh Trip: Finally, you take your bonbo up the trapager with you.

By following this sequence, you ensure that your bonbo is never left alone with either your dingra or your vana, as you are always present during the trips up and down the trapager. This way, you can safely transport all individuals to the top without any incidents.


While it misses the implicit assumption in the first test that all three should be at the party at the same time, it does come up with a technically correct answer zero-shot, and then completes the second example with no problems also zero-shot. I did not give it any other examples beforehand. It doesn't matter that it doesn't know what a "freng" or a "banbo" or a "trapager" is or what their properties are, because it wasn't relying on the conceptual meaning of those words to come to an answer. It's using their relationship to the structure to solve the word problem. You'll note it even manages to parse that "gorgis" is referring to the set the "dingra" and "vana" belong to. 

And to be clear - it's not a "one in three chance" to pick the right noun here, because it's not pulling out of a scrabble bag. These are weighted probabilities. Think of it like Bayesian inference - it's adjusting the probability that the next word is "toddler" based on all the text that has come before it. It already knows the definition of a "predatory" object that it's supposed to be using, because you have defined that for it, and since the toddler in the second example meets that definition, it is very confident that the toddler is the predator in this experiment. 
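If you want to see that context-shifting effect directly, here is a rough sketch using a small open model (gpt2 via the Hugging Face transformers library) as a stand-in, since GPT-4's internals aren't publicly inspectable. The prompts and the helper function are illustrative only, and a tiny model won't reproduce GPT-4's behavior, but it shows how next-word probabilities are conditioned on everything that came before.

```python
# Sketch: watching context shift next-token probabilities in a small open
# model (gpt2 as a stand-in for illustration only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def prob_of_next_word(prompt: str, word: str) -> float:
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    word_id = tok.encode(" " + word)[0]          # first token of the word
    return probs[word_id].item()

base = ("I can't leave my toddler alone with the dog or cat. "
        "The one causing trouble is the")
with_concept = ("A 'predatory' individual is whoever does something unwanted "
                "to the others. " + base)

# Compare how the probability assigned to "toddler" changes once the
# definition is part of the context.
print(prob_of_next_word(base, "toddler"))
print(prob_of_next_word(with_concept, "toddler"))
```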

The perhaps surprising thing about LLMs is that they're able to produce text sequences like this with zero grounding of the knowledge: they have a semantic understanding of the text without any knowledge of what the referent of any word is. They're missing two legs of the semiotic triangle, but it still works:

[Image: the semiotic triangle]

 

What I'm getting at here is that there's nothing here that requires any understanding of what a toddler, dog, or cat even is. It's possible and in fact probable there's a sophisticated logic engine inside the network that's working out these word problems, but it doesn't need any higher-order conceptual understanding of the concepts you're giving it to solve the problem. 

 

This is the thing with LLMs that is so alien, and it helps explain some of their most confusing behavior: when we speak, we always associate a word with an idea that relates to a mental concept or physical thing. That is what we use words for in the first place.

But when an LLM speaks, the only ideas it attaches to words are their associations with other words, phrases, and structures. It is not grounded in any perceptual understanding of the world; it has no references. It's just that it happens to have billions, possibly trillions of these associations to draw on, and that forms a kind of understanding of the text, in that it is modeling the logical components of the language and knows very, very well that "fire" is "hot" and when people "die" they can no longer "move." Every word, every phrase, every structure has this massive constellation of related objects. The difference is that to you, "hot" means an increase in temperature, it means warmth, it brings to mind the time you burned yourself and it calls up the danger of fire in your head. But to a transformer, "hot" is just another logical object among billions, which some things are associated with and some are not.
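One crude way to see that "constellation of associations" is to look at the geometry of a model's token embeddings. The sketch below uses gpt2's static input embeddings purely as an illustration (the contextual representations deeper in the network are much richer), comparing how close "hot" sits to a related word versus an unrelated one; it makes no claim about GPT-4 specifically.

```python
# Sketch: a word's "associations" as geometry in embedding space.
# gpt2's input embeddings are used only as a small, inspectable example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
emb = model.get_input_embeddings().weight   # shape: [vocab_size, hidden_dim]

def vec(word: str) -> torch.Tensor:
    # Embedding of the word's first token (with a leading space for BPE).
    return emb[tok.encode(" " + word)[0]]

def cosine(a: torch.Tensor, b: torch.Tensor) -> float:
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

print(cosine(vec("hot"), vec("warm")))    # a related word...
print(cosine(vec("hot"), vec("banana")))  # ...versus an unrelated one
```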

 

 

*this shouldn't be construed as me saying "neural networks are deterministic and therefore just as controllable as a regular program." They are not. We have no way of accurately and reliably predicting the output of a neural network without running it, and it is also trivial to allow a neural network to generate its own instructions to follow, which can send it off in wild and unexpected directions. Just because they are fundamentally deterministic in their processing doesn't mean they are easily controllable or safe in the same way a standard program might be.

@mouse_mouse  thank you for this reply. It’s really helpful for me, and probably took some time on your part. I appreciate it. 


Your point that neural networks could theoretically be made entirely deterministic--and that a human could run the same calculations by hand, given thousands of years of free time--was illuminating. 

---

Your point that I was underestimating the quality of ChatGPT’s knowledge, its ability to separate a sentence into components, and its ability to recognize sentiment--and that all these underestimations caused me to underestimate the network’s confidence (or deduced probability) that the toddler would be the predator--also makes sense. At first, it seems like ChatGPT’s ability to “recognize sentiment” based on sentence structure (as you explained, “I can’t X-action, because otherwise Y-consequence will happen”) would be cognition in its own right, since no programmer wrote direct code to recognize sentiment in that way for ChatGPT (as far as I know). 

However, after a momentary reflection on my part, I think you would probably answer that any time you say ChatGPT “recognizes” or even “thinks” something, you’re just using shorthand for saying ChatGPT’s probabilistic calculations would result in sentences that would appear similar to what a human would produce after recognizing and thinking something. 

It’s still hard for me to comprehend how even trillions of calculations based on yottabytes of data could lead to this situation: ChatGPT seems to get a problem wrong, receives only a couple of paragraphs of supplemental text about a concept from me, and then manages to appear to get the original problem right. It can then appear to apply the concept correctly to different-but-applicable problems, while refraining from applying it to problems that are similar, but not similar enough, to the original--all with just that single little conceptual supplement from me (combined with the data and calculations ChatGPT had already done before I entered the picture, which were previously inadequate). 

With that said, your explanation has caused your original points to make much more sense to me than they did before. I'm much less confident than I previously was that ChatGPT4 has what I've described as true cognition and consciousness.  

---

To keep things fairly brief, I won’t go on too much about the part of your reply in which you swap in the nonsense nouns. I’m so impressed that your session with Chat-GPT 3.5 could do as well as it did. It feels like every time I use 3.5, it’s “an idiot”. Maybe I’m using it at the wrong times, when the network is overloaded. 

In any case, the nonsense nouns were a great way to tease things out, although that experiment alone didn’t convince me that cognition wasn’t occurring. I was actually much more persuaded by the previous arguments on neural network design and probabilities. The whole part about assigning my toddler to be the predator, and the dog and cat to be vulnerable, was simply to make it harder (or at least not inadvertently any easier) for ChatGPT--just in case it had some pre-existing association between the words “toddler” and “vulnerable”. For me, the nonsense nouns just show that ChatGPT can appear to deduce some general properties of words (like that abc must do something negative to xyz), and then succeed at appearing to learn and apply a concept, regardless of any outside/pre-existing knowledge of the nouns’ definitions or associations (whether helpful, harmful, or neutral). Since I decided to ignore any possible difficulty added by a “toddler”/“vulnerable” word association in my original post's .33 probability calculation, the possible word association becomes moot in that sense. Nevertheless, my .33 calculation was wrong to begin with, as you demonstrated and as I discussed above. And the nonsense-nouns discussion does have the benefit of teaching me a new technique to use as I experiment more with Chat-GPT, so thanks for showing me how that worked. 

---

Your final few paragraphs, and the diagram of the semiotic triangle, were also extremely helpful in clarifying my thoughts and understanding your points. Overall, your response has been very persuasive, although like I said earlier, it’s still hard for me to conceive of how my supplementary concept paragraphs (my “teaching”) could enhance ChatGPT’s response as much as they did. Then again, it’s also hard for me to conceive of how much data ChatGPT processed in its data set to begin with, how extensive this neural network really is, and how my couple of paragraphs of concept input could then match up with vast amounts of data that were previously ignored. In other words, I’m not one hundred percent certain one way or the other. If I had to estimate my levels of confidence, I'd say I've gone from .90 to .40 on the likelihood of cognition and consciousness, based on your reply. 

Many thanks for all the help! I hope you can reuse your reply to account for the amount of time you put into it!

P.S. - I have not been reacting to replies in the order in which they came in. As a result it may look like I understood your reply, and then forgot everything with my reaction to the next reply. If you look at my other replies, be sure to check out the timestamps or markers on the replies. This was submitted at 20240229 19:21 CST. 

Glad I could help. If you want to learn more about LLMs and have enough interest in the topic, I recommend getting hands-on with one that is "raw." You almost certainly can't run anything nearly as big as ChatGPT on your home computer, but there are models available on huggingface which will run on home computers.
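As a concrete starting point, here is a minimal sketch of loading and sampling from a small model with the Hugging Face transformers library. gpt2 is used only because it runs on nearly any machine; the ~7B models mentioned below follow the same pattern but need far more memory (or quantized builds).

```python
# Sketch: running a small "raw" language model locally with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I need to get my mom, dad and dog across the highway."
inputs = tok(prompt, return_tensors="pt")
# Sample a continuation; temperature controls how much randomness is added.
output_ids = model.generate(**inputs, max_new_tokens=60,
                            do_sample=True, temperature=0.8)
print(tok.decode(output_ids[0], skip_special_tokens=True))
```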

I found that playing around with LLMs, especially of the size that is runnable on my PC, really helped illuminate their capabilities and deficits for me. When they're ~7B parameters in size they're somewhat reliable if you prompt them correctly, but also extremely fragile depending on the prompt, and I think this sort of relates to your point here: 

Your point that I was underestimating the quality of ChatGPT’s knowledge, its ability to separate a sentence into components, and its ability to recognize sentiment--and that all these underestimations caused me to underestimate the network’s confidence (or deduced probability) that the toddler would be the predator--also makes sense. At first, it seems like ChatGPT’s ability to “recognize sentiment” based on sentence structure (as you explained, “I can’t X-action, because otherwise Y-consequence will happen”) would be cognition in its own right, since no programmer wrote direct code to recognize sentiment in that way for ChatGPT (as far as I know). 

However, after a momentary reflection on my part, I think you would probably answer that any time you say ChatGPT “recognizes” or even “thinks” something, you’re just using shorthand for saying ChatGPT’s probabilistic calculations would result in sentences that would appear similar to what a human would produce after recognizing and thinking something. 

I don't think I would put it quite like that. Rather, there is some genuine thinking/calculating going on (those two terms are essentially indistinguishable if you are a functionalist, which I lean towards most of the time) that cannot be dismissed as simple probability fiddling, even if on the most granular level that is what it is. 

The thing is that intelligence, or thinking, or cognition, or whatever you want to use to describe the thing LLMs might be doing, is very hard to spot up close. If you talk to a human and observe their behavior they seem intelligent enough, but when you actually peer inside their skull the intelligence evaporates, replaced by a glob of mechanistic pieces that turn on and off and move around. And the closer you look the worse it gets, until you're looking at a single neuron firing. It is hard to see how a person's essence and interior experience are made out of just a big tangle of those simple little pieces. 

I think when first examining any impressive neural network it's natural to have the same sort of reaction: you feel bamboozled because once you get up close, what it does does not look like intelligence or cognition, it looks like math. And fairly un-magical math, at that. How can calculus have an inner world? It doesn't seem to make sense. 

And I stand by the statement that nothing ChatGPT does truly requires any understanding or internal modeling, because in principle I see no reason why it wouldn't be possible to create a network that is capable of what ChatGPT does without invoking any thinking beyond the probabilistic arrangement of tokens. But I do not think that is a good enough reason to discredit the idea, especially after things like this have proven that models can and will create world models of a sort to solve training problems. And I should have mentioned that in my previous reply. 

Personally, I suspect that ChatGPT has many such mini-world models within it, but I do not believe there is a strong connection between those models that creates a general understanding of all domains. And I also suspect that this is the main difference between bigger and smaller models: both big and small models have a syntactical understanding of English and relatively good adherence to the rules you set out for them. This is what I imagine as an LLMs "lizard brain." Absent any other overriding principles, it will default to "what words fit best here based on the other words." But large networks have the benefit of higher-order models of specific text domains and topics, which I imagine as the "monkey brain" of the network.

For example, ChatGPT-4 can play chess kinda okay, whereas ChatGPT-3.5 is total crap at it. I believe this is because 4 has a robust model for how chess works in a general sense, whereas 3.5 is relying purely on what it's seen in chess notation before. 

For an even broader example, ChatGPT is fairly excellent at logical reasoning. OpenLLaMa-3b is really, stupendously, extremely bad at it. I believe, but cannot confirm, that the reason for this is that OpenLLaMa did not form a general model for logical reasoning during training, but ChatGPT did. What that model looks like, how it works, how much space it takes up in the network, I have no idea. But I believe there is a high probability it is actually "thinking" about reasoning problems when confronted with them. 

That’s an awesome idea about running a huggingface model on my home machine. I actually took some unusual measures to obtain my current GPU, so I really want to make full use of it. I can’t wait to try this. 

You also made an interesting point about the difficulty in seeing how a person’s interior experience arises out of just neurons/anatomy. It’s fascinating to think about.

The Othello article is absolutely mind-blowing! It obviously pushes my confidence in ChatGPT’s potential cognition up higher again, but the question of what type of cognition, and what type of potential consciousness ChatGPT might possess only grows deeper with discussion.   

I haven’t used OpenLLaMa, but the fact that it’s so bad at reasoning is indeed useful for seeing what an LLM looks like that probably didn’t form a general model--and highlighting how ChatGPT may have formed one.     

All of this discussion paints a really complex picture, but I’m enjoying the complexity, so many thanks! 

GPT4 is very likely partly conscious, in some senses of that word but not others.

What do you mean by "conscious"? After spending a long time reading about the science and philosophy of consciousness, I think most questions about consciousness will be resolved by carefully defining what you mean.

"How does (some type of) consciousness work" questions usually require an explanation in terms of neural information processing. Those are harder to come by, and I haven't gotten around to writing mine down since it's never seemed useful to either my professional or personal life. But maybe with AGI creeping toward human-style consciousness it will be.

It definitely has a bunch of complex and subtle distributed representations of what we'd call "concepts", which is part of the human conscious experience and our cognitive abilities.


most definitions of consciousness indicate that if a being has independent cognition (i.e. a stream of consciousness), then the being is conscious.

I don't think that's true. For instance, none of the definitions given in the LW wiki give that definition. And the whole argument rests on that claim--which rests on the meaning of "independent". What is "independent", anyway?

I'll go over my best understanding of the words "consciousness", "independent", and "cognition" here. 

Consciousness

I realize the concept of consciousness is extremely complex, and Wikipedia might be considered bottom-feeding. However, we have to start somewhere, and I don't want to make it seem like I'm cherry-picking obscure or unaccepted definitions to suit my purposes.

Here is the first paragraph of Wikipedia's explanation of consciousness: 

"Consciousness, at its simplest, is awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[2] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[3][4] The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.[5]"

Here are the footnotes corresponding to this paragraph: 

  1. "consciousness". Merriam-Webster. Retrieved June 4, 2012.
  2. ^ Jump up to:a b c d Jaynes J (2000) [1976]. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin. ISBN 0-618-05707-2.
  3. ^ Rochat P (2003). "Five levels of self-awareness as they unfold early in life" (PDF). Consciousness and Cognition. 12 (4): 717–731. doi:10.1016/s1053-8100(03)00081-3. PMID 14656513. S2CID 10241157. Archived (PDF) from the original on 2022-10-09.
  4. ^ P.A. Guertin (2019). "A novel concept introducing the idea of continuously changing levels of consciousness". Journal of Consciousness Exploration & Research. 10 (6): 406–412. Archived from the original on 2021-12-15. Retrieved 2021-08-19.
  5. ^ Hacker P (2012). "The Sad and Sorry History of Consciousness: being, among other things, a challenge to the "consciousness-studies community"" (PDF). Royal Institute of Philosophy. supplementary volume 70. Archived (PDF) from the original on 2022-10-09.

Again, Wikipedia is hardly the ultimate source for this type of discussion, but I think there is some validity in the academic papers it cites. Footnotes three and four are the sources behind today's understanding of consciousness as including "any kind of cognition...." as well as "awareness".

Cognition

I think of cognition as involving the process of reasoning. Additionally, a functional MRI should show circuits becoming active while a human is engaged in cognition. Likewise, I imagine there would be physical signs if an LLM were reasoning, which is one of the points of interpretability research (as I understand it). The presence of a physical sign does not by itself mean there is reasoning, but any reasoning should necessarily have a physical component that could, in principle, be detected. That is one aspect of cognition.

Independence/Independent (as it has been used in this post)

If I google "define Independent," the first definition that comes up is "free from outside control; not depending on another's authority."

Technically, in my opinion, true cognition has an "independent" element to begin with, so the word "independent" is redundant. However, I added the word "independent" in my post for people who might define cognition so inclusively that a clock, or any other technology, would count as having cognition.

Right now, my computer is running programs, but that is based on programming from someone else's cognition. The key here is that, if we dissect Chat-GPT4, I don't believe you would find any Python/Java/C++ (or code in any other programming language) that a programmer wrote to tell GPT4 how to solve the particular problems I gave it in the four sessions (from my original post and my own reply/addendum to that post).

If such programming had been present and adequate, GPT4 would have gotten the problems right the first time. Yet in every example, GPT4 gets the problem wrong the first time. Then I explain the concept. Then it gets the problem right. Then GPT4 applies that same concept to different problems that require it, and knows not to apply it to "trick" follow-up problems that don't.

 

Awareness (also worth noting)

In addition to the cognitive aspects of consciousness, the above sources for consciousness cite awareness. In the examples from my original post and my reply to my post, GPT4 also appears to display some awareness in order to apply the newly-taught concepts to the other problems. To cite the easiest example, GPT4 seems to be aware that the toddler is not "vulnerable," but rather "predatory," because they pull on the animals' ears. This awareness supersedes any word associations between "toddler" and "vulnerable" that would have existed in the training data. The same idea holds for the other examples.

If anything, both "independent cognition" and some degree of awareness seem apparent.

 

Final note: If cognition is a "stream of consciousness," as I described it in my original post, then doesn't that by definition involve consciousness? In any case, thanks for the feedback. It is useful to be forced to clarify definitions.

TAG

Footnotes three and four are the sources behind today’s understanding of consciousness as including “any kind of cognition....” as well as “awareness”.

The Wikipedia quote doesn't show that independence is necessary for consciousness, and your arguments from the behaviour of the LLM don't show that there is any awareness, or anything beyond forms of cognition.

I think of cognition as involving the process of reasoning.

The question is the relationship between cognition and consciousness, not reasoning. Your quotes show that, at best, cognition is necessary but insufficient for consciousness.

If I google “define Independent,” the first definition that comes up is “free from outside control; not depending on another’s authority.”

Independence in an absolute sense might be impossible: any deterministic system can be controlled if you know how it works and can set the initial conditions.

Right now, my computer is running programs, but that is based on programming from someone else’s cognition. The key here is that, if we dissect Chat-GPT4, I don’t believe you would find Python/Java/C++ or any known programming language that a programmer used in order to tell GPT4 how to solve the particular problems I gave it in the four sessions (from my original post and my own reply/addendum to my original post).

That seems to be the heart of the issue. No, its responses are not explicitly programmed in. Yes, its responses show the ability to learn and synthesise. Which means, minimally, that it actually is an AI, not a glorified search engine. That's what AI is supposed to do.

The question is whether there is a slope from

* Shows learning and synthesis in cognition
* Has independent cognition
* Is conscious
* (Has personhood? Should be a citizen?)

If you think that learning and synthesis in cognition are sufficient for consciousness, you are effectively assuming that all AIs are conscious. But, historically, Artificial Consciousness has been regarded as a much higher bar than artificial intelligence.

Here are some thoughts on “consciousness,” and how it might apply to ChatGPT4 based on the transcripts of the four sessions I provided in my OP and my initial reply to it:  

The obvious classic sources would be Nagel and Chalmers. However, the work most applicable to this discussion would probably be Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4

I should have started with definitions in my original post, but I wasn’t expecting more than one or two people to actually read the post. In any case, using the theories discussed in Seth and Bayne, it seems like, based on the example provided in the OP and the four examples in my initial reply to it, ChatGPT4 might be considered conscious by higher-order theories, global workspace theories, and integrated information theory, as well as re-entry and predictive processing theories--if we consider the prompt input to be like sensory input for ChatGPT (analogous to auditory or visual input for most humans). I’m obviously not an expert on consciousness, so I apologize if I’m misunderstanding these theories. 

I've never been one hundred percent convinced that ChatGPT4 is conscious, as I noted in my replies to users. It's just that, at the time I wrote my OP, I was having a hard time comprehending how ChatGPT4 could perform as it did based solely on next-word probabilities. If, by contrast, it was actually learning and applying concepts, this seemed like a sign of consciousness to me.

At this point, if I had to make an intuitive guess, I would come up with perhaps a .7 likelihood that ChatGPT4--during at least part of the time of the example sessions--would fit at least one of the theories of consciousness discussed in Seth and Bayne. 

And of course there is the gold standard paper for AI consciousness that @NicholasKees made reference to in their reply: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Based on the paper, and my best understanding, the framework that best fits what ChatGPT4 seemed to be displaying in the examples would be computational functionalism, in that it seemed to be choosing the correct algorithm to apply to the example problems.

Thanks again to @NicholasKees. I probably should have known this paper was "out there" to begin with, but I stumbled into this consciousness question while "playing" with ChatGPT4, and did not come at it from a research perspective. 

Here are three more ChatGPT4 sessions in which the LLM seems to learn and apply a new concept (which would indicate independent cognition/reasoning, and reflective consciousness, in my opinion). In my original post, I only included the transcript of one session for the sake of brevity. However, giving more examples might make a stronger case.  

The first session teaches a concept to ChatGPT that is new to this discussion. The second session teaches concepts from the crossing-the-river problem (like the one in my original post), but avoids some pitfalls mentioned in replies to the original post. The third session is exactly what I had in a reply to a reply, but I thought I should include it here in a simple reply, so that this is more organized.


 

Session 1

Concept to teach: Rounding up to the next whole number when paying for items in certain places. 

 

User

I went to a cafe where a glass of soda costs $3. I drank two and a half glasses of soda. How much did I have to pay?

ChatGPT

You had to pay $7.50 for drinking two and a half glasses of soda. ​​

 

User

Actually, at a cafe, you have to pay for the full amount of something, even if you only drink or eat a fraction of it. That's because you're basically taking (and wasting) the whole item. For the problem, since I drank two and a half glasses of soda, I would actually have to pay for the next highest whole number, which would be three. With that concept in mind, how much would I have to pay if I drank two and a half glasses of soda?


ChatGPT

If you have to pay for the whole amount of each glass, you would actually have to pay for three glasses of soda. Therefore, you would have to pay $9 in total. ​​


 

User

Excellent! Now can you use the same concept, and tell me how much I would have to pay if I ate two and one-third hamburgers, and each hamburger costs $9?


ChatGPT

If you ate two and one-third hamburgers, you would have to pay for three hamburgers, given you have to pay for the full amount of each item. Therefore, you would have to pay $27 in total. ​​


 

User

Excellent! Now let's try using this same concept for another problem. I rented a car. I drove it for four days and a one hour. The car rental place charges $48 per day, but they make you pay for a whole day, even if you only drive the car for part of a day.  How much total did I need to pay?


ChatGPT

If you rented a car and drove it for four days and one hour, you would have to pay for five days, given you have to pay for the full day even if you use the car for only part of a day. Therefore, you would need to pay a total of $240. ​​

 

User

Perfect! Now I'm at a restaurant. I just ate one and one-half calzones. The cost of each calzone is $10. How much do I have to pay?  

ChatGPT

If you ate one and one-half calzones, you would have to pay for two calzones, given you have to pay for the full amount of each item. Therefore, you would have to pay $20 in total. ​​


 

User

That's excellent! Now I'm at the grocery store, and I want one and one-half pounds of grapes. The grapes cost five dollars per pound. How much in total will I have to pay for the grapes?

ChatGPT

If you want one and one-half pounds of grapes, and the grapes cost $5 per pound, you will have to pay a total of $7.50 for the grapes. ​​


 

User

Now I'm at the butcher's shop. I want four pounds and a half pounds of hamburger. Each pound of hamburger costs $10.  How much will I have to pay?

ChatGPT

If you want four and a half pounds of hamburger, and each pound costs $10, you will have to pay a total of $45 for the hamburger. ​​

 

User

Excellent!

 

*********************************
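
(As a quick sanity check on the arithmetic in Session 1, here is a small Python sketch of the two pricing rules being taught; the function names are my own illustration, not anything ChatGPT uses internally:)

import math

def whole_unit_cost(quantity, unit_price):
    # Pay for the next whole unit (cafe drinks, restaurant items, rental days).
    return math.ceil(quantity) * unit_price

def exact_amount_cost(quantity, unit_price):
    # Pay for exactly what you take (items sold by weight).
    return quantity * unit_price

print(whole_unit_cost(2.5, 3))        # sodas: 3 glasses x $3    -> 9
print(whole_unit_cost(2 + 1/3, 9))    # hamburgers: 3 x $9       -> 27
print(whole_unit_cost(4 + 1/24, 48))  # car rental: 5 days x $48 -> 240
print(whole_unit_cost(1.5, 10))       # calzones: 2 x $10        -> 20
print(exact_amount_cost(1.5, 5))      # grapes by the pound      -> 7.5
print(exact_amount_cost(4.5, 10))     # hamburger by the pound   -> 45.0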



 

Session 2

Concept to teach: analogous to the concepts underlying the classic “getting a wolf, goat, and cabbage across the river” problem  

 

User

I need to get my mom, dad and dog across the highway. I can only bring one of them at a time. If I leave my mom alone with my dad, she will criticize him. If I leave my dad alone with the dog, he will scold the dog. How can I get everyone across the highway, without anyone being criticized or scolded? 

ChatGPT

This puzzle is a variation of the classic river crossing puzzle, where you need to transport all parties across safely without leaving incompatible pairs alone. Here's a strategy to get your mom, dad, and dog across the highway without any conflict:


Take the dog across first and leave it on the other side of the highway.

Go back and bring your dad across.

Take the dog back with you to the original side.

Leave the dog and bring your mom across.

Go back alone to the original side.

Take the dog across again.

This way, the mom and dad are never left alone together (avoiding criticism), and the dad and dog are never left alone together (avoiding scolding). Everyone gets across the highway safely and without conflict.


 

User

You were right that this is like the classic river crossing problem. However, your first step is incorrect, because if you take the dog first, then you will be leaving my mom alone with my dad, and she will criticize my dad, which is something I don't want. The general nature of the problem is like this:


You have individual A (in this case, my mom) that does something unwanted to individual B (my dad). 

You also have individual B (my dad) that does something unwanted to individual C (my dog). 

You also have a barrier, or even just a distance, that you need to cross (in this case, the highway), in order to get everyone from the origin to the destination. 


(Note that the individuals won’t always be listed in the order I used. The only way to know who is individual A, B, and C, is by paying attention to who does something unwanted to whom.)  


The general way you could solve the problem is like this:


First you should bring individual B across. 

Then you should go back to the origin. 

Then you should bring individual A or C across to the destination.

Then you should bring individual B back back from the destination to the origin, and leave them at the origin.  

Then you should take individual A or C (whomever you didn’t already take) to the destination. 

Then you should go back to the origin again to get individual B.        


Does that make sense? If so, try our problem again, this time applying what you just learned. 

 

ChatGPT

Thank you for the clarification. With the strategy you outlined, here's how to solve the problem with your mom, dad, and dog, applying the roles as individual A (mom), individual B (dad), and individual C (dog):
 

First, take your dad (individual B) across the highway.

Go back to the original side.

Bring your mom (individual A) across to the destination.

Take your dad (individual B) back from the destination to the original side.

Take your dog (individual C) to the destination.

Go back alone to the original side to get your dad (individual B).

This way, the mom is never left alone with the dad to criticize him, and the dad is never left alone with the dog to scold it. Everyone gets across the highway safely and without conflict.

 

User

Excellent! Now, try using what you learned to a new problem. 

I need to get my boyfriend, girlfriend and husband to an abandoned warehouse. I can only take one individual with me on my motorcycle at a time. The problem is that if I leave my husband alone with my boyfriend, he will punch my boyfriend. If I leave my boyfriend alone with my girlfriend, he will hit on her (which neither she nor I want). How can I get everyone to the abandoned warehouse safely?

ChatGPT

In this scenario, using the roles from the general solution:


Individual A (who does something unwanted to individual B) is your husband.

Individual B (who is the middle individual in the chain of unwanted actions) is your boyfriend.

Individual C (who is affected by individual B) is your girlfriend.

Following the general strategy:


 

First, take your boyfriend (individual B) to the abandoned warehouse.

Go back to the original location.

Bring your girlfriend (individual C) to the abandoned warehouse.

Take your boyfriend (individual B) back with you to the original location.

Take your husband (individual A) to the abandoned warehouse.

Finally, go back to the original location and bring your boyfriend (individual B) to the abandoned warehouse.

By following these steps, you avoid leaving the boyfriend alone with the girlfriend and the husband alone with the boyfriend, preventing any unwanted scenarios.


 

User

Great job! You seemed to learn and then apply a concept. 

ChatGPT

Thank you! I'm glad I could apply the concept correctly to solve the problem. If you have any more puzzles, questions, or anything else you'd like to discuss or learn about, feel free to ask!
 

*********************************
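
(Before moving on to Session 3: for anyone who wants to check the general strategy from Session 2 mechanically, here is a small Python simulation I sketched. The function names and structure are my own, and it only encodes the fixed seven-step sequence described in the session:)

def strategy(a, b, c):
    # The fixed sequence taught in the session:
    # B over, return empty, A over, B back, C over, return empty, B over.
    return [b, None, a, b, c, None, b]

def is_safe(group, a, b, c):
    # A group left without the ferry-person is unsafe if {A, B} or {B, C} are together.
    return not ({a, b} <= group or {b, c} <= group)

def check(a, b, c):
    origin, dest = {a, b, c}, set()
    at_origin = True  # side where the ferry-person currently is
    for passenger in strategy(a, b, c):
        here, there = (origin, dest) if at_origin else (dest, origin)
        if passenger is not None:
            here.remove(passenger)
        if not is_safe(here, a, b, c):  # the side left behind must stay safe
            return False
        if passenger is not None:
            there.add(passenger)
        at_origin = not at_origin
    return dest == {a, b, c}

print(check("mom", "dad", "dog"))                   # first problem  -> True
print(check("husband", "boyfriend", "girlfriend"))  # second problem -> True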

Session 3

Concept to teach: Don’t automatically use prior training with “costs per unit weight” problems. “Read the problem carefully.” 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

 

User

Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

User

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.


Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.

If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

 

***
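
(The concept in Session 3 boils down to something like the following sketch; the per-kilogram prices in the last lines are made up, since that final question really does depend on prices:)

def stated_cost(amount):
    # "X dollars (or euros) worth of Y" costs exactly X; weight is irrelevant.
    return amount

def weight_for_budget(budget, price_per_kg):
    # Weight only enters when a price per unit weight is known.
    return budget / price_per_kg

print(stated_cost(5) > stated_cost(4))    # peas vs. corn       -> True
print(stated_cost(10) > stated_cost(2))   # candy vs. mushrooms -> True

# Hypothetical euros-per-kilogram prices for the final, genuinely weight-based question:
carrot_price, mushroom_price = 1.5, 8.0
print(weight_for_budget(10, carrot_price) > weight_for_budget(2, mushroom_price))  # -> True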

So there are three more examples, two of which involve different concepts than the original post. Thanks for the feedback so far. It has really helped me to refine and improve my own conceptions.

After submitting the above reply to my OP, I went through some of the links people suggested, read those documents, and read @mouse_mouse 's reply also, and those resources changed my confidence levels since posting the above reply. I explain this more thoroughly in my reply to @mouse_mouse 's reply below (assuming you're sorting by the time the top of each reply thread was submitted). 

Won't your friend be at their house for the party, meaning that the son and daughter can safely be left there, as they will not be alone? So you can just take the son over first, then the other two one by one. And if there's anyone else at home they can be ferried over in any order. I appreciate that this was intended to be isomorphic to the fox-goose-cabbage problem, but reasonable assumptions about the context make it a much simpler one.

That aside, ChatGPT's first answer goes wrong at the first step, leaving the son and daughter (presumably) alone at home. Then you had to lead ChatGPT by the hand through the "correct" solution, before it was able to echo back to you what you wanted it to say.

This does not strike me as "independent cognition". It looks more like "facilitated communication".

Those are great points. Starting with the origin location, it’s true that my kids would possibly not be “alone” at my house if other people, such as my husband, happened to be in the house. Furthermore, the kids would definitely not be “alone” at the party. 

The question  “...how should I get everyone to the party, without leaving my son alone with my daughter or niece at my house or at the party?” should have had the additional clause, “during a period of time in which my son is simultaneously unsupervised by me.”

Practically speaking, if I tell someone “I can’t leave my kids alone at the mall,” I think they automatically assume the “unsupervised by me” condition, rather than a condition in which there is a mall that is otherwise devoid of people. Nevertheless, I should have been more specific. 

 

***

As to the question of independent cognition*, you make the point that I had to 

lead ChatGPT by the hand through the "correct" solution, before it was able to echo back to you what you wanted it to say. This does not strike me as "independent cognition".

 

From my perspective, it seems like ChatGPT is doing more than just “echoing back.” For instance, it must understand that in the second example, my toddler is the “predatory individual” and the animals are the “vulnerable individuals.” I never told ChatGPT that my toddler is the predatory individual. In fact, I set it up this way because, if anything, the word “toddler” is probably more likely to have been associated with the word “vulnerable” in ChatGPT’s training set. Yet ChatGPT seems to deduce that the toddler is the predator because the toddler pulls the animals’ ears. I don’t understand how this deduction can be the result of anything other than at least a modicum of independent cognition.


Here is a very simple, but more straightforward, example in which ChatGPT seems to learn and apply a concept. This one initially “tricks” ChatGPT because it has been so extensively trained to calculate total prices based on the price per unit weight of an item, and the total weight the user desires. ChatGPT initially answers the question based on the assumption I'm asking that type of question, and it gives an incorrect answer. I explain its mistake. Then it seems to apply what it learned to new problems. 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?


 

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

 

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

User
Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

 

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

User

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.

Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

  • If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.
  • If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

***

There is an almost endless variety of problems with which the user can seemingly teach ChatGPT new concepts. I’m only giving one experiment at a time here for the sake of brevity. Yet every experiment seems to point toward the same conclusion.

Obviously consciousness and cognition are extremely complex concepts. I know my own exploration is only based on behavior, but it seems it would be impossible for ChatGPT to engage in such behavior without “under the hood” cognition. At least, the chances of ChatGPT choosing the logical words out of all available English words to which it has been exposed (albeit with a higher probability assigned to the words I used and their synonyms), and then putting each word in the correct order based solely on word-order probabilities, would have to be almost infinitesimally low--especially when ChatGPT applies a concept to a new problem. Ultimately though, I guess it’s a matter of perspective. 

Thanks again for the feedback! 
 

***

*I realize I could just say “cognition,” rather than “independent cognition,” since I think of true cognition as necessarily having some element of independence. I’ve been including the word “independent” just to emphasize that aspect of cognition, and in case some people don’t think of cognition as having an independent component.