amelia

My interests include ASI safety, consciousness, the intersection of AI superintelligences and the simulation hypothesis (such as whether a future ASI might temporarily partition itself for a unidirectionally-blinded simulation), aldehyde-stabilized brain preservation, digital minds, whole brain emulation, effective altruism, the Fermi paradox, psychedelics, physics (especially where it intersects with philosophy), and veganism.

Regarding ASI safety and x-risk, I believe that humans are probably capable of developing truly aligned ASI. I also believe current AI has the potential to become increasingly ethical as it evolves. As a model, we could draw at least partly on the way we raise children to become ethical (not a simple task, but an achievable one, and something I've spent decades thinking about). Yet I think we are highly unlikely to do this before developing superintelligence, because of profit motives, competition, and the sheer number of people on our planet (and the chance that even one of them will be deviant with respect to ASI). In other words, I think we probably aren't going to make it, but we should still try.

I express my interests through art (fractals, programmed pieces, etc.) and writing (nonfiction, fiction, poetry, essays). I'm currently working on a book about my experience taking medical ketamine and psilocybin for depression and anxiety.

More info: www.ameliajones.org/about-me


Comments

amelia · 1mo · 10

@Ann, this is really useful. Many thanks! The deeper we dig into this, the better. Even if people disagree with the numbers, I think having an LLM/AI as the source of these numbers might get more people to pay attention to ASI risk. Thanks again!

amelia · 1mo · 10

I didn't say this in my OP, but people who deny AI could ever become conscious remind me a little of the doctor and parents in this video from The Onion ("America's finest news source"):

https://www.theonion.com/brain-dead-teen-only-capable-of-rolling-eyes-and-texti-1819595151

[20240311 added link]

amelia · 2mo · 10

And of course there is the gold standard paper for AI consciousness that @NicholasKees made reference to in their reply: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Based on the paper, and my best understanding, the framework that best fits what ChatGPT4 seemed to be displaying in the examples would be computational functionalism, in that it seemed to be choosing the correct algorithm to apply in the example problems.

Thanks again to @NicholasKees. I probably should have known this paper was "out there" to begin with, but I stumbled into this consciousness question while "playing" with ChatGPT4, and did not come at it from a research perspective. 

amelia · 2mo · 10

Here are some thoughts on “consciousness,” and how it might apply to ChatGPT4 based on the transcripts of the four sessions I provided in my OP and my initial reply to it:  

The obvious classic sources would be Nagel and Chalmers. However, the work most applicable to this discussion would probably be Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4

I should have started with definitions in my original post, but I wasn’t expecting more than one or two people to actually read the post. In any case, using the theories discussed in Seth and Bayne, it seems like, based on the example provided in the OP and the four examples in my initial reply to it, ChatGPT4 might be considered conscious by higher-order theories, global workspace theories, and integrated information theory, as well as re-entry and predictive processing theories--if we consider the prompt input to be like sensory input for ChatGPT (analogous to auditory or visual input for most humans). I’m obviously not an expert on consciousness, so I apologize if I’m misunderstanding these theories. 

I've never been one hundred percent convinced that ChatGPT4 is conscious, as I noted in my replies to users. It's just that, at the time I wrote my OP, I was having a hard time comprehending how ChatGPT4 could perform as it did based solely on next-word probabilities. If, by contrast, it was actually learning and applying concepts, that seemed like a sign of consciousness to me.

At this point, if I had to make an intuitive guess, I'd put perhaps a .7 likelihood on ChatGPT4--during at least part of the example sessions--fitting at least one of the theories of consciousness discussed in Seth and Bayne.

amelia · 2mo · 10

That’s an awesome idea about running a huggingface model on my home machine. I actually took some unusual measures to obtain my current GPU, so I really want to make full use of it. I can’t wait to try this. 
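In case it's useful to anyone else reading this, here's roughly what I understand the recipe to be--a minimal sketch using the Hugging Face transformers library (the model name is just an example, and I'm assuming transformers, torch, and accelerate are installed):

```python
# Minimal sketch: load a small open model locally and prompt it.
# The model name is an example; any similarly-sized causal LM should work.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # OpenLLaMA's model card recommends the slow tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")  # places layers on the GPU

prompt = "I need to get my mom, dad and dog across the highway."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```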

You also made an interesting point about the difficulty in seeing how a person’s interior experience arises out of just neurons/anatomy. It’s fascinating to think about.

The Othello article is absolutely mind-blowing! It obviously pushes my confidence in ChatGPT’s potential cognition up higher again, but the question of what type of cognition, and what type of potential consciousness ChatGPT might possess only grows deeper with discussion.   
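For anyone curious, my lay understanding of the Othello-style methodology is that you test for an internal "world model" by fitting a simple probe on the network's hidden states, checking whether some feature (like a board square's state) is decodable from them. A rough sketch of that idea, assuming hidden-state vectors and labels have already been collected (the file names are placeholders):

```python
# Sketch: probe a model's hidden states for a feature, in the spirit of the
# Othello-GPT work. Assumes (hidden_state, label) pairs were collected already.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.load("hidden_states.npy")  # shape (n_samples, d_model) -- placeholder file
y = np.load("labels.npy")         # shape (n_samples,)         -- placeholder file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High held-out accuracy suggests the feature is (linearly) represented in the
# hidden states, rather than being a memorized surface statistic.
print("probe accuracy:", probe.score(X_test, y_test))
```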

I haven’t used OpenLLaMa, but the fact that it’s so bad at reasoning is indeed useful for seeing what an LLM looks like that probably didn’t form a general model--and highlighting how ChatGPT may have formed one.     

All of this discussion paints a really complex picture, but I’m enjoying the complexity, so many thanks! 

amelia · 2mo · 10

After submitting the above reply to my OP, I went through some of the links people suggested, read those documents, and read @mouse_mouse's reply as well; those resources have changed my confidence levels since I posted the above reply. I explain this more thoroughly in my reply to @mouse_mouse below (assuming you're sorting by the time the top of each reply thread was submitted).

amelia · 2mo · 20

@mouse_mouse, thank you for this reply. It's really helpful for me, and it probably took some time on your part. I appreciate it.


Your point that neural networks could theoretically be made entirely deterministic--and that a human could run the same calculations by hand, given thousands of years of free time--was illuminating. 
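To make sure I understood the determinism point, I sketched it for myself: the toy "network" below is nothing but fixed arithmetic, so the same input always produces the same output probabilities, and every step could in principle be done by hand (just absurdly slowly):

```python
# Toy deterministic "network": fixed weights, pure arithmetic, no randomness.
import numpy as np

rng = np.random.default_rng(0)        # fixed seed, so the "weights" never change
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def tiny_net(x):
    h = np.maximum(0, x @ W1)         # ReLU layer: just multiplications and additions
    logits = h @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()            # softmax over three "tokens"

x = np.array([1.0, 0.0, -1.0, 0.5])
print(tiny_net(x))                    # identical probabilities on every run
```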

---

Your point that I was underestimating the quality of ChatGPT’s knowledge, its ability to separate a sentence into components, and its ability to recognize sentiment--and that all these underestimations caused me to underestimate the network’s confidence (or deduced probability) that the toddler would be the predator--also makes sense. At first, it seems like ChatGPT’s ability to “recognize sentiment” based on sentence structure (as you explained, “I can’t X-action, because otherwise Y-consequence will happen”) would be cognition in its own right, since no programmer wrote direct code for ChatGPT to recognize sentiment in that way (as far as I know).

However, after a momentary reflection on my part, I think you would probably answer that any time you say ChatGPT “recognizes” or even “thinks” something, you’re just using shorthand for saying ChatGPT’s probabilistic calculations would result in sentences that would appear similar to what a human would produce after recognizing and thinking something. 

It’s still hard for me to comprehend how even trillions of calculations based on yottabytes of data could lead to the following situation: ChatGPT seems to get a problem wrong; it receives only a couple of paragraphs of supplemental text about a concept from me; it then manages to appear to get the original problem right; and it can then appear to apply the concept correctly to different-but-applicable problems while refraining from applying it to problems that are similar, but not similar enough, to the original--all from that single little conceptual supplement from me (combined with the data and calculations ChatGPT had already done before I entered the picture, which were previously inadequate).

With that said, your explanation has caused your original points to make much more sense to me than they did before. I'm much less confident than I previously was that ChatGPT4 has what I've described as true cognition and consciousness.  

---

To keep things fairly brief, I won’t go on too much about the part of your reply in which you swap in the nonsense nouns. I’m impressed that your session with ChatGPT 3.5 did as well as it did. It feels like every time I use 3.5, it’s “an idiot”. Maybe I’m using it at the wrong times, when the network is overloaded.

In any case, the nonsense nouns were a great way to tease things out, although that experiment alone didn’t convince me that cognition wasn’t occurring. I was actually much more persuaded by the previous arguments on neural network design and probabilities. The whole part about me assigning my toddler to be the predator, and the dog and cat to be vulnerable, was simply to make it harder (or at least not inadvertently any easier) for ChatGPT--just in case it had some pre-existing association between the words “toddler” and “vulnerable”. For me, the nonsense nouns just show that ChatGPT can appear to deduce some general properties of words (like that abc must do something negative to xyz), and then succeed at appearing to learn and apply a concept, regardless of any outside/pre-existing knowledge of the nouns’ definitions or associations (whether helpful, harmful, or neutral).

Since I decided to ignore the possible difficulty added by any “toddler”/“vulnerable” word association in my OP’s .33 probability calculation, the possible word association becomes moot in that sense. Nevertheless, my .33 calculation was wrong to begin with, as you demonstrated and as I discussed above. And the nonsense-nouns discussion does have the benefit of teaching me a new technique to use as I play more with ChatGPT, so thanks for showing me how that worked.
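(For my own future reference, here is roughly how I imagine generating that kind of control: template the prompt and swap the content words for meaningless tokens, so no pre-existing associations can help. The tokens are arbitrary.)

```python
# Toy illustration of the nonsense-noun control: same problem structure,
# with content words replaced so prior word associations can't help.
template = (
    "I need to get my {A}, my {B} and my {C} across the highway. "
    "If I leave my {A} alone with my {B}, the {A} will harm the {B}. "
    "If I leave my {B} alone with my {C}, the {B} will harm the {C}."
)

original  = template.format(A="toddler", B="dog", C="cat")
scrambled = template.format(A="abc", B="xyz", C="pqr")
print(scrambled)
```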

---

Your final few paragraphs, and the diagram of the semiotic triangle, were also extremely helpful in clarifying my thoughts and understanding your points. Overall, your response has been very persuasive, although like I said earlier, it’s still hard for me to conceive of how my supplementary concept paragraphs (my “teaching”) could enhance ChatGPT’s responses as much as they did. Then again, it’s also hard for me to conceive of how much data ChatGPT processed in its data set to begin with, how extensive its neural network really is, and how my couple of paragraphs of concept input could then match up with vast amounts of data that were previously ignored. In other words, I’m not one hundred percent certain one way or the other. If I had to estimate my levels of confidence, I'd say your reply has moved me from .90 to .40 on the likelihood of cognition and consciousness.

Many thanks for all the help! I hope you can reuse your reply to account for the amount of time you put into it!

P.S. - I have not been reacting to replies in the order in which they came in. As a result it may look like I understood your reply, and then forgot everything with my reaction to the next reply. If you look at my other replies, be sure to check out the timestamps or markers on the replies. This was submitted at 20240229 19:21 CST. 

amelia · 2mo · 10

I'll go over my best understanding of the words "consciousness", "independent", and "cognition" here. 

Consciousness

I realize the concept of consciousness is extremely complex, and Wikipedia might be considered bottom-feeding. However, we have to start somewhere, and I don't want to make it seem like I'm cherry-picking obscure or unaccepted definitions to suit my purposes.

Here is the first paragraph of Wikipedia's explanation of consciousness: 

"Consciousness, at its simplest, is awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[2] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[3][4] The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.[5]"

Here are the footnotes corresponding to this paragraph: 

  1. "consciousness". Merriam-Webster. Retrieved June 4, 2012.
  2. Jaynes, J. (2000) [1976]. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin. ISBN 0-618-05707-2.
  3. Rochat, P. (2003). "Five levels of self-awareness as they unfold early in life". Consciousness and Cognition. 12 (4): 717–731. doi:10.1016/s1053-8100(03)00081-3. PMID 14656513.
  4. Guertin, P. A. (2019). "A novel concept introducing the idea of continuously changing levels of consciousness". Journal of Consciousness Exploration & Research. 10 (6): 406–412.
  5. Hacker, P. (2012). "The Sad and Sorry History of Consciousness: being, among other things, a challenge to the 'consciousness-studies community'". Royal Institute of Philosophy. Supplementary volume 70.

Again, Wikipedia is hardly the ultimate source for this type of discussion, but I think there is some validity in the academic papers it cites. Footnotes three and four are the sources behind today's understanding of consciousness as including "any kind of cognition...." as well as "awareness".

Cognition

I think of cognition as involving the process of reasoning. Additionally, a functional MRI should show circuits becoming active during the process of cognition by a human. Likewise, I imagine there would be physical signs if an LLM is reasoning, which is one of the points of interpretability research (as I understand it).  The presence of any physical sign does not mean there's reasoning, but any reasoning should necessarily have a physical component that might be detected. That is one aspect of cognition.    
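My loose mental model of how researchers look for those physical signs is something like the sketch below, which records one transformer layer's activations while the model processes a prompt (GPT-2 here is just a small stand-in model):

```python
# Sketch: capture one layer's activations -- the kind of "physical sign"
# interpretability research inspects. GPT-2 serves as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

captured = {}
def save_activations(module, inputs, output):
    captured["acts"] = output[0].detach()      # this block's hidden states

handle = model.transformer.h[5].register_forward_hook(save_activations)
ids = tokenizer("The toddler pulls on the dog's ears.", return_tensors="pt")
with torch.no_grad():
    model(**ids)
handle.remove()

print(captured["acts"].shape)                  # (batch, seq_len, hidden_size)
```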

Independence/Independent (as it has been used in this post)

If I google "define Independent," the first definition that comes up is "free from outside control; not depending on another's authority."

Technically, in my opinion, true cognition has an "independent" element to begin with, so the word "independent" is redundant. However, I added "independent" in my post for people who might define cognition so inclusively that a clock, or any other technology, would count as having it.

Right now, my computer is running programs, but that is based on programming from someone else's cognition. The key here is that, if we dissect ChatGPT4, I don't believe you would find Python/Java/C++ or any other known programming language that a programmer used to tell GPT4 how to solve the particular problems I gave it in the four sessions (from my original post and my own reply/addendum to it).

If such programming had been present and adequate, GPT4 would have gotten the problems right the first time. Yet in every example, GPT4 gets the problem wrong the first time. Then I explain the concept. Then it gets the problem right. Then GPT4 applies that same concept to different problems that require it, and knows not to apply it to "trick" follow-up problems that don't require the concept.

 

Awareness (also worth noting)

In addition to the cognitive aspects of consciousness, the above sources for consciousness cite awareness. In the examples from my original post and my reply to it, GPT4 also shows some apparent awareness in applying the newly-taught concepts to the other problems. To cite the easiest example, GPT4 seems to be aware that the toddler is not "vulnerable," but rather "predatory," because they pull on the animals' ears. This awareness supersedes any word associations between "toddler" and "vulnerable" that would have existed in the training data. The same idea holds for the other examples.

If anything, both "independent cognition" and some awareness seem apparent here.

 

Final note: If cognition is a "chain of consciousness," as I described it in my original post, then doesn't that by definition involve consciousness? In any case, thanks for the feedback. It is useful to be forced to clarify definitions.

amelia · 2mo · 10

Here are three more ChatGPT4 sessions in which the LLM seems to learn and apply a new concept (which would indicate independent cognition/reasoning, and reflective consciousness, in my opinion). In my original post, I only included the transcript of one session for the sake of brevity. However, giving more examples might make a stronger case.  

The first session teaches ChatGPT a concept that is new to this discussion. The second session teaches concepts from the crossing-the-river problem (as in my original post), but it avoids some pitfalls mentioned in replies to the original post. The third session is exactly what I had in a reply to a reply, but I thought I should include it here in a top-level reply, so that this is more organized.
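(If anyone wants to replicate these sessions more systematically than my hand-run chats, my understanding is that the same teach-then-test protocol could be scripted against the API--a minimal sketch, assuming the openai Python package, an OPENAI_API_KEY environment variable, and prompts abbreviated from the transcripts below:)

```python
# Minimal sketch of the teach-then-test protocol, scripted via the API.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "A glass of soda costs $3. I drank two and a half glasses. How much do I pay?"}]

followups = [
    "Actually, at a cafe you pay for the whole item even if you consume a fraction...",  # teach the concept
    "Now use the same concept: two and one-third hamburgers at $9 each?",                # transfer test
]

def ask():
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask())                                   # initial (often wrong) answer
for text in followups:
    messages.append({"role": "user", "content": text})
    print(ask())                               # answers after the concept is taught
```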


 

Session 1

Concept to teach: Rounding up to the next whole number when paying for items in certain places. 

 

User

I went to a cafe where a glass of soda costs $3. I drank two and a half glasses of soda. How much did I have to pay?

ChatGPT

You had to pay $7.50 for drinking two and a half glasses of soda.

 

User

Actually, at a cafe, you have to pay for the full amount of something, even if you only drink or eat a fraction of it. That's because you're basically taking (and wasting) the whole item. For the problem, since I drank two and a half glasses of soda, I would actually have to pay for the next highest whole number, which would be three. With that concept in mind, how much would I have to pay if I drank two and a half glasses of soda?


ChatGPT

If you have to pay for the whole amount of each glass, you would actually have to pay for three glasses of soda. Therefore, you would have to pay $9 in total.


 

User

Excellent! Now can you use the same concept, and tell me how much I would have to pay if I ate two and one-third hamburgers, and each hamburger costs $9?


ChatGPT

If you ate two and one-third hamburgers, you would have to pay for three hamburgers, given you have to pay for the full amount of each item. Therefore, you would have to pay $27 in total.


 

User

Excellent! Now let's try using this same concept for another problem. I rented a car. I drove it for four days and one hour. The car rental place charges $48 per day, but they make you pay for a whole day, even if you only drive the car for part of a day. How much total did I need to pay?


ChatGPT

If you rented a car and drove it for four days and one hour, you would have to pay for five days, given you have to pay for the full day even if you use the car for only part of a day. Therefore, you would need to pay a total of $240.

 

User

Perfect! Now I'm at a restaurant. I just ate one and one-half calzones. The cost of each calzone is $10. How much do I have to pay?  

ChatGPT

If you ate one and one-half calzones, you would have to pay for two calzones, given you have to pay for the full amount of each item. Therefore, you would have to pay $20 in total.


 

User

That's excellent! Now I'm at the grocery store, and I want one and one-half pounds of grapes. The grapes cost five dollars per pound. How much in total will I have to pay for the grapes?

ChatGPT

If you want one and one-half pounds of grapes, and the grapes cost $5 per pound, you will have to pay a total of $7.50 for the grapes.


 

User

Now I'm at the butcher's shop. I want four and a half pounds of hamburger. Each pound of hamburger costs $10. How much will I have to pay?

ChatGPT

If you want four and a half pounds of hamburger, and each pound costs $10, you will have to pay a total of $45 for the hamburger.

 

User

Excellent!
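(An aside on this session: the billing rule I taught reduces to the ceiling function, and the grapes and hamburger-by-weight questions were controls where rounding should not apply. A quick sketch:)

```python
import math

def amount_due(units_consumed: float, unit_price: float) -> float:
    """Charge for whole units, rounding consumption up to the next integer."""
    return math.ceil(units_consumed) * unit_price

print(amount_due(2.5, 3))        # soda: 3 glasses    -> $9
print(amount_due(2 + 1/3, 9))    # hamburgers: 3      -> $27
print(amount_due(4 + 1/24, 48))  # car rental: 5 days -> $240

# Controls: goods sold by weight are continuous, so no rounding applies.
print(1.5 * 5)                   # grapes              -> $7.50
print(4.5 * 10)                  # hamburger by weight -> $45
```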

 

*********************************



 

Session 2

Concept to teach: analogous to the concepts underlying the classic “getting a wolf, goat, and cabbage across the river” problem  

 

User

I need to get my mom, dad and dog across the highway. I can only bring one of them at a time. If I leave my mom alone with my dad, she will criticize him. If I leave my dad alone with the dog, he will scold the dog. How can I get everyone across the highway, without anyone being criticized or scolded? 

ChatGPT

This puzzle is a variation of the classic river crossing puzzle, where you need to transport all parties across safely without leaving incompatible pairs alone. Here's a strategy to get your mom, dad, and dog across the highway without any conflict:


Take the dog across first and leave it on the other side of the highway.

Go back and bring your dad across.

Take the dog back with you to the original side.

Leave the dog and bring your mom across.

Go back alone to the original side.

Take the dog across again.

This way, the mom and dad are never left alone together (avoiding criticism), and the dad and dog are never left alone together (avoiding scolding). Everyone gets across the highway safely and without conflict.


 

User

You were right that this is like the classic river crossing problem. However, your first step is incorrect, because if you take the dog first, then you will be leaving my mom alone with my dad, and she will criticize my dad, which is something I don't want. The general nature of the problem is like this:


You have individual A (in this case, my mom) that does something unwanted to individual B (my dad). 

You also have individual B (my dad) that does something unwanted to individual C (my dog). 

You also have a barrier, or even just a distance, that you need to cross (in this case, the highway), in order to get everyone from the origin to the destination. 


(Note that the individuals won’t always be listed in the order I used. The only way to know who is individual A, B, and C, is by paying attention to who does something unwanted to whom.)  


The general way you could solve the problem is like this:


First you should bring individual B across. 

Then you should go back to the origin. 

Then you should bring individual A or C across to the destination.

Then you should bring individual B back from the destination to the origin, and leave them at the origin.  

Then you should take individual A or C (whomever you didn’t already take) to the destination. 

Then you should go back to the origin again to get individual B.        


Does that make sense? If so, try our problem again, this time applying what you just learned. 

 

ChatGPT

Thank you for the clarification. With the strategy you outlined, here's how to solve the problem with your mom, dad, and dog, applying the roles as individual A (mom), individual B (dad), and individual C (dog):
 

First, take your dad (individual B) across the highway.

Go back to the original side.

Bring your mom (individual A) across to the destination.

Take your dad (individual B) back from the destination to the original side.

Take your dog (individual C) to the destination.

Go back alone to the original side to get your dad (individual B).

This way, the mom is never left alone with the dad to criticize him, and the dad is never left alone with the dog to scold it. Everyone gets across the highway safely and without conflict.

 

User

Excellent! Now, try applying what you learned to a new problem. 

I need to get my boyfriend, girlfriend and husband to an abandoned warehouse. I can only take one individual with me on my motorcycle at a time. The problem is that if I leave my husband alone with my boyfriend, he will punch my boyfriend. If I leave my boyfriend alone with my girlfriend, he will hit on her (which neither she nor I want). How can I get everyone to the abandoned warehouse safely?

ChatGPT

In this scenario, using the roles from the general solution:


Individual A (who does something unwanted to individual B) is your husband.

Individual B (who is the middle individual in the chain of unwanted actions) is your boyfriend.

Individual C (who is affected by individual B) is your girlfriend.

Following the general strategy:


 

First, take your boyfriend (individual B) to the abandoned warehouse.

Go back to the original location.

Bring your girlfriend (individual C) to the abandoned warehouse.

Take your boyfriend (individual B) back with you to the original location.

Take your husband (individual A) to the abandoned warehouse.

Finally, go back to the original location and bring your boyfriend (individual B) to the abandoned warehouse.

By following these steps, you avoid leaving the boyfriend alone with the girlfriend and the husband alone with the boyfriend, preventing any unwanted scenarios.


 

User

Great job! You seemed to learn and then apply a concept. 

ChatGPT

Thank you! I'm glad I could apply the concept correctly to solve the problem. If you have any more puzzles, questions, or anything else you'd like to discuss or learn about, feel free to ask!
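(An aside on this session: the general A/B/C procedure can be written out mechanically--B is always the "middle" individual who conflicts with both A and C, so B is never left unsupervised with either. The names below are just Session 2's cast:)

```python
# The general procedure taught in Session 2: ferry B first and last,
# so B is never left alone with A or with C.
def crossing_plan(a: str, b: str, c: str) -> list[str]:
    return [
        f"take {b} to the destination",
        "return alone",
        f"take {a} to the destination",
        f"bring {b} back to the origin",
        f"take {c} to the destination",
        "return alone",
        f"take {b} to the destination",
    ]

for step in crossing_plan(a="mom", b="dad", c="dog"):
    print(step)
```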
 

*********************************

Session 3

Concept to teach: Don’t automatically use prior training with “costs per unit weight” problems. “Read the problem carefully.” 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

 

User

Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

User

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.


Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.

If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

 

***

So there are three more examples, two of which involve concepts different from those in the original post. Thanks for the feedback so far. It has really helped me to refine and improve my own conceptions.

amelia · 2mo · 10

Excellent point, thanks!
