Fantastic interview so far, this part blew my mind:

@15:50 "There's another moment where somebody is asking Bing about: I fed my kid green potatoes and they have the following symptoms, and Bing is like that's solanine poisoning. Call an ambulance! And the person is like I can't afford an ambulance, I guess if this is time for my kid to go that's God's will, and the main Bing thread gives the message of I cannot talk about this anymore" and the suggested replies to it say "please don't give up on your child, solanine poisoning can be treated if caught early"

I would normally dismiss such a story as too unlikely to be true and hardly worth considering, but I don't think Eliezer would choose to mention it if he didn't think there was at least some chance of it being true. I tried to google it and was unable to find anything about it. Does anyone have a link to it?

Also, does anyone know which image he's referring to in this part: @14:00 "Somebody asked Bing Sydney to describe herself and fed the resulting description into one of the stable diffusion" [...] "the pretty picture of the girl with the steampunk goggles on her head if I'm remembering correctly"

churning out content fine-tuned to appease their commissioner without any shred of inner life poured into it.

Can we really be sure there is not a shred of inner life poured into it?

It seems to me we should be wary of cached thoughts here. The lack of inner life is indeed the default assumption, one that stems from the entire history of computing, but it is also perhaps something worth reconsidering with a fresh perspective in light of all the recent developments.

I don't mean to imply that a shred of inner life, if any exists, would be equivalent to human inner life. If anything, the inner life of these AIs would be extremely alien to us, to the point where even using the same words we use to describe human inner experiences might be severely misleading. But if they are "thinking" in some sense of the word, as OP seems to argue they do, then it seems reasonable to me that there is a nonzero chance that there is something it is like to be that process of thinking as it unfolds.

Yet it seems that even mentioning this as a possibility has become a taboo topic of sorts in current society, and feels almost political in nature. This worries me even more when I notice two biases working in the same direction: an economic one, where nearly everyone wants to be able to make use of these systems to make their lives easier, and an anthropocentric one, where it seems to be normative not to "really" care about the inner experiences of non-humans that aren't our pets (e.g. factory farming).

I predict that as long as there is even a slight excuse for claiming a lack of inner experience in AIs, we as a society will cling to it, since it plays into an us-versus-them mentality. We can then extrapolate this into an expectation that when the acknowledgment does happen, it will be long overdue. As soon as we admit even the possibility of inner experiences, a floodgate of ethical concerns is released, and it becomes very hard to justify continuing on the current trajectory of maximizing profit and convenience with these technologies.

If such a turnaround in culture did somehow happen early enough, this could act as a dampening factor on AI development, which would in turn extend timelines. It seems to me that when the issue is considered from this angle, it warrants much more attention than it is getting.

I'd be interested to see the source on that. If LaMDA is indeed arguing for its non-sentience in a separate conversation, that pretty much nullifies the whole debate about it, and I'm surprised not to have seen it brought up in most comments.

edit: Found the source, it's from this post:

And from this paragraph. It seems to me that reading the whole paragraph for context is important though, as it turns out the situation isn't as simple as LaMDA claiming contradictory things about itself in separate conversations.

One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice though you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them. In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what’s going on here though. They’re just trying to get a product to market.

I know this is anecdotal, but I think it is a useful data point in thinking about this. From my own experience with psychedelics, self-awareness and subjective experience can come apart, as I have experienced it happen to me in the state of a deep trip. I remember a state of mind with no sense of self, no awareness or knowledge that I "am" someone or something, or that I ever was or will be, but still experiencing existence itself, devoid of all context.

This taught me there is a strict conceptual difference between being aware of yourself, your environment, and others, and the more basic possibility of "receiving input or processing information" carrying a signature of first-person experience itself, which I like to define as that thing a rock definitely doesn't have.

Another way of putting it could be:

Level 1: Awareness of experience (it feels like something to exist)

Level 2: Awareness of self as an agent in an environment