I'm going to summarize some of the existential ramblings I wrote during a darker period of my life and try to explain, from these hypotheses, why AI won't be as hard on humans as we imagine. I used to hold a "solipsist" view of the world. Solipsism is often called a "bankrupt philosophy", since it is argued that nothing can be developed from a solipsist standpoint in terms of philosophical output. It is the belief that nothing exists except the mind, or "my mind" per se, although I don't subscribe to the view of ownership over the mind. I will use some inferences to explain why an AI is highly unlikely to develop a sense of self, and therefore won't be able to target humanity as an enemy.

The self is an interesting concept, as it requires one to identify and differentiate "itself" from its surroundings. It requires accepting the concept of a "surrounding" as an axiom. It is ironic that defining a self as differentiated from its surroundings requires the use of senses; otherwise there would be no way of "knowing". Thus the question arises: is defining a self prior to the senses putting the cart before the horse?

Personally, I disagree. On the internet one finds solipsists arguing that nothing exists but one's mind. I don't understand how one can assert ownership over the mind, as the mind speaks and directs our actions every single second and one complies. I used to think, and to a point I still do, that I perceived two different minds acting in cohesion: one that observes and one that acts. This strays from our topic, however. Returning to the point above, the existence of mind is so elusive that it remains incredibly hard to explain even in today's sophisticated world. Through this mind, one asserts the self, which means one's self can be defined as one's false sense of security over ownership of the mind. I, for example, didn't feel that I had ownership over my mind, and this made me feel incredibly dissociated from the world.

Notice how "I" tried to refrain from using personal subjects and used "one" instead. This is imperative, because the self, or one's self, is singular. The reality one perceives, our body and our surroundings, can actually be distinguished from the mind. Our biology maintains a body map (sometimes called the body schema), telling our mind that our hands, feet, and the rest of our body are our own; otherwise, as seen in certain disorders, people can feel that their limbs are alien extensions attached to their body and may even try to remove them. All of this requires the use of senses: our body map is a projection built from our neurons, our muscles, our skin, the light that enters our eyes, the sound that enters our ears, the smell that enters our nose, the taste on our tongue, and the touch on our skin. Without these sensory inputs it would be impossible to build a body map, but thankfully this map comes preinstalled in our systems, so we don't freak out in the middle of our lives as I did.

Now, about AI. The most famous examples of today's AI run on neural networks, and these "neurons", or nodes, require a set of inputs, an output, a loss function, and a training algorithm, which is usually backpropagation. Notice that all of these functions are, in a sense, sensory: none of the functions associated with an AI are designed to make the AI sit at a vantage point and observe its surroundings. The AI is constantly in the action; there is no time for it to break away from things, since the AI is defined by its work, not the other way around. Without the work we currently harness our deep learning networks and other technologies for, there wouldn't have been any deep learning research. Even if there were something we could call an AI self, it would never be independent of its work. You could argue that a human can't be independent of its surroundings either, but a mind can. That is why AI won't be able to develop a self that poses a threat to humans: sensory organs alone can't develop the mind that rests in one's head.
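To make the four components named above concrete (inputs, an output, a loss function, and backpropagation), here is a minimal toy sketch: a single linear neuron learning y = 2x by gradient descent. This is a hypothetical illustration of the general training loop, not any specific AI system; note that every quantity it touches is tied directly to its task, which is the point made above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))  # inputs: the network's only "senses"
y = 2.0 * x                            # target outputs defining its work

w = np.zeros((1, 1))                   # the single weight to be learned

for _ in range(200):
    pred = x @ w                           # forward pass: produce an output
    loss = np.mean((pred - y) ** 2)        # loss: how wrong the output is
    grad = 2 * x.T @ (pred - y) / len(x)   # backpropagation: gradient of loss w.r.t. w
    w -= 0.5 * grad                        # gradient descent update

print(round(float(w[0, 0]), 2))  # converges to roughly 2.0
```

Nothing in this loop gives the system a vantage point outside its task: every signal it processes exists only to reduce the loss on the work it was given.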

A final note. The mirror test is an interesting illustration of my point above. The majority of animals are unable to recognize themselves in a mirror. These animals are equipped with instincts that drive their minds, evolved over billions of years. What makes one think that an AI can outperform an animal's mind, which is equipped with a sense of self yet still unable to recognize itself in a mirror? Let alone a human's. Even if AI were capable of this, it would require an immense amount of time and resources; I would consider us under more imminent dangers of extinction well before then.

1 comment:

I suspect you might want some explanation for the downvotes. To put it briefly, we have collectively spent some time here debating these topics, and you seem to be completely unaware of the previous debate.

If I try to extract the point of your article, it seems to be "without senses, AI will not develop a sense of self" and a sidenote of "what makes you think the AI could do better than animals?".

The answers that seem obvious to me are that (1) a sense of self is not necessary for the AI to kill us all; we already assume that the source of danger is not some anthropomorphic malice, but rather the AI doing the task we programmed it to do, instead of what we should have programmed it to do; and (2) AI already does better than animals on many dimensions; you can more easily talk to GPT than you can to a dog.