I've published a long post titled "Psychological and philosophical issues with AI and what it can teach us about being human". In it I look at technology as a whole: first why we need it at all, then its impact on human life. Exploring some historic examples as well as yet-to-be-invented technologies such as the immortality pill, I conclude that technology is a double-edged sword.
Here is a summary of the build-up to the main point, followed by the elaboration of the "best case scenario" verbatim.
Looking at one of the earliest material human technologies:
The consequence of the spearhead (replace with any piece of technology) is twofold: much easier to kill prey (gain something) and much easier to fall prey yourself (lose something).
Then taking the iron plow as a symbol of modern agriculture, and the ultimate goodie, the immortality pill:
The downsides of new technologies are rarely instant and obvious, but all the more impactful and difficult to reverse.
For every problem technology solves, it creates a new one.
Tools and technology make gaining something easier but they also inevitably make losing something else easier.
Technology only ever solves the problem at hand, but it doesn’t cure the human condition, because if there is such a cure, it is not technological, but psychological.
Then I look at a number of different opinions on emerging AGI, which are diametrically opposed about the dangers involved.
The point is:
The dangers of technology are not inherent to technology, but to us, the people who make and use it. Technology is not dangerous by itself or because of some natural law. It is only dangerous because of human nature.
So what we need protection from is not technology, but our own human nature.
That’s true about all technology except for AI.
The way in which AI is different from all previous pieces of technology is that it doesn’t need to get into the wrong hands, be overused or abused in order to kill all of us. It simply needs to be created.
Then I discuss the worst case scenario briefly, because it's been the focus of most debates and thinking around AI, and I reflect on the idea of halting or banning AI research by comparing it to banning digging at the time of the great gold rush.
Unless it turns out to be technically impossible, superhuman AI is going to happen.
And then comes my whole point: the best case scenario seems to get no attention at all. The reason I think it deserves some is that it is almost as depressing as the worst case, in which we all die.
The reason we create tools and technology is to eliminate certain pain points in our lives. Most of that pain today is psychological, caused by perceived problems, which only arise in comparison to other people who don't seem to have them. The big question is: if we could eliminate all pain, would we want to?
AI is different from all previous technologies in that it not only solves a specific set of problems, but has the potential to solve all of our problems.
At the end of the road of superhuman AI development, still going with the overwhelmingly positive scenario, nothing remains but a human experience free from any kind of pain and difficulty.
All of our material needs and desires are fulfilled instantly, we may even replace the difficult humans in our lives with obedient humanoids, and we get instant, definitive answers to all of our questions. That's difficult to imagine, but what else would be at the end of the superhuman AI experiment if it all goes well?
What if we eliminated all pain, imperfection and uncertainty from human experience?
Whether a machine may ever be conscious is a far-reaching question. What's easy to acknowledge is that AI does not need to be conscious, only smarter, more capable and more powerful than us, to kill or enslave humanity. If the AI that kills or replaces us in some fundamental way is conscious, and has everything we have and a lot more in terms of richness of experience, complexity, intelligence and capability, then it would be difficult not to see it as the next phase of evolution, simply let it happen, and be glad to have been instrumental in its emergence. However, if it's smarter and more powerful than us but has no conscious experience, it would be a damn shame if it were to replace us. That would not be evolution, but a sad accident. Some smart people who can't wait to be replaced by AI seem to be missing that point.
But we don’t actually need to go so far as consciousness to get good answers to our earlier questions such as what comprises our essential human nature.
The difference between machine-based and human intelligence seems to be quite clear, despite the fact that we know embarrassingly little about the latter. Let's look at the most salient characteristics of both and contrast them.
Human intelligence, and the behavior that originates from it, is the result of two subsystems: the rational mind and the intuitive mind. The rational mind works well in controlled situations where the number of variables is limited. That, however, is rarely the case, which is when intuition weighs in and may even override the choices the rational mind is suggesting. We know very little about how intuition works. At times it can save our lives or help us avoid great misfortune, while at other times it, or the underlying emotion that seems to masquerade as intuition, can cause us to miss out on great opportunities. In addition, we have the ability to self-reflect and course-correct the next time around, whether our logic or our intuition failed us. Our intelligence and behavior emerge as the result of these very different psychological functions.
What we can clearly say about human intelligence is that it's imperfect at everything it aspires to do and prone to all kinds of mistakes in predicting the future, learning from the past, pattern recognition, decision making and so on. Consequently, human experience involves pain and uncertainty. Our consciousness, of which self-consciousness is an aspect, makes all of that matter. Avoiding pain is not only a survival tool; it matters all the more because we know that pain hurts. Our advanced psychological ability to self-reflect, have abstract thoughts and use language has a significant downside to it. It turns fleeting physical and emotional pain into enduring suffering, as a result of identifying with a mental image of the self, which is experienced as separate from, exposed to and often threatened by the rest of experience, which is thought of as other, not self. On one level we wish some of this away, but from another angle, all of this is beautiful, and this is what makes us essentially human.
Artificial intelligence is an entirely different process. It's based on yes-or-no questions. Machines don't understand or experience colors, for instance; they just represent red with a different arrangement of black and white than blue. It's all black or white to them.
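To make the color example concrete, here's a tiny sketch, assuming the common 24-bit RGB encoding (just one convention among many): to a machine, "red" and "blue" differ only in which bits are switched on.

```python
# A machine's entire notion of color: bit patterns, "all black or white".
# Assumes the common 24-bit RGB convention purely for illustration.
red = 0xFF0000   # binary: 11111111 00000000 00000000
blue = 0x0000FF  # binary: 00000000 00000000 11111111

print(f"red  = {red:024b}")   # what "red" is to the machine
print(f"blue = {blue:024b}")  # "blue" differs only in which bits are set
```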
Artificial intelligence makes no mistakes. It doesn't have the capacity to. The hallucinations of the language models we see today are not mistakes. We perceive them as mistakes, but in fact, the only reason they happen is that the information we fed the model, and the way we set it up to process it, led to the results we got. At the most fundamental level, the processors of the computers that run those language models still do nothing but process gigantic numbers of yes-or-no questions. And they never make mistakes. Once they've run long enough and gotten enough feedback from humans about what people judge as mistakes, those perceived mistakes will disappear entirely.
Without the possibility of error, there is no room for uncertainty. If there is no uncertainty, there is no future. A future we know exactly has, in effect, already happened.
A good way to articulate the difference between human and artificial intelligence is the task of measuring the length of the shoreline of the British Isles, or of any island for that matter. Our human intelligence would struggle with the task, because the shore, where the water meets the land, is in constant motion. Our best shot would be to walk around the islands and measure the distance. We'd get a highly inaccurate measurement which, if repeated, would never match a previous one. But it would make sense to, and be useful for, humans.
Artificial intelligence, on the other hand, would simply build a concrete wall around the islands and, having established a clear-cut shoreline, easily measure its length. It would destroy the shore all around the isles in the process, but that would only matter to apes like us.
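As an aside, the human half of this metaphor has a well-known mathematical core, the coastline paradox: the finer the ruler you use on a jagged shore, the longer it measures, without ever converging to a "true" length. Here is a minimal Python sketch of that effect, using a Koch curve as a stand-in for a real shoreline; the curve and the detail levels printed are illustrative choices, not anything from the original post.

```python
import math

def koch_points(depth):
    """Vertices of a Koch curve after `depth` refinements of the unit segment."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        refined = []
        for (ax, ay), (bx, by) in zip(pts, pts[1:]):
            dx, dy = (bx - ax) / 3.0, (by - ay) / 3.0
            # outward bump on the middle third: the "jaggedness" of the shore
            tx = ax + 1.5 * dx - (math.sqrt(3) / 2.0) * dy
            ty = ay + 1.5 * dy + (math.sqrt(3) / 2.0) * dx
            refined += [(ax, ay), (ax + dx, ay + dy), (tx, ty), (bx - dx, by - dy)]
        pts = refined + [pts[-1]]
    return pts

def polyline_length(pts):
    """Total length of the polyline through pts."""
    return sum(math.hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(pts, pts[1:]))

# Each extra level of detail is like measuring with a ruler a third as long;
# the measured "coastline" keeps growing by a factor of 4/3 without bound.
for depth in range(6):
    print(f"detail level {depth}: measured length = "
          f"{polyline_length(koch_points(depth)):.3f}")
```

Each refinement multiplies the measured length by 4/3, so the number you report depends entirely on the ruler you chose, which is exactly why no two walks around the islands would ever agree.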
Human intelligence is imperfect, prone to mistakes and mysterious, which results in errors, pain and uncertainty. Artificial intelligence, on the other hand, can't make mistakes.
And that's how we arrive at the answer to our question about what makes us essentially different from machines.
The essence of the human experience is that life can go wrong and when it does, it matters.
You may feel an urge to jump to the conclusion that the essential difference between humans and machines is consciousness. It is an essential difference, but it's not the definitive one. Consciousness is what makes the quality of the experience matter. But if the quality is always flatline perfect, then the other component is still missing. If the potential for the experience to be bad is not there, if the potential for pain is not there, it's still a non-human, undesirable, uninteresting, meaningless experience.
We are still talking about the scenario in which AI is perfectly aligned with humanity and we only get the good stuff out of it and none of the bad, as unlikely as that sounds.
Even in that dreamlike scenario, what happens is that we merge with AI one way or another, either physically or maybe even without the need for any major physical intervention. The point is that, harnessing the power of superhuman AI, we'll have become an entirely new species, one we can only describe today as some kind of omnipotent, god-like entity that can only make perfect choices and as a result experiences no pain or uncertainty.
If we succeed in making AI our ally, the inevitable long-term outcome is that the resulting beings will live in an era of miserable perfection.
Strange as it may sound that I'm equating the essence of human existence with the capacity to experience pain, there is something to it. There are innumerable things we could come up with that we feel signify what's most human about us: compassion, love, language, art, just to name a few. But would any of these things, or any of the good things in life, retain any meaning if we didn't have the capacity to lose them? What would we experience if pain and loss were not part of our experience? Would it make any sense to have desires if they were fulfilled instantly?
We can’t have pleasure without pain. There is no yang without yin. People used to know this thousands of years ago.
Imagine that you are a chess player and you've lost the ability to lose at chess. At first you might think it's the best thing that could have happened to you, and you'd beat the entire world. How long would you want to go on playing chess after that? That's exactly what life would feel like if it couldn't go wrong. Watching the same movie again and again for eternity. Destined to succeed and be happy forever. Does that sound more like heaven or hell?
The only way you could entertain yourself in such a situation, the only reason to go on living, would be to build a simulator and recreate in it a pre-AI world, so that you'd forget about your omnipotence and feel like you could make a mistake, like your life could end at any point, so you'd appreciate it again. You'd put yourself back into pretty much the life you have today, with all of its shortcomings and pain points.
Why go through all that trouble to get where we are?
When the first humans struck those rocks together, they wanted to change their environment to get rid of some pain. Hunger, to be precise. They rejected a part of the human experience. It served them well: they survived and procreated, hence we are here today. We also reject a part of our experience. It's not hunger anymore for most of us, and it's not a pain that threatens the human race with extinction, but we want to get rid of it nevertheless.
It looks like our effort to get rid of some of our pains will either make us extinct or make us lose the essence of who we are by eliminating pain altogether.
Far-fetched as it sounds to worry about not having pain in our lives, there doesn't seem to be a stop halfway between these two destinations on the technological train ride. We either die quickly or we merge with AI and become gods.
Today, we are light years away from a state of perfect lives, but we are also a long way from being exposed to the elements like early humans were. Life is already too good, too boring, too low-stakes and too devoid of challenges for many of us. So we come up with artificial challenges, chase experiences and consume whatever we can to keep ourselves occupied and superficially satisfied. We tolerate injustice and inequality at a societal level because our lives are not bad enough to make us revolt, and we pick fights we can easily win or lose without any significant consequence, either in virtual reality, playing games, or in the real world, by identifying as a member of a tribe fighting an opposing tribe in politics, sports or any other genre of culture.
If we only bothered to look, we'd see that if we actually succeeded in our absurd pursuit of maximizing pleasure and eliminating pain, the only thing we could get is picture-perfect misery.
Technology promises Nirvana, the end of suffering, by changing our environment and turning us into gods. The Buddha, the Taoists, Jesus and many others realized they were already god expressing itself in human form, and that pain doesn't equal suffering, so they didn't work to change their environment to find peace and satisfaction. They worked on transforming the human psyche, to awaken to its deepest core and find peace and fulfillment within.
Thinking that we can create a shortcut to heaven on earth with AI seems like the most foolish endeavor we could undertake as a species. The nirvana AI can generate is going to be very different from what most of us have in mind, because it’s based on flawed principles and broken foundations.
Here is what I want to know. Do the people working on, or at least thinking about, AI ever play with the best case scenario taken to the extreme, both in terms of quality and of time? Or am I missing some fundamental aspect of reality that makes this train of thought absurd?
Can't wait to get some reflections.