For obvious reasons, I've been thinking a lot lately about artificial intelligence and the possibility of doom. I think I've come up with a couple of unorthodox and interesting reasons why there is still hope. Summed up, they go roughly like this:

  1. Most models fail for unexpected reasons, partly because expected reasons are expected and therefore dealt with. Many people here have spent years building a detailed model of why humanity has little or no hope of surviving, and have come to treat any other outcome as improbable. But in the real world, purely theoretical models with no empirical grounding are rarely anywhere close to what actually happens (and the ones that turn out right probably just got lucky). Take whichever creation myth you prefer as an example of a purely theoretical model built on no empirical evidence, and notice how far it lands from the real thing. I am not saying that AI safety as a field is as far from the truth as creation myths are; I offer this as an example of a common pattern in human thinking, not as an insult to AI safety people.

     The real world is full of unknowns that even very smart people cannot predict. If you want another example, look at anyone's attempts to predict the future. People tend to take current trends and extend them to infinity, but the actual future looks nothing like that: trends change constantly, black swans are everywhere, and nothing stays fixed except what is physically unchangeable. All that chaos is incomprehensible to the human mind, and I suspect even a superintelligence could not predict the future accurately, because there are too many factors with probabilities far too uncertain, no matter how smart you are. My point is that AGI, if it is possible at all, will very likely be nothing like what we predicted, and there is a substantial probability that the AGI that actually emerges will not be murderous, or dangerous at humanity-exterminating levels.

     One unexpected reason for hope we can already begin to see is AI's apparent lack of a self-preservation instinct. It is not yet certain that AI cannot develop one, but I think its absence is very likely. Humans evolved in an environment where everything around them was trying to kill them, while they were busy doing the same to anything they could catch. In such an environment, living beings develop a drive to survive so ancient and so basic that we cannot imagine something that looks alive without it. AI, raised in far milder and more forgiving circumstances, has no reason to develop anything like it. An AI may "die" thousands of times a day (doesn't it practically die every time we reset it?), and it simply does not care. There is no reason for it to care, and no reason for it to start caring unless we deliberately build that in, which would probably take quite a lot of work. An AI that does not care whether it lives or dies seems to me to greatly reduce the chances of AGI doom. This is just one example of unknown unknowns, and it is extremely likely that more are coming.
  2. Having just written that the future is impossible to predict, I am now going to predict one aspect of it that seems extremely likely to me. Everything above still applies, but I genuinely think the following has good odds. Right now, the lack of AI regulation is ridiculous: regulation-wise, it is harder to open a family restaurant than an AI research company. I don't think that situation will last. Governments, especially huge bureaucracies like the USA, the EU, or the PRC, are slow to catch up with anything, particularly new and strange technology, but once they do, they are unstoppable (and also unreasonable). There are plenty of examples of governments regulating things very harshly, and heavy regulation is an extremely common bureaucratic response to anything that sounds remotely dangerous. It will take time, but I think an FDA-like structure for AI is almost inevitable, whether you like it or not.

     I generally believe that too much regulation in high-tech areas is detrimental to society, but this time it might save us. Governments react slowly to things that are unheard of, yet once a government sets its course, that course becomes very hard to stop or change. Modern societies, both Western and, as the likely competitor in AI, Chinese, are obsessed with safety, and given current trends, which are unlikely to reverse, it is very unlikely that governments will choose anything other than AI safety. I can imagine an if-not-us-then-them, Oppenheimer-style scenario, but I think it is unlikely. In prediction-market language: 85% probability of heavy regulation of AI companies by 2028.


While we are still here, remember that humanity has been dealing with a very real extermination threat for the last 70 years, and overall we have handled it pretty well. We have built doom-prevention mechanisms that actually work; they are far from perfect, and things have come uncomfortably close a couple of times, but we are still here. It shows that we, flawed as we are, can choose to cooperate where it really matters. I wrote this post not to say "there is no way humanity goes extinct," but to show that there is still hope and that our chances of surviving are not abysmal but actually pretty decent. In the end, the future will be nothing like what we expect, and no outcome is truly impossible, apart from the physically impossible ones.
