It's great that everybody's talking about AI now, since it's the most important thing in the world. But sadly, people keep talking past each other.
When I got into the field of deep learning in 2012, nobody “serious” would use the term “AI.” It was understood to mean some future sci-fi technology: something that thinks like people and is as smart as people. The “AI” we were actually building wasn’t the real AI; it was just pattern matching, machine learning.
For many people, “AI” means the new technology of the last 2.5 years: basically ChatGPT, its competitors, and its successors. Is the AI that exists today the real AI?
No. We’re not there yet. But the real AI is coming. I think we’ve got about 5 years.
The AI we have today can do incredible things, things almost nobody would’ve dreamed of in 2012. And it’s important to understand the implications of current technology.
But the real AI is way, way, way more important. I think it’s going to lead to human extinction, probably within a few years of its development. At best, it will cause near-total unemployment.
This is where AI is headed, and experts who downplay or distract from this betray the public’s trust. Those who do so knowingly and deliberately are traitors to humanity. We’re supposed to be reassured that the real AI is decades or (now) “at least several years” away. But nobody has a plan for what happens when we get there.
It’s no secret that AI might wipe us out. I’ve been sounding the alarm since I got into the field; others have for even longer. More and more people are getting concerned, but not nearly fast enough. I’m still trying to wake the world up. That’s why I’m starting this blog.
This blog is laser-focused on the real AI. The AI of the future is not going to be chatbots; it’s going to be more like a new, alien lifeform. I’ll point out the ways others miss the point when they talk about “AI”: offering empty reassurances based on the limitations of current technology, reciting thoroughly refuted arguments for why humans will always be in control, or proposing solutions that will be out of date by the time we adopt them. I’ll talk about the realities of present-day AI as well, but always with a focus on the big picture: how does it relate to the real AI?
I’ve been an academic AI researcher for 12 years, ever since I heard about deep learning and thought, “Oh shit, this could go all the way.” I’m now an AI professor at the world-leading Quebec AI Institute (Mila). Before that, I was a professor at the University of Cambridge.
I’m also (in)famous among AI researchers for warning about the risk AI poses to humanity. I’ve been saying this for a long time. Since before the founding of OpenAI. Since before the first tech company hired the first AI safety researcher. Since before AI could compose a coherent sentence.
I’ve heard basically all the arguments for why we shouldn’t worry. They all suck. I’m fed up with people being misled and deceived. People have a right to know what’s coming, and I want to do my best to help everyone understand.