Is there a reason you say "real AI" instead of "AGI"? Do you see some gap between what we would call AGI and AI that deploys itself?
The term AGI has been confusing to so many people and corrupted/co-opted in so many ways. I will probably write a post about that at some point.
By the time we have "Real AI", the world will look substantially different than it does now. A "Real AI", dropped into today's world, would be enormously transformative and disruptive. But consider a "Kinda Fake AI": current AI extrapolated by another year of progress, assuming AI capabilities stay about as "spiky" as they are now, with the spikes in roughly the same places.
I expect we'll get this "Kinda Fake AI" substantially before we get the "Real AI" described in your post. And the "Real AI" will be coming into a world which already has "Kinda Fake AI" and (hopefully) has already largely adapted to it.
I think you're overoptimistic about what will happen in the very near term.
But I agree that as AI gets better and better, we might start to see frictions going away faster than society can keep up (leading to, e.g., record unemployment) before we get to real AI.
The "Real AI" name seems like it might get through to people who weren't grokking what the difference between today's AI and the breakthrough would be. At the same time maybe that helps them remember what they're looking for. This reads like an ad to me, in the sense that as a hypothetical person building it, this is the sort of article I'd want to write. Maybe that doesn't make it naturally do the thing my intuition feels like it does, but that's the first thing I think when reading it.
100 percent this. There is this perpetual miscommunication about the word "AGI".
"When I say AGI, I really mean a general intellignintellignence not just a new app or tool."
Sometimes people think that it will take a while for AI to have a transformative effect on the world, because real-world “frictions” will slow it down. For instance:
[Figure: An example of possible “speed limits” for AI, modified from AI as Normal Technology.]

I think this is basically wrong. Or, more specifically: such frictions will be important for AI in the foreseeable future, but not for real AI.
Unlike previous technologies, real AI could smash through barriers to adoption and smooth out frictions so effectively that it’s fair to say real AI could “deploy itself”. And it would create powerful incentives to do so.
I’m not talking about a “rogue AI” that “escapes from the lab” (although I don’t discount such scenarios, either). What I mean is: these frictions can already be smoothed out quite a bit with sufficient expert human labor; it’s just that such labor is quite scarce at the moment. But real AI could be used to implement a cracked team of superhuman scientists, consultants, salespeople, and lobbyists. So (for instance) AI companies could use it to overcome the frictions of experimentation, integration, trust, and regulation.
I mention AI companies since I think it is easy to imagine something like “what if OpenAI got this tech tomorrow?” But the more general point is that real AI could figure out for itself how it could be deployed smoothly and profitably, and uptake would be swift because it would provide a big competitive advantage. So this would likely happen one way or another: through the central planning of a big AI company, through market forces, through individual people’s decisions, or even through the AI’s own volition.
Deployers of real AI would also benefit from some huge advantages that existing AI systems have over humans:

- They can be copied at scale, and the copies can share what they learn with each other.
- They can coordinate with each other at a scale no human organization can match.
These factors facilitate rapid learning (the copies can share experience) and integration (which mostly comes down to learning a new job quickly). The ability to coordinate at scale could also act as a social superpower, helping to overcome the social barriers of trust and regulation by quickly shifting societal perceptions and norms.
Any of these barriers, such as trust and regulation, could still create friction, at least a bit — all but the most rabid “e/acc”s want society to have time to adjust to AI. And human labor can be used to add friction as well as remove it, as when a non-profit shows that a company isn’t playing by the book.
So we can imagine a world where people continue to mistrust AI, and regulation ends up slowing down adoption and preventing the real-world experiments needed to teach AI certain skills, since those experiments might be dangerous or unethical. These limitations would create friction for businesses integrating AI.
I do expect we’ll see more friction as AI becomes more impactful, but I don’t think it will be enough. The social fabric is not designed for advanced AI, and it is already strained by information technology like social media. Right now, AI companies get first dibs on their technology, and anyone who wants to add friction is playing catch-up. If real AI is developed before society gets serious about putting the brakes on AI, it’ll be too late. Because real AI deploys itself.
Tell me what you think in the comments, and subscribe to receive new posts!