Sometimes people think that it will take a while for AI to have a transformative effect on the world, because real-world “frictions” will slow it down. For instance:

[Figure: An example of possible “speed limits” for AI, modified from AI as Normal Technology.]

I think this is basically wrong. Or more specifically: such frictions will be important for AI for the foreseeable future, but not for real AI.
Unlike previous technologies, real AI could smash through barriers to adoption and smooth out frictions so effectively that it’s fair to say real AI could “deploy itself”. And it would create powerful incentives to do so.
I’m not talking about a “rogue AI” that “escapes from the lab” (although I don’t discount such scenarios, either). What I mean is: these frictions can already be smoothed out quite a bit with sufficient expert human labor; it’s just that such labor is quite scarce at the moment. But real AI could be used to implement a cracked team of superhuman scientists, consultants, salespeople, and lobbyists. So (for instance) AI companies could use it to overcome the frictions of experimentation, integration, trust, and regulation.
I mention AI companies, since I think it is easy to imagine something like “what if OpenAI got this tech tomorrow?” But the more general point is that real AI could figure out for itself how it could be deployed smoothly and profitably, and that uptake would be swift because it would provide a big competitive advantage. So it’s likely this would happen, whether through the central planning of a big AI company, through market forces and individual people’s decisions, or even through the AI’s own volition.
Deployers of real AI would also benefit from some huge advantages that existing AI systems have over humans, such as the ability to be copied and to coordinate at scale. These factors facilitate rapid learning (the copies can share experience) and integration (which mostly comes down to learning a new job quickly). The ability to coordinate at scale could also act as a social superpower, helping to overcome the social barriers of trust and regulation by quickly shifting societal perceptions and norms.
Any of these factors could still create some friction; all but the most rabid “e/acc” types want society to have time to adjust to AI. And human labor can be used to add friction as well as remove it, like when a non-profit shows that a company isn’t playing by the book.
So we can imagine a world where people continue to mistrust AI, and regulation ends up slowing down adoption and preventing the real-world experiments needed to teach AI certain skills, since those experiments might be dangerous or unethical. These limitations would create friction for businesses integrating AI.
I do expect we’ll see more friction as AI becomes more impactful, but I don’t think it will be enough. The social fabric is not designed for advanced AI, and it is already strained by information technology like social media. Right now, AI companies get first dibs on their technology, and anyone who wants to add friction is playing catch-up. If real AI is developed before society gets serious about putting the brakes on AI, it’ll be too late. Because real AI deploys itself.
Tell me what you think in comments and subscribe to receive new posts!