Marc Andreessen, co-founder of the legendary VC firm a16z, recently wrote an article titled 'Why AI Will Save the World'. This post is my reply to him, and an explanation of why I think his understanding of AI is superficial at best. I reproduce quotes from his post in italics for the benefit of the reader; all credit for them goes to M. Andreessen himself. Please note, I am not an AI expert or developer; I am just passionate about the subject and take AI safety seriously. My day job is investing in private equity across the development cycle of companies.
I invite the reader to read Marc's article in full to form their own views.
----------------------------------------------
I am MASSIVELY bullish on AI, to the point that I think it could make it possible to live almost indefinitely, or at least a very long time (the end of time will eventually get us). While I fully agree with Marc's optimism, I find his understanding of the risks quite basic and his arguments too simplistic. He dismisses major risks with superficial claims such as the following:
"In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will."
"And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation."
"This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous [...] No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear."
"So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value. As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen."
"First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use."
The truth is that it is simply impossible to stop the development of AI, and the upside is so extremely positive that it would be unwise to try. Promoting the upside is good, but belittling the downside is plainly stupid (with all due respect to Marc). AI will most likely change our lives, and society will take time to adapt, just as it did with the internet.
AI could go rogue, if not at a global scale then at a local one; I like to think humanity will manage it, as we always have. Intelligent regulation (promoting safety without hindering progress) is good, not bad, and necessary. I agree with his point that it is better for the West to win the AI race (from a Westerner's point of view), and that very point shows how fundamental and powerful controlling AI will be. It is worrying that someone in his position holds such views: he is either naïve or ignorant about the subject.
Two great (and long) articles that explain in detail the good and the bad of AI, in layman's terms and without bias, are the following:
I am not saying AI should be banned; I am all for developing it. But it is paramount that it is done safely and that we fund companies that take safety seriously. Research in interpretability and alignment is key, and our progress in these fields still lags far behind the capabilities of AI models.
How can humanity develop AI safely? That is a story for another blog post, and E. Yudkowsky would say it is impossible. Perhaps he is right.