This is a cross-post from my Substack, Clear-Eyed AI. If you want my future articles sent to you, you can subscribe for free there. ~~~~ Superintelligence might kill everyone on Earth. At least, that’s what the three most-cited AI scientists of all time believe.1 Less clear is how this might...
Volkswagen’s 2014 Jetta was designed for an unusual purpose: deception. Ordinarily, the Jetta polluted at nearly 40x the legal limit. But its developers cheated at emissions testing, programming the car to notice the tests and to temporarily stop polluting. When measured, the Jetta appeared legal.1 The AI industry now has...
I wanted to refer back to OpenAI's recent podcast episode on economic impacts, so I created a transcript. The episode features their Chief Economist Ronnie Chatterji and their Chief Operating Officer Brad Lightcap, interviewed by former OpenAI employee Andrew Mayne. I hope others find this useful as well. For a...
A few days ago I wrote a shortform wondering if anyone had done a breakdown of the different state-level AI bills that had been proposed. People seemed interested, and so I ended up doing the analysis and writing up my findings. This is the beginning of the piece, which is...
A crisis simulation changed how I think about AI risk. Or: My afternoon as a rogue artificial intelligence. A dozen of us sit around a conference room table. Directly across from me are the US President and his Chief of Staff. To my left is a man who’s a bit...
Some competitions have a clear win condition: In a race, be the first to cross a finish line. The US-China AI competition isn’t like this. It’s not enough to be the first to get a powerful AI system. So, what is necessary for a good outcome from the US-China AI...
Is ChatGPT actually fixed now? I led OpenAI’s “dangerous capability” testing. Want to know if ChatGPT can trick users into accepting insecure code? Or persuade users to vote a certain way? My team built tests to catch this. Testing is today’s most important AI safety process. If you can catch...