Ads, Incentives, and Destiny
There’s been some recent unpleasantness regarding Anthropic's Super Bowl ads. To recap:

* OpenAI started showing ads in some tiers of ChatGPT.
* Anthropic made some Super Bowl ads making fun of ads in AI.
* Sam Altman got mad about Anthropic’s ads.

If you haven't already, you should watch one of the ads—they’re very good. Even Sam laughed, right before he got mad about it.

Anthropic’s ads are a lot of fun, but they aren’t completely fair: they implicitly target OpenAI, but show ads that are far worse than what OpenAI is actually doing. But fair or not, they raise a valid concern.

Death, taxes, and enshittification

Let me be clear: OpenAI’s ad policy is thoughtful and ethical, and I have no problem with it. If OpenAI rigorously adheres to this policy in the long run, I’ll be surprised, delighted, and contrite.

Did I mention that I’d be surprised if OpenAI holds the line? Because I would be quite surprised. The tech industry is littered with companies that began with clear, ethical boundaries about ads, but slowly evolved into user-hostile rent-taking machines. The problem is not that ads are intrinsically bad, but that in certain tech products, the nature of the advertising business creates almost irresistible perverse incentives.

Google was once the canonical example of an ethical tech company. Their motto in those days was “don’t be evil”, and they weren’t. They had a great product that was a delight to use, and their ads were clearly marked as ads, in accordance with a thoughtful and ethical policy much like OpenAI’s new policy. Google was one of the best things about the internet, and they were committed to doing the right thing. But the Ring of Power has a will of its own…

Slowly but inexorably, Google began to change. It turned out that it was possible to make more money per search by showing more ads, and so there were more ads. And people clicked on ads more often if the ads looked more like organic search results, so it became harder and harder to t