Companies are building confidential computing architectures (“private AI clouds”) to run inference in a way that is private and inaccessible to the companies hosting the infrastructure. Apple, Google, and Meta all have versions of this in production today, and I think OpenAI and Anthropic are likely building this. Private AI clouds have real privacy benefits for users and security benefits for model weights, but they don’t provide true guarantees in the same way as end-to-end encryption. Users still need to place trust in the hardware manufacturer, third-party network operators, and abuse monitoring systems, among others.
A lawsuit between the New York Times and OpenAI has compelled OpenAI to produce 20 million user chats...
I saw that both Anthropic and OpenAI publish transparency reports on government requests for user data, including FISA orders and National Security Letters (NSLs).