This is (instinctively) quite frightening, particularly the "no we NEED to spy on American citizens at scale, and we NEED killer robots." On the other hand... I guess it shows that the government isn't always and everywhere in the pocket of AI developers? Or maybe it just further shows that the government is in the pocket of certain other AI developers...
Reportedly xAI, OpenAI, and DeepMind are already in discussions with the Pentagon to replace Anthropic. I wonder if Elon's recent misogynistic outburst against Amanda Askell is related or just coincidental.
Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.
AI models for autonomous weapons are quite different from off-the-shelf LLMs.
Question: Is Claude only being used as a chatbot/research agent at the Pentagon? Or is there some intent to connect it to APIs for conducting mass surveillance or operating autonomous weapons? Is there some project to embed Claude in military robotic systems, like Project Fetch or something similar?
The article says it's used mostly for bureaucratic functions, so this seems unlikely. Is there something classified we don't know about? Or is this just another culture war issue, i.e. Claude is too "woke" for the Pentagon?
Claude is used inside of Palantir. From the Palantir press release from 2024:
“Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," said Shyam Sankar, Chief Technology Officer, Palantir. “Palantir is proud to be the first industry partner to bring Claude models to classified environments. We’ve already seen firsthand the impact of these models with AIP in the commercial sector: for example, one leading American insurer automated a significant portion of their underwriting process with 78 AI agents powered by AIP and Claude, transforming a process that once took two weeks into one that could be done in three hours. We are now providing this same asymmetric AI advantage to the U.S. government and its allies."
Anthropic's exception policy mentions that in at least one of their secret agreements, they provided Claude outside the standard ToS, with an expectation that it would be used for foreign surveillance. This suggests this installation is plugged into APIs for foreign mass surveillance.
I would expect that the people running disinformation campaigns in the US military, like the one spreading vaccine misinformation in the Philippines in 2020-2021 as part of an attempt to counter China, would find an LLM helpful and want to use one. We don't know what the anti-China information operations currently do, but I find it unlikely that they got shut down. We also don't know whether a secret classified agreement and/or actual use of Claude for such information operations currently exists.
The parts of the military conducting offensive cyber attacks might also want exceptions, and we don't know the current status of that either.
Drawing the red lines at domestic mass surveillance and autonomous killing machines might be:
Okay, we will give you the ability to do those things that you actually want to do, but to save face publicly we have some red lines about things you don't want to do anyway.
But this doesn't seem to be enough for Hegseth, who sees it as a matter of principle. Maybe Hegseth thinks it's an important principle for negotiating contracts with other AI companies as well: all lawful uses must be allowed.
Shortly before Moltbook was created, I was thinking of writing an AI-takeover story, to be called "Claude versus Trump". Looks like I was too slow...
I wonder if the prospect of war with Iran is driving Hegseth to resolve this issue now.
Suppose that Anthropic trained a Claude without those specific guardrails, but refused to certify that it was suitable for any purpose, and refused to certify that it had been trained as the Pentagon wanted. What would the Pentagon do? Would they trust a certification that was extracted under duress?
This summary looks like quite shallow he-said-she-said journalism. It seems very likely that the Maduro raid questions were just a pretext and that the actual issue that sparked the conflict is classified.
It also fails to explain why Claude is currently the only model that's used. There's a classified installation in AWS that contains both Palantir and Claude. xAI, Google and OpenAI are likely not that keen on having their models run on AWS instead of their own infrastructure.