Why I Signed The Amicus Brief for Anthropic v Department of War
On Monday, Anthropic filed a lawsuit against the Department of War, and an amicus brief in support of Anthropic was filed on behalf of a number of OpenAI and Google employees. See coverage here and the brief itself here. To emphasize, everyone who signed did so in a personal capacity. There’s also an amicus brief filed on behalf of Microsoft. That one speaks for the company, and focuses on a temporary restraining order against the supply chain risk designation.
I don’t plan to write more about this subject, but I will briefly explain why I signed.
In many ways, I thought the fight between Anthropic and the government was one of the more important stories in the world. There’s conflicting reporting, but very broadly, Anthropic signed an agreement with the government to deploy Claude in classified military contexts. There was then a falling out. According to some, this started because Anthropic asked questions about whether their model was used in the Maduro raid. According to others, the conflict came from the government asking a hypothetical about automated defense missiles and not liking Anthropic’s answers. The leaked Anthropic memo says the negotiations fell apart because Anthropic refused to delete a phrase about using AI in “analysis of bulk acquired data”, which covers information obtained from third-party data brokers. Whatever the reality, the relevant point is that the government no longer liked its deal with Anthropic and tried to get Anthropic to agree to an updated contract with weaker red-line protections. Anthropic said no, Pete Hegseth declared them a supply chain risk, and Anthropic filed a lawsuit challenging that designation.
Now, personally, I’m in favor of red lines on domestic surveillance and fully autonomous weapons. You may read the amicus brief if you’re curious why (it’s what I did before signing), but the short version is that I believe domestic surveillance is currently limited by the friction required to collate all the information the US government has on a given citizen. AI tooling could heavily reduce this friction and create a trivially directable surveillance apparatus much stronger than the current one. I am personally okay with a less efficient government when that loss of efficiency is in the name of preserving civil liberties, rather than for dumb reasons like outdated software. As for fully autonomous weapons, I don’t want them to exist at all. But even if you assume they should exist (and this is a big assumption), the reliability of AI is not high enough for them. The only exception I can think of right now is a situation where a missile is already in the air, and your options are to shoot it down autonomously or fail to react in time at all.
Separate from the debate over how AI should be used, I also understand that the US government has the right to decide which contracts it agrees to and to drop them if they no longer fit, and that usage restrictions which could hypothetically cede operational control may not be something the government wants. See Dean Ball’s thoughts on the subject here, the relevant point being:
The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable.
Exiting the contract was fine, but declaring Anthropic a supply chain risk was irrational and far too big an overreach. It’s an exceptionally retributive action which helps nobody: not the military, not Anthropic, and not the people. By making this move, the Department of War claims the power to attack a US tech company’s business and force the tech industry to divest from that company, just because the government doesn’t like it. It’s incoherent and baffling for the US government to promote AI on the one hand and meddle in the free market against a US AI company on the other.
Generally, I’m not wired for political debates. I don’t relish them the way Twitter people do. I’m also aware this could have repercussions for my career or the future paths open to me. (To be clear, I don’t expect it to affect much of either, but it was something I considered.) In general, I am a fan of building career capital and preserving optionality. But if you never spend that capital on things you believe in, what’s the point?
The amicus brief was broadly aligned with my thoughts on the matter, so I signed.