Microsoft has proven its commitment to responsible AI development with a historic court brief in support of Anthropic’s legal battle against the Pentagon’s supply-chain risk designation, filing in a San Francisco federal court to call for a temporary restraining order. The brief demonstrates that Microsoft is willing to challenge its own largest customer, the US government, over the governance of artificial intelligence. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, making this a defining moment for the technology industry’s stance on AI ethics.
The dispute arose from a $200 million contract negotiation that broke down after Anthropic refused to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, triggering the cancellation of its government contracts. Anthropic filed two simultaneous lawsuits in California and Washington DC, arguing the designation was unconstitutional and unprecedented.
Microsoft’s brief is anchored in its direct use of Anthropic’s technology in federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft publicly stated that the government and the technology sector need to work together to ensure advanced AI serves national security without crossing ethical lines.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. The Pentagon’s technology chief publicly ruled out any possibility of renewed negotiations.
Congressional Democrats have separately asked the Pentagon whether AI was used in a strike in Iran that reportedly killed over 175 civilians at an elementary school, raising questions about AI targeting tools and human oversight. Their formal inquiries are adding legislative urgency to an already extraordinary confrontation. Microsoft’s brief, the industry coalition behind it, and the congressional pressure together represent the most comprehensive challenge to a government AI decision in US history.
Microsoft Proves Its AI Commitment With Historic Court Brief Backing Anthropic Against Pentagon Pressure
Picture Credit: Rawpixel (Public Domain)
