WASHINGTON – A US appeals court on April 8 denied Anthropic’s request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the artificial intelligence start-up’s legal battle with the Department of War to be put on a fast track.
“On one side is relatively contained risk of financial harm to a single private company,” the three-member appellate panel here reasoned.
“On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.”
The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk – a label typically reserved for organisations from unfriendly foreign countries.
The AI start-up sought a stay of the action in appellate court here and also sued the Department of War in federal court in Northern California.
The appellate panel stated in its ruling that requiring the Department of War to prolong its use of Anthropic AI directly or through contractors “strikes us as a substantial judicial imposition on military operations”.
However, the appeals court agreed that Anthropic raised “substantial challenges” to the sanctions and ordered that proceedings in the underlying case be expedited.
“We’re grateful the court recognised these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful,” an Anthropic spokesperson told AFP.
“While this case was necessary to protect Anthropic, our customers and our partners, our focus remains on working productively with the government to ensure all Americans benefit fro...