Anthropic, US DoD Contract Falls Apart Over AI Safeguards
Anthropic, US DoD Contract Negotiation Fails
On July 26, Dario Amodei, CEO of Anthropic, the company behind the chatbot Claude, announced the collapse of negotiations over a $200 million contract to supply AI models to the US Department of Defense (DoD). The announcement, made amid speculation that the models could be used to support military operations related to Iran, drew widespread attention. Anthropic stated it could not accept the DoD's demand to remove 'safeguards,' prompting the DoD to label the company a 'supply chain risk' and ban the use of Claude in all military-related work.
AI Ethics and the Question of Humanity's Future
At the core of the contract breakdown was the US DoD's demand to use Anthropic's AI models for 'all legitimate purposes,' against which Anthropic drew two 'red lines': the AI must not be used for mass surveillance of civilians or in fully autonomous lethal weapons. The dispute goes beyond questions of AI accuracy, raising fundamental ethical questions about human alienation in the face of technological advancement. Much like the smart home in Ray Bradbury's short story 'There Will Come Soft Rains,' which carries on its routines in a world without people, the basic question of whom AI ultimately serves is now a pressing reality.
*Source: 한겨레 (2026-03-15)*
