

What Will AI Do on August 4, 2026, When Humans Are Gone?
Hankyoreh (hani.co.kr)
March 15, 2026, 10:30 AM

Anthropic, US DoD Contract Falls Apart Over AI Safeguards

Negotiations over a $200 million contract for AI company Anthropic to supply AI models to the US Department of Defense (DoD) have collapsed. The talks broke down over conflicting demands concerning 'safeguards' against civilian surveillance and fully autonomous lethal weapons, which Anthropic treated as red lines.

Anthropic, US DoD Contract Negotiation Fails

On July 26, Dario Amodei, CEO of Anthropic, the company behind the chatbot Claude, announced that negotiations over a $200 million AI model supply contract with the US Department of Defense (DoD) had collapsed. The announcement, made amid speculation that the AI model could be used to support military operations related to Iran, caused significant ripples. Anthropic stated that it could not accept the DoD's demand to remove its 'safeguards,' leading the DoD to label the company a 'supply chain risk' and ban the use of Claude in all military-related work.

AI Ethics and the Question of Humanity's Future

The core issue in the contract's breakdown was the DoD's demand to use Anthropic's AI model for 'all legitimate purposes,' against which Anthropic presented two 'red lines': AI must not be used for mass surveillance of civilians, and it must not be used in fully autonomous lethal weapons. The incident goes beyond questions of AI accuracy, raising fundamental ethical questions about human alienation in the face of technological advancement. Much like the smart home in Ray Bradbury's short story 'There Will Come Soft Rains,' the basic question of what AI is for in a world without humans is now a pressing reality.

*Source: Hankyoreh (2026-03-15)*
