
AI firm Anthropic seeks weapons expert to stop users from 'misuse'
BBC (bbc.com)
17 March 2026, 09:08 AM


AI company Anthropic is hiring a chemical weapons and high-yield explosives expert to prevent "catastrophic misuse" of its AI systems, amid fears the technology could provide users with instructions for making weapons.

AI Firm Seeks Expert to Prevent Misuse

Anthropic, a US artificial intelligence firm, is seeking to hire a chemical weapons and high-yield explosives expert to prevent "catastrophic misuse" of its software. The company fears its AI tools might instruct users on how to create chemical or radioactive weapons and wants an expert to ensure its safety measures are robust. The job posting requires applicants to have a minimum of five years of experience in "chemical weapons and/or explosives defense" and knowledge of "radiological dispersal devices," also known as dirty bombs.

Industry Efforts and Expert Concerns Regarding AI Risks

Anthropic is not the only AI firm adopting this strategy. ChatGPT developer OpenAI has advertised a similar position for a researcher in "biological and chemical risks," with a salary of up to $455,000. Some experts, however, are alarmed by the approach, warning that it means AI systems hold weapons-related information, even if they are instructed not to share it. Dr. Stephanie Hare, a technology researcher, questioned whether it is safe for AI systems to handle sensitive chemical and explosives information, including details of dirty bombs, noting the absence of international treaties or regulations governing such work.


