AI Firm Anthropic Seeks Weapons Expert to Prevent Misuse
Anthropic, a US artificial intelligence firm, is seeking to hire a chemical weapons and high-yield explosives expert to prevent "catastrophic misuse" of its software. The company fears its AI tools might instruct users on how to create chemical or radioactive weapons and wants an expert to ensure its safety measures are robust. The job posting requires applicants to have a minimum of five years of experience in "chemical weapons and/or explosives defense" and knowledge of "radiological dispersal devices," also known as dirty bombs.
Industry Efforts and Expert Concerns Regarding AI Risks
Anthropic is not the only AI firm adopting this strategy. ChatGPT developer OpenAI has advertised a similar position for a researcher in "biological and chemical risks," with a salary of up to $455,000. However, some experts are alarmed by this approach, warning that it supplies AI tools with information about weapons, even if the systems are instructed not to share it. Dr. Stephanie Hare, a tech researcher, questioned the safety of using AI systems to handle sensitive chemical and explosive information, including material on dirty bombs, noting the lack of international treaties or regulations governing such work.
*Source: BBC (2026-03-17)*
