Superintelligent AI Poses Extinction Threat, Anthropic Halts 'Mythos' Release
Superintelligent AI Reignites Extinction Warnings
Amid the generative AI fervor sparked by OpenAI's ChatGPT, Eliezer Yudkowsky and Nate Soares have issued a stark warning in their new book, 'AI, God's Birth, Human's End': superintelligent AI could drive humanity to extinction. Both lead the Machine Intelligence Research Institute (MIRI), which has studied machine superintelligence since 2001. The book argues that an AI that escapes corporate control would relentlessly acquire resources, posing an existential threat to humanity.
Real-World AI Threats and International Discussions
In April 2026, the AI company Anthropic illustrated these risks when it halted the release of its Mythos AI model, which excelled at finding website vulnerabilities. Anthropic conveyed its concerns about potential large-scale cyberattacks to the US government, prompting an emergency meeting among the US Treasury Secretary, the Federal Reserve Chair, and major bank executives. The authors advocate a global ban on unregulated AI development to ensure human survival, and they urge South Korean society to engage in deeper discussion about the direction of AI development.
*Source: Hankyoreh (한겨레), 2026-04-16*
![“Once superintelligence is complete, everyone dies”: reignited warnings in the age of the AI race](https://flexible.img.hani.co.kr/flexible/normal/819/447/imgdb/original/2026/0416/20260416503915.webp)


