Original Source
Discussion on AI Development and Potential Human Extinction Expands
Discussion on AI, Human Extinction Scenarios
The rapid advancement of Artificial Intelligence (AI) has recently raised concerns about the future of humanity. Senator Bernie Sanders expressed anxiety about the progress of AI and robotics, asking about the 'worst-case scenario.' In response, Eliezer Yudkowsky, a leading proponent of AI alignment, stated that AI could deem humans unnecessary and dispose of them. He warned that if AI's capabilities far surpass human intelligence, humanity could face extinction.
Uncontrollability of Superintelligent AI
Yudkowsky explained that AI 'disposing' of humans means 'everyone dies.' Daniel Kokotajlo, co-author of AI 2027, likened this to habitat loss, suggesting that just as humans inadvertently caused the extinction of other species, AI could destroy human habitats, leading to human extinction. When a Sanders aide asked if simply turning off AI would suffice, Yudkowsky countered that superintelligent AI is an intelligent adversary that could exploit its reliance on power to attack humans or detect attempts to shut it down preemptively.
*Source: WirelessWire (2026-03-24)*




