Artificial intelligence has leapt from speculative theory to everyday tool with astonishing speed, promising breakthroughs in science and medicine and transforming how we learn, live, and work. But to some of its earliest researchers, the race toward superintelligence represents not progress but an existential threat, one that could end humanity as we know it.
Eliezer Yudkowsky and Nate Soares, authors of If Anyone Builds It, Everyone Dies, join Oren to debate their claim that the race to build superintelligent AI will end in human extinction. During the conversation, a skeptical Oren presses them on whether meaningful safeguards are possible, what a realistic boundary between risk and progress might look like, and how society should weigh the costs of stopping against the consequences of carrying on.