AI Won't Really Kill Us All, Will It? - The Atlantic



For months, more than a thousand researchers and technology experts involved in creating artificial intelligence have been warning us that they’ve created something that may be dangerous, something that might eventually lead humanity to become extinct. In this Radio Atlantic episode, The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel talk about how seriously we should take these warnings, and what else we might consider worrying about.

The following transcript has been edited for clarity.

Hanna Rosin: I remember when I was a little kid being alone in my room one night watching this movie called The Day After. It was about nuclear war, and for some absurd reason, it was airing on regular network TV.

The Day After: Denise: It smells so bad down here. I can’t even breathe! Denise’s mom: Get ahold of yourself, Denise.

Rosin: I particularly remember a scene where a character named Denise—my best friend’s name was Denise—runs panicked out of her family’s nuclear-fallout shelter.

The Day After: Denise: Let go of me. I can’t see! Mom: You can’t go! Don’t go up there! Brother: Wait a minute!

Rosin: It was definitely, you know, “extra.” Also, to teenage me, genuinely terrifying. It was a very particular blend of scary ridiculousness I hadn’t experienced since—until a couple of weeks ago, when someone sent me a link to this YouTube video with Paul Christiano, who is an artificial-intelligence researcher.

Paul Christiano: The most likely way we die is not that AI comes out of the blue and kills us, but involves that we’ve deployed AI everywhere. And if, God forbid, they were trying to kill us, they would definitely kill us.

Rosin: Christiano was talking on this podcast called Bankless. And then I started to notice other major AI researchers saying similar things:

Norah O’Donnell on CBS News: More than 1,300 tech scientists, leaders, researchers, and others are now asking for a pause.

Bret Baier on Fox News: Top story right out of a science-fiction movie.

Rodolfo Ocampo on 7NEWS Australia: Now it’s permeating the cognitive space. Before, it was more the mechanical space.

Michael Usher on 7NEWS Australia: There needs to be at least a six-month stop on the training of these systems.

Fox News: Contemporary AI systems are now becoming human-competitive.

Yoshua Bengio talking with Tom Bilyeu: We have to get our act together.

Eliezer Yudkowsky on the Bankless podcast: We’re hearing the last winds begin to blow, the fabric of reality start to fray.

Rosin: And I’m thinking, Is this another campy Denise moment? Am I terrified? Is it funny? I can’t really tell, but I do suspect that the very “doomiest” stuff, at least, is a distraction. There are likely some actual dangers with AI that are less flashy but maybe equally life-altering. So today we’re talking to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel, who’ve been researching and tracking AI for some time.

___

Rosin: Charlie, Adrienne—when these experts are saying, “Worry about the extinction of humanity,” what are they actually talking about?

Adrienne LaFrance: Let’s game out the existential doom, for sure. [Laughter.]

Rosin: Thanks!

LaFrance: When people warn about the extinction of humanity at the hands of AI, that’s literally what they mean—that all humans will be killed by the machines. It sounds very sci-fi. But the nature of the threat is that you imagine a world where, more and more, we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans. Obviously, humans are flawed. The fear assumes a moment at which AI’s cognitive abilities eclipse our species’—and so, all of a sudden, AI is really in charge of the biggest and most consequential decisions that humans make. You can imagine they’re making decisions in wartime about when to deploy nuclear weapons—and you could very easily imagine how that could go sideways.

Rosin: Wait; but I can’t very easily imagine how that would go sideways. First of all, wouldn’t a human put in many checks before you would give access to a machine?

LaFrance: Well, one would hope. But one example would be that you give the AI the imperative to “Win this war, no matter what...
