The 'AI Apocalypse' Is Just PR - The Atlantic
Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.

Illustration by Joanne Imperio / The Atlantic

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their products’ harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them from criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.
Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.” But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquel...