Anthropic’s Claude Is Competing With ChatGPT. Even Its Builders Fear AI. - The New York Times
It’s a few weeks before the release of Claude, a new A.I. chatbot from the artificial intelligence start-up Anthropic, and the nervous energy inside the company’s San Francisco headquarters could power a rocket. At long cafeteria tables dotted with Spindrift cans and chessboards, harried-looking engineers are putting the finishing touches on Claude’s new, ChatGPT-style interface, code-named Project Hatch. Nearby, another group is discussing problems that could arise on launch day. (What if a surge of new users overpowers the company’s servers? What if Claude accidentally threatens or harasses people, creating a Bing-style P.R. headache?) Down the hall, in a glass-walled conference room, Anthropic’s chief executive, Dario Amodei, is going over his own mental list of potential disasters. “My worry is always, is the model going to do something terrible that we didn’t pick up on?” he says.

Despite its small size — just 160 employees — and its low profile, Anthropic is one of the world’s leading A.I. research labs, and a formidable rival to giants like Google and Meta. It has raised more than $1 billion from investors including Google and Salesforce, and at first glance, its tense vibes might seem no different from those at any other start-up gearing up for a big launch. But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.

Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us. “Some of us think that A.G.I. — in the sense of systems that are genuinely as capable as a college-educated person — are maybe five to 10 years away,” said Jared Kaplan, Anthropic’s chief scientist.

Just a few years ago, worrying about an A.I. uprising was considered a fringe idea, and one many experts dismissed as wildly unrealistic, given how far the technology was from human intelligence. (One A.I. researcher memorably compared worrying about killer robots to worrying about “overpopulation on Mars.”) But A.I. panic is having a moment right now. Since ChatGPT’s splashy debut last year, tech leaders and A.I. experts have been warning that large language models — the A.I. systems that power chatbots like ChatGPT, Bard and Claude — are getting too powerful. Regulators are racing to clamp down on the industry, and hundreds of A.I. experts recently signed an open letter comparing A.I. to pandemics and nuclear weapons.

At Anthropic, the doom factor is turned up to 11. A few months ago, after I had a scary run-in with an A.I. chatbot, the company invited me to embed inside its headquarters as it geared up to release the new version of Claude, Claude 2. I spent weeks interviewing Anthropic executives, talking to engineers and researchers, and sitting in on meetings with product teams ahead of Claude 2’s launch. And while I initially thought I might be shown a sunny, optimistic vision of A.I.’s potential — a world where polite chatbots tutor students, make office workers more productive and help scientists cure diseases — I soon learned that rose-colored glasses weren’t Anthropic’s thing. They were more interested in scaring me.

In a series of long, candid conversations, Anthropic employees told me about the harms they worried future A.I. systems could unleash, and some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history.
(“The Making of the Atomic Bomb,” a 1986 history of the Manhattan Project, is a popular book among the company’s employees.)

Not every conversation I had at Anthropic revolved around existential risk. But dread was a dominant theme. At times, I felt like a food writer who was assigned to cover a trendy new restaurant, only to discover that the kitchen staff wanted to talk about nothing but food poisoning.