Artificial intelligence could lead to extinction, experts warn - BBC
Image caption: A protester outside a London event at which Sam Altman spoke

By Chris Vallance, Technology reporter

Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google DeepMind - have warned.

Dozens have supported a statement published on the webpage of the Centre for AI Safety. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.

But others say the fears are overblown.

Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the statement.

The Centre for AI Safety website suggests a number of possible disaster scenarios:

- AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons
- AI-generated misinformation could destabilise society and "undermine collective decision-making"
- The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"
- Enfeeblement, where humans become dependent on AI, "similar to the scenario portrayed in the film Wall-E"

Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call. Yoshua Bengio, professor of computer science at the University of Montreal, also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".

Media caption: Watch: AI 'godfather' Geoffrey Hinton told the BBC earlier this month of AI dangers as he quit Google

'Fracturing reality'

Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in existing systems, which is already a problem.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialise. As a result, it's distracted attention away from the near-term harms of AI."

Elizabeth Renieris, a senior research associate at Oxford's Institute for Ethics in AI, told BBC News she worried more about risks closer to the present.

"Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable," she said. They would "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide".

Many AI tools essentially "free ride" on the "whole of human experience to date", Ms Renieris said. Many are trained on human-created content - text, art and music - which they can then imitate, and their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".
But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns "shouldn't be viewed antagonistically". "Addressing some of the issues today can be useful for addressing many of the later risks tomorrow," he said.

Superintelligence efforts

Media coverage of the supposed "existential" threat from AI has snowballed since March 2023, when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology. That letter asked whether we should "develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us".

In contrast, the new campaign has a very short statement, designed to "open up discussion". The statement compares the risk to that posed by nuclear war.

In a blog...