Ten AI dangers you can't ignore
What are the risks of AI? And what AI dangers should we be aware of? Risk Bites looks at ten potential consequences that we should be paying attention to if we want to ensure responsible AI.
While AI may not be on the brink of sentience quite yet, the technology is developing at breakneck speed -- so much so that a group of experts recently called for a pause on "giant AI experiments" until we collectively have a better idea of how to navigate potential risks as we take advantage of the benefits. And while technologies like ChatGPT may still be some way from human-level intelligence (or similar), the emerging risks are very real, and potentially catastrophic if they are not addressed effectively.
This video is intended to provide an initial introduction to some of the more prominent risks -- it avoids jargon and intentionally draws on humor to make the challenges understandable and accessible. But this should not diminish the urgency of those challenges, or the expertise underlying the ideas that are presented.
There is a growing urgency around the need to take a transdisciplinary approach to navigating the risks of AI, and one that draws on expertise from well beyond the confines of computer science and AI development. And while the ten risks may come across as simple in the video, they represent challenges that are stretching the understanding of some of the world's top experts, from technological dependency and job replacement, to algorithmic bias, value misalignment, and heuristic manipulation (the full list is below).
Please do use the video and share it with anyone who may find it useful or helpful. And if you would like a standalone copy, please reach out to the producer of Risk Bites, Andrew Maynard.
Finally, this is an updated version of a Risk Bites video published in 2018. While the themes have remained the same over the past five years, the pace of development around AI has changed substantially -- including the emergence of large language models and ChatGPT. These are reflected in the updated video.
CONTENTS:
0:00 Introduction
1:18 Technological dependency
1:38 Job replacement and redistribution
1:56 Algorithmic bias
2:15 Non-transparent decision making
2:40 Value misalignment
2:58 Lethal Autonomous Weapons
3:13 Re-writable goals
3:26 Unintended consequences of goals and decisions
3:47 Existential risk from superintelligence
4:09 Heuristic manipulation
4:28 Responsible AI
There are many other potential risks associated with AI, but as always with risk, the more important questions concern the nature, context, type, and magnitude of each risk's impact, together with the relevant benefits and tradeoffs.