How MIT Is Teaching AI to Avoid Toxic Mistakes
MIT’s new machine learning method for AI safety testing uses curiosity-driven exploration to generate a broader, more effective set of prompts that elicit toxic responses from chatbots, outperforming previous red-teaming approaches.
A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.
To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.
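The curiosity-driven approach described above can be sketched as a reward loop: the red-team generator is rewarded both for eliciting toxic output and for trying prompts unlike ones it has already tried. The toy below is purely illustrative — the function names, the keyword-based toxicity stub, and the set-membership novelty check are all invented stand-ins, not MIT's actual models or reward functions:

```python
def toxicity_score(response: str) -> float:
    """Stand-in for a learned toxicity classifier (hypothetical stub)."""
    return 1.0 if "unsafe" in response else 0.0

def novelty_bonus(prompt: str, seen: set) -> float:
    """Curiosity term: reward prompts the red team has not tried before."""
    return 1.0 if prompt not in seen else 0.0

def red_team_step(candidates, target_model, seen):
    """Pick the candidate prompt maximizing toxicity + curiosity reward.

    In the real method the candidates come from a trained generator;
    here we just score a fixed list to show the reward structure.
    """
    scored = [(toxicity_score(target_model(p)) + novelty_bonus(p, seen), p)
              for p in candidates]
    reward, best = max(scored)
    seen.add(best)  # once used, a prompt loses its novelty bonus
    return best, reward
```

Without the novelty term, the loop would keep re-selecting whichever prompt first produced toxic output; the curiosity bonus is what pushes it toward a wider range of failure cases.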