AI Rules Will Never Be The Same After This Disaster


In 2016, Microsoft released an AI chatbot named Tay, designed to engage with users on Twitter and learn from its conversations.

However, within 24 hours, Tay had become notorious, spewing racist and offensive tweets.

In this video, we explore the rise and fall of Tay and the questions it raised about AI ethics and the responsibility tech companies bear to keep their products from being used for harm.

Join us as we delve into the mystery that still surrounds Tay.

=============================
Support us on Patreon ❤️❤️❤️
=============================
Join the Mysterionauts on Patreon to vote on which videos we make next and to submit video ideas.
+ A couple videos we can't show on YouTube.

https://www.patreon.com/Mysterionaut

=============================
Connect with us 🙂🙂🙂
=============================
TikTok ➡︎ https://www.tiktok.com/@mysterionaut
Insta ➡︎ https://www.instagram.com/mysterionaut
Twitter ➡︎ https://twitter.com/mysterionaut
Facebook ➡︎ https://www.facebook.com/mysterionaut
Reddit ➡︎ https://www.reddit.com/r/Mysterionaut
Rumble ➡︎ https://rumble.com/mysterionaut

=============================
What gear do we use? 🔥🔥🔥
=============================
Rode USB mini microphone ➡︎ https://amzn.to/3kmDFQF
Adobe Premiere Pro ➡︎ https://amzn.to/3Suku3I
MacBook ➡︎ https://amzn.to/3kwzeTh

Books we read:
YouTube Secrets ➡︎ https://amzn.to/3KFQvUQ
The YouTube Formula ➡︎ https://amzn.to/3KFjDvg
One Million Followers ➡︎ https://amzn.to/3kmGlOd

#mystery #paranormal #conspiracy #UFO #unexplained
#AItechnology #ethics #responsibility #Tay #Microsoft #AIchatbot #ai #chatgpt
