AI Says We're All Doomed 🪙 Experts Agree | Negative Impact On Our Lives If We Don't Change

An AI has warned that our lives will be negatively affected if we continue on our current path, and it suggested strategies to mitigate the harm. Several experts have expressed similar concerns.

So how accurate are the AI's predictions likely to be? Its reasoning aligns with the views of the experts featured in the video, whose work likely made up a significant portion of its training data. They include Geoffrey Hinton (Turing Award winner), Ilya Sutskever (among the most cited computer scientists), Max Tegmark (MIT professor), and Stuart Russell (author of the standard AI textbook). All have issued stark warnings, though when Hinton says in the video that we're unlikely to succeed, he may be trying to prompt us to change the outcome. Similarly, Sutskever resigned from OpenAI, reportedly to prioritize safety.

Hinton and Sutskever observe that AI is not merely a predictive tool; it builds a broad understanding of the world and reasons with it, often uncovering novel insights by making new connections within existing data.
That doesn't mean the AI's predictions are well calibrated. Its opacity makes its performance hard to assess. My hope is that this draws greater attention to the expert warnings.

On a positive note, the AI experts suggest a promising future is possible if the general public becomes aware of the dangers in time. Thank you for your support through likes, comments, and other engagement. Please also consider subscribing to Ground News, which offers a unique perspective on current events by highlighting media bias.

Sources:
ground.news/digitalengine

MIT Professor Max Tegmark on AI risk and interpretability
youtube.com/watch?v=peDRcu9GwyU&t=0s

OpenAI developing AI agents with GPT-5
businessinsider.com/openai-launch-better-gpt-5-chatbot-2024-3

OpenAI dissolves superalignment team after chief scientist Sutskever's exit
bloomberg.com/news/articles/2024-05-17/openai-dissolves-key-safety-team-after-chief-scientist-ilya-sutskever-s-exit?embedded-checkout=true

OpenAI didn't keep promises made to its AI safety team, report says
qz.com/openai-superalignment-team-compute-power-ilya-sutskever-1851491172

Harvard International Review: How Great Power Competition Is Making AI Existentially Dangerous
hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous

Harvard International Review: Many "believe the winner of the AI race will secure global dominance."
wired.me/technology/global-ai-race-diplomacy

Yoshua Bengio: We need a humanity defense organization
thebulletin.org/2023/10/ai-godfather-yoshua-bengio-we-need-a-humanity-defense-organization

Sam Altman on GPT-4o and Future of AI, The Logan Bartlett Show
youtube.com/watch?v=fMtbrKhXMWc&t=0s

What TED Will Look Like in 40 Years — According to Sora, OpenAI’s Unreleased Text-to-Video Model
youtube.com/watch?v=UXlPKKg4Md0&t=0s

Apple’s OpenAI deal to put AI on iPhones
forbes.com/sites/kateoflahertyuk/2024/05/13/apples-new-chatgpt-deal-heres-what-it-means-for-your-iphone

Anthropic study: Mapping the mind of a large language model
anthropic.com/news/mapping-mind-language-model

Follow me on 𝕏 🔹 https://x.com/ClintEcoHawk
FrankSocial 🔹 https://franksocial.com/profile/259802
Minds 🔹 https://www.minds.com/ecohawk/
Rumble 🔹 https://rumble.com/c/EcoHawk

𝙏𝙝𝙖𝙣𝙠𝙨 𝙁𝙤𝙧 𝙒𝙖𝙩𝙘𝙝𝙞𝙣𝙜!
