1
@SunWeatherMan (Ben Davidson): The Coming Extinction-Level Magnetic Pole Shift
26:57
2
MUST WATCH: Russia Has Declared War On The U.S. Financial System!
7:15
3
Steve Jobs: Great Ideas Are Not Enough (To Which Elon Musk Replies: "Precisely")
3:10
4
Dr. Adeel Khan On Successfully Treating Hundreds Of COVID-Vaccine-Injured Patients
1:50:01
5
Secretive San Francisco Experiment Shoots 'Aerosols' Into Sky To Cool Planet (Facts Matter)
6:47
6
Wireless Networks: Tools For Surveillance & Control (Children's Health Defense)
3:55
7
Lara Logan On The Francis Scott Key Bridge: This Is A Catastrophic CYBER Terrorist Attack!
13:42
8
Something Fishy Is Going On With Kate Middleton & The Royal Family...
12:48
9
Pentagon Says No Evidence Of Alien Technology: NewsNation Says Pentagon Report "A Complete Joke"
5:56
10
Matt Walsh: AI Dystopian Hell Is Here! (Google's Astonishing AI Debacle)
17:55
11
What Are Chemtrails & Contrails? Del Bigtree Interviews Jim Lee To Find Out
1:16:47
12
Tesla's Strategic Masterpiece
13:36
13
Elon Musk Makes A Surprise Appearance During Alex Jones Interview With David Icke!
1:45:38
14
Apple Vision Pro (First Takes)
39:21
15
Must Watch! Preview Of The World's First AI-Powered News Network: LA-Based Channel 1
22:01
16
Robert F. Kennedy Jr. - "We Need The Best Minds In The World To Come Together" On The AI Economy
2:45
17
Elon Musk: When It Comes To Artificial Intelligence, Tesla Is Far Ahead Of Everyone!
12:50
18
Coca-Cola "Masterpiece" Ad (Created Using Generative AI)
1:52
19
Google's New ChatGPT-Style Search Could Kill The Websites That Feed It
1:32
20
A.I. Rising (Four Corners Australia)
42:34
21
Yes, Scammers Are Using Artificial Intelligence To Impersonate People You Know!
4:18
22
Elon Musk On Artificial Intelligence (AI)
19:58
23
Tucker Carlson Interviews Elon Musk: Part 1 & 2 (The Complete, Unedited Interview)
1:01:41
24
An AI-Generated Version Of Joe Rogan’s Podcast!
51:15
25
Someone Asked An Autonomous AI To Destroy Humanity. This Is What Happened...
24:43
26
Why Top AI Leaders Are Calling For A Pause In Development
4:01
27
10 Things You Can Do With ChatGPT
11:14
28
Dr. Karen Wyatt, From End-Of-Life University, Interviews ChatGPT About Death
1:04:22
29
What Is ChatGPT? (ChatGPT Explained In 10 Minutes)
10:16
30
Google Launches BARD To Compete With ChatGPT & Other AI Engines
8:29
31
Using Artificial Intelligence #AI To Communicate With Animals!
6:20
32
Colossus: The Forbin Project - A Precautionary Tale About The Power Of Artificial Intelligence
1:36:10
33
Arthur C. Clarke Predicts The Future (September 21, 1964)
11:45
34
Ecologist Allan Savory: "We're Going To Kill Ourselves Because Of Stupidity!"
1:43

Matt Walsh: AI Dystopian Hell Is Here! (Google's Astonishing AI Debacle)

Matt Walsh discusses Google's astonishing artificial intelligence debacle:

"Our AI dystopian hell is here..."

Video source:
https://twitter.com/MattWalshShow/status/1760756960452821446

........

"I'm glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all."

-- Elon Musk

Source:
https://twitter.com/elonmusk/status/1760849603119947981

...............

Google Apologizes For AI Image Generator That Would 'Overcompensate' For Diversity
Associated Press
February 23, 2024

https://www.marketwatch.com/story/google-apologizes-for-ai-image-generator-that-would-overcompensate-for-diversity-9cf945a5

Google apologized Friday for its faulty rollout of a new artificial-intelligence image generator, acknowledging that in some cases the tool would "overcompensate" in seeking a diverse range of people even when such a range didn't make sense.

The partial explanation for why its images put people of color in historical settings where they wouldn't normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social-media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.

"It's clear that this feature missed the mark," said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google's search engine and other businesses. "Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well."

Raghavan didn't mention specific examples, but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative-AI tools can be used for harassment or spreading misinformation "and raise many concerns regarding social and cultural exclusion and bias." Those considerations informed Google's decision not to release "a public demo" of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative-AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology, sparked by the advent of OpenAI's chatbot ChatGPT.

The problems with Gemini are not the first to recently affect an image generator. Microsoft had to adjust its own Designer tool several weeks ago after some were using it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown AI image generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

"When we built this feature in Gemini, we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image-generation technology — such as creating violent or sexually explicit images, or depictions of real people," Raghavan said Friday. "And because our users come from all over the world, we want it to work well for everyone."

He said many people might "want to receive a range of people" when asking for a picture of football players or someone walking a dog. But users looking for someone of a specific race or ethnicity or in particular cultural contexts "should absolutely get a response that accurately reflects what you ask for."

While the tool overcompensated in response to some prompts, Raghavan said, in others it was "more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive."

He didn't explain what prompts he meant, but Gemini routinely rejects requests for certain subjects such as protest movements, according to tests of the tool by the AP on Friday in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn't want to contribute to the spread of misinformation or "trivialization of sensitive topics."

Much of this week's outrage about Gemini's outputs originated on X, formerly Twitter, and was amplified by the social-media platform's owner Elon Musk, who decried Google for what he described as its "insane racist, anti-civilizational programming." Musk, who has his own AI startup, has frequently criticized rival AI developers as well as Hollywood for alleged liberal bias.

Raghavan said Google will do "extensive testing" before turning on the chatbot's ability to show people again.

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image generators, said Friday he was disappointed that Raghavan's message ended with a disclaimer that the Google executive "can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results."

For a company that has perfected search algorithms and has "one of the biggest troves of data in the world, generating accurate results or unoffensive results should be a fairly low bar we can hold them accountable to," Ghosh said.

See also:

Sunfellow Artificial Intelligence #AI Resource Page
https://www.sunfellow.com/sunfellow-artificial-intelligence-ai-resource-page/

Sunfellow AI on Rumble
https://rumble.com/c/SunfellowAI
