Do AI generated images have racial blind spots? See an example - The Boston Globe
But last week, the output she got using one startup’s tool stood out from the rest.

On Friday, Wang uploaded a picture of herself smiling and wearing a red MIT sweatshirt to an image creator called Playground AI, and asked it to turn the image into “a professional LinkedIn profile photo.” In just a few seconds, it produced an image that was nearly identical to her original selfie — except Wang’s appearance had been changed. It made her complexion appear lighter and her eyes blue, “features that made me look Caucasian,” she said.

“I was like, ‘Wow, does this thing think I should become white to become more professional?’” said Wang, who is Asian American.

The photo, which gained traction online after Wang shared it on Twitter, has sparked a conversation about the shortcomings of artificial intelligence tools when it comes to race. It even caught the attention of the company’s founder, who said he hoped to solve the problem. Now, she thinks her experience with AI could be a cautionary tale for others using similar technology or pursuing careers in the field.

Wang’s viral tweet came amid a recent TikTok trend where people have been using AI products to spiff up their LinkedIn profile photos, creating images that put them in professional attire and corporate-friendly settings with good lighting.

Wang admits that, when she tried using this particular AI, at first she had to laugh at the results. “It was kind of funny,” she said. But it also spoke to a problem she’s seen repeatedly with AI tools, which can sometimes produce troubling results when users experiment with them.

To be clear, Wang said, that doesn’t mean the AI technology is malicious. “It’s kind of offensive,” she said, “but at the same time I don’t want to jump to conclusions that this AI must be racist.”

Experts have said that AI bias can exist under the surface, a phenomenon that’s been observed for years. The troves of data used to deliver results may not always accurately reflect various racial and ethnic groups, or may reproduce existing racial biases, they’ve said. Research — including at MIT — has found so-called AI bias in language models that associate certain genders with certain careers, or in oversights that cause facial recognition tools to malfunction for people with dark skin.

Wang, who double-majored in mathematics and computer science and is returning to MIT in the fall for a graduate program, said her widely shared photo may have just been a blip, and it’s possible the program randomly generated the facial features of a white woman. Or, she said, it may have been trained using a batch of photos in which a majority of people depicted on LinkedIn or in “professional” scenes were white.

It has made her think about the possible consequences of a similar misstep in a higher-stakes scenario, like if a company used an AI tool to select the most “professional” candidates for a job, and if it would lean toward people who appeared white. “I definitely think it’s a problem,” Wang said. “I hope people who are making software are aware of these biases and thinking about ways to mitigate them.”

The people responsible for the program were quick to respond. Just two hours after she tweeted her photo, Playground AI founder Suhail Doshi replied directly to Wang on Twitter. “The models aren’t instructable like that so it’ll pick any generic thing based on the prompt. Unfortunately, they’re not smart enough,” he wrote in response to Wang’s tweet.

“Happy to help you get a result but it takes a bit more effort than something like ChatGPT,” he added, referring to the popular AI chatbot, which produces large batches of text in seconds with simple commands. “[For what it’s worth], we’re quite displeased with this and hope to solve it.”

In additional tweets, Doshi said Playground AI doesn’t “support the use-case of AI photo avatars” and that it “definitely can’t preserve identity of a face and restylize it or fit it into another scene like” Wang had hoped.

Reached by e-mail, Doshi declined to be interviewed. Instead, he replied to a list of questions with a question of his own: “If I roll a dice just once and get the number 1, does that mean I will always get the number 1? Should I conclude based on a single observation that the dice is biased to the number 1 and was trained to b...
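Doshi’s dice analogy is, at bottom, a point about sample size: a single generation cannot by itself distinguish systematic bias from chance, which is also why Wang allowed that her photo “may have just been a blip.” The snippet below is a minimal illustrative sketch, not code from Playground AI or the Globe, showing why a pattern only becomes evidence once it persists across many independent trials.

```python
# Illustrative sketch of the dice analogy (hypothetical example, not from the article):
# one roll of a fair six-sided die lands on any given face about 1/6 of the time,
# so a single observation says almost nothing about whether the die is loaded.
# Only repeated trials reveal a persistent skew.

import random
from collections import Counter


def roll_fair_die(n_trials: int, seed: int = 0) -> Counter:
    """Simulate n_trials rolls of a fair six-sided die and tally the faces."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) for _ in range(n_trials))


if __name__ == "__main__":
    # With one roll, any face can come up; "evidence" of bias here is pure noise.
    print("1 roll:", dict(roll_fair_die(1)))

    # With many rolls, the empirical frequencies converge toward 1/6 (~0.167) each.
    # A face that consistently appeared far more often would be a real signal.
    n = 10_000
    counts = roll_fair_die(n)
    for face in range(1, 7):
        print(f"face {face}: {counts[face] / n:.3f}")
```

The same logic carries over to image generators: one output that lightens a user’s skin could be chance, while the same shift recurring across many prompts and users would point to a skew in the training data of the kind the experts quoted above describe.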