
‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images - CNN
New York CNN — For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

But according to Jeffrey McGregor, the CEO of Truepic, it is "truly the tip of the iceberg of what's to come." As he put it, "We're going to see a lot more AI-generated content start to surface on social media, and we're just not prepared for it."

McGregor's company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature to verify whether the image is organic or whether it has been manipulated or generated by AI.

Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image-generation tools like Dall-E and Midjourney. McGregor says the company is now seeing interest from "anyone that is making a decision based off of a photo," from NGOs to media companies to insurance firms looking to confirm that a claim is legitimate.

"When anything can be faked, everything can be fake," McGregor said. "Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we're online."

Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump being arrested were widely shared shortly before he was indicted.
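The point-of-capture approach Truepic Lens takes can be illustrated in broad strokes: hash the image bytes, bundle the hash with capture metadata, and sign the bundle so that any later edit is detectable. The sketch below is a toy illustration of that idea, not Truepic's implementation; it uses an HMAC with a placeholder key where a real system (such as one following the C2PA standard) would use per-device certificate-based asymmetric signatures, and all function and field names are assumptions.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical device-provisioned secret; real systems use asymmetric keys
# held in device hardware, not a shared constant.
SIGNING_KEY = b"device-provisioned-secret"


def sign_capture(image_bytes: bytes, device: str, lat: float, lon: float) -> dict:
    """Bundle a hash of the pixels with capture metadata and sign the bundle."""
    manifest = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device": device,
        "location": [lat, lon],
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the signature; any change to pixels or metadata breaks it."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest["signature"], expected)
```

The design point is that the signature covers both the pixels and the metadata: an AI-generated image never receives a capture-time signature at all, and a manipulated image fails verification.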
Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to "put in place technology to recognize such content and clearly label this to users."

A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what's real and what's not.

But as AI technology develops faster than humans can keep up, it's unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is "imperfect," and warned that it should be "taken with a grain of salt."

"This is about mitigation, not elimination," Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told CNN. "I don't think it's a lost cause, but I do think that there's a lot that has to get done." The hope, Farid said, is to get to a point where "some teenager in his parents' basement can't create an image and swing an election or move the market half a trillion dollars."

Companies are broadly taking two approaches to address the issue. One relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature. Reality Defender and Hive Moderation are working on the former.
With their platforms, users can upload existing images to be scanned and receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data. Reality Defender, which launched before "generative AI" became a buzzword and was part of competitive Silicon Valley tech accelerato...
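Detection services of this kind typically run an upload through several independent detector models and condense their outputs into the single percentage shown to the user. Neither company's actual pipeline is public in this article, so the sketch below is a minimal, assumed illustration of that aggregation step: detector names and the simple averaging are hypothetical stand-ins for a real ensemble.

```python
def ai_likelihood(detector_scores: dict) -> dict:
    """Combine per-detector probabilities (each 0.0-1.0 that the image is
    AI-generated) into a single percentage and a human-readable verdict.

    The plain average below is an illustrative assumption; production
    systems would weight and calibrate their detectors.
    """
    if not detector_scores:
        raise ValueError("need at least one detector score")
    avg = sum(detector_scores.values()) / len(detector_scores)
    pct = round(100 * avg, 1)
    verdict = "likely AI-generated" if pct >= 50 else "likely real"
    return {"ai_generated_pct": pct, "verdict": verdict}
```

For example, two hypothetical detectors scoring an upload at 0.92 and 0.88 would yield a 90.0% likelihood of being AI-generated.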