‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images - CNN
[Video: Will 2024 be America’s A.I. election? Source: CNN]

New York CNN — For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI-generated content start to surface on social media, and we’re just not prepared for it.”

McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature to verify whether the image is organic or has been manipulated or generated by AI.

Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

“When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.
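The point-of-capture approach described above can be sketched roughly: hash the image content, bundle the hash with the capture metadata, and sign the bundle so that any later change to the pixels or the metadata is detectable. The sketch below is purely illustrative and is not Truepic’s actual implementation; it uses an HMAC with a device-held key as a simple stand-in for the asymmetric signature a production provenance system (such as a C2PA-style manifest) would use, and all names and values are hypothetical.

```python
import hashlib
import hmac
import json

# Assumption: each capture device holds a secret key provisioned at manufacture.
DEVICE_KEY = b"secret-provisioned-at-manufacture"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind capture metadata to the image content with a signature."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # e.g. date, time, location, device model
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(DEVICE_KEY, canonical, hashlib.sha256).hexdigest()
    return payload

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Re-derive the signature; any edit to pixels or metadata breaks it."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    if hashlib.sha256(image_bytes).hexdigest() != claimed["image_sha256"]:
        return False
    canonical = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

record = sign_capture(b"...raw pixels...",
                      {"device": "PhoneCam 3", "time": "2023-06-05T10:00Z"})
assert verify_capture(b"...raw pixels...", record)         # untouched image verifies
assert not verify_capture(b"...edited pixels...", record)  # any manipulation fails
```

The key design point is that verification requires no access to the original scene: the record travels with the image, and a mismatch anywhere in the chain flags the file as altered after capture.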
Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to detect AI-generated writing (rather than images) is “imperfect,” and warned it should be “taken with a grain of salt.”

“This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.” “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

Companies are broadly taking two approaches to the issue. One relies on programs that identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature. Reality Defender and Hive Moderation are working on the former.
With their platforms, users can upload existing images to be scanned and receive an instant breakdown: a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data. Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerato...