‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images - CNN
New York CNN — For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

But according to Jeffrey McGregor, the CEO of Truepic, it is "truly the tip of the iceberg of what's to come." As he put it, "We're going to see a lot more AI-generated content start to surface on social media, and we're just not prepared for it."

McGregor's company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify whether the image is organic or whether it has been manipulated or generated by AI.

Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from "anyone that is making a decision based off of a photo," from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

"When anything can be faked, everything can be fake," McGregor said. "Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we're online."

Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.
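The point-of-capture approach described above can be sketched in a few lines: hash the image bytes, bind them to the capture metadata, and sign the result so that any later change to pixels or metadata invalidates the signature. This is a minimal illustration of the general idea, not Truepic's actual implementation; a real system would use an asymmetric device key (e.g. an X.509 certificate provisioned in hardware), but HMAC-SHA256 stands in here so the example is self-contained, and the key and field names are assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret; a production system would use an
# asymmetric key pair provisioned at manufacture, not a shared secret.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind capture metadata (date, time, location, device) to the pixels."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Re-derive the signature; any pixel or metadata change breaks it."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```

The design point is that the signature covers both the content hash and the metadata together, so an attacker cannot swap in AI-generated pixels or relocate a genuine photo without detection.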
Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to "put in place technology to recognize such content and clearly label this to users."

A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what's real and what's not.

But as AI technology develops faster than humans can keep up, it's unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is "imperfect," and warned it should be "taken with a grain of salt."

"This is about mitigation, not elimination," Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told CNN. "I don't think it's a lost cause, but I do think that there's a lot that has to get done."

"The hope," Farid said, is to get to a point where "some teenager in his parents' basement can't create an image and swing an election or move the market half a trillion dollars."

Companies are broadly taking two approaches to address the issue. One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature. Reality Defender and Hive Moderation are working on the former.
With their platforms, users can upload existing images to be scanned and then receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data. Reality Defender, which launched before "generative AI" became a buzzword and was part of competitive Silicon Valley tech accelerato...
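The upload-and-score workflow described above can be sketched as a small service interface: an image goes in, a probability and human-readable verdict come out. The "model" below is a deterministic stub, since real services such as Reality Defender and Hive Moderation run trained classifiers server-side; the function and field names are assumptions for illustration only.

```python
import hashlib

def score_image(image_bytes: bytes) -> dict:
    """Return a probability that the uploaded image is AI-generated.

    Stub scorer: derives a deterministic pseudo-score from the bytes so
    the example runs without a trained model. A real service would run
    the image through a classifier and return its calibrated output.
    """
    digest = hashlib.sha256(image_bytes).digest()
    probability = digest[0] / 255.0  # placeholder for classifier output
    return {
        "ai_generated_probability": round(probability, 2),
        "verdict": "likely AI-generated" if probability > 0.5 else "likely authentic",
    }
```

The point of the shape is that the caller gets a graded likelihood rather than a binary answer, mirroring the percentage breakdown these platforms report.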