How Easy Is It to Fool A.I.-Detection Tools? - The New York Times
The pope did not wear Balenciaga. And filmmakers did not fake the moon landing. In recent months, however, startlingly lifelike images of these scenes created by artificial intelligence have spread virally online, threatening society’s ability to separate fact from fiction.
To sort through the confusion, a fast-burgeoning crop of companies now offer services to detect what is real and what isn’t.
Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.
To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.
Consider this example:
[Image: Generated by A.I.]
This image appears to show the billionaire entrepreneur Elon Musk embracing a lifelike robot. The image was created using Midjourney, the A.I. image generator, by Guerrero Art, an artist who works with A.I. technology.
Despite the implausibility of the image, it managed to fool several A.I.-image detectors.
[Chart: Test results from the image of Mr. Musk]
The detectors, including versions that charge for access, such as Sensity, and free ones, such as Umm-maybe’s A.I. Art Detector, are designed to detect difficult-to-spot markers embedded in A.I.-generated images. They look for unusual patterns in how the pixels are arranged, including in their sharpness and contrast. Those signals tend to be generated when A.I. programs create images.
But the detectors ignore all context clues, so they do not register the presence of a lifelike automaton posing with Mr. Musk as improbable. That is one shortcoming of relying on the technology to detect fakes.
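To make the idea of pixel-level signals concrete, here is a minimal sketch, in Python, of the kind of low-level statistics a detector might start from: local sharpness (variance of the Laplacian) and global contrast (standard deviation of intensity). The feature choices and the image_stats function are illustrative assumptions; the article does not describe any vendor's actual feature set, and real detectors rely on learned models trained on large labeled collections of real and synthetic images rather than two hand-picked numbers.

```python
# Illustrative sketch only: two crude pixel-level statistics of the kind
# detectors examine. These features are hypothetical, not any vendor's method.
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def image_stats(path: str) -> dict:
    """Return simple low-level signals for one image: sharpness and contrast."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return {
        # Variance of the Laplacian: a common proxy for local sharpness.
        "laplacian_sharpness": float(np.var(laplace(gray))),
        # Standard deviation of pixel intensity: a crude measure of global contrast.
        "global_contrast": float(gray.std()),
    }

if __name__ == "__main__":
    # "photo.jpg" is a placeholder path.
    print(image_stats("photo.jpg"))
```

In practice a detector would feed many such features, or the raw pixels themselves, into a trained classifier; heavy cropping or compression alters exactly these statistics, which is one reason quality loss can throw a detector off.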
Several companies, including Sensity, Hive and Inholo, the company behind Illuminarty, did not dispute the results and said their systems were always improving to keep up with the latest advancements in A.I.-image generation. Hive added that its misclassifications may result from analyzing lower-quality images. Umm-maybe and Optic, the company behind A.I. or Not, did not respond to requests for comment.
To conduct the tests, The Times gathered A.I. images from artists and researchers familiar with variations of generative tools such as Midjourney, Stable Diffusion and DALL-E, which can create realistic portraits of people and animals and lifelike portrayals of nature, real estate, food and more. The real images used came from The Times’s photo archive.
Here are seven examples:
Note: Images cropped from their original size.
Detection technology has been heralded as one way to mitigate the harm from A.I. images.
A.I. experts like Chenhao Tan, an assistant professor of computer science at the University of Chicago and the director of its Chicago Human+AI research lab, are less convinced.
“In general I don’t think they’re great, and I’m not optimistic that they will be,” he said. “In the short term, it is possible that they will be able to perform with some accuracy, but in the long run, anything special a human does with images, A.I. will be able to re-create as well, and it will be very difficult to distinguish the difference.”
Most of the concern has centered on lifelike portraits of people. Gov. Ron DeSantis of Florida, who is also a Republican candidate for president, was criticized after his campaign used A.I.-generated images in a post. Synthetically generated artwork focusing on scenery has also caused confusion in political races.
Many of the companies behind A.I. detectors acknowledged that their tools were imperfect and warned of a technological arms race: The detectors must often play catch-up to A.I. systems that seem to be improving by the minute.
“Every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator,” said Cynthia Rudin, a computer science and engineering professor at Duke University, where she is also the principal investigator at the Interpretable Machine Learning Lab. “The generators are designed to be abl...
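The feedback loop Professor Rudin describes is, at its core, the training dynamic of a generative adversarial network. The toy sketch below, in Python with PyTorch, alternates between improving a discriminator against the current generator and improving the generator against the improved discriminator; the one-dimensional data, tiny networks, and hyperparameters are arbitrary illustrative choices, not a description of how any generator or detector named in this article is actually built.

```python
# Toy sketch of the generator/discriminator arms race, in the style of a
# generative adversarial network. All choices here are arbitrary toy values.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Stand-in for "real images": samples from a fixed 1-D Gaussian.
    return torch.randn(n, 1) * 0.5 + 2.0

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Build a better discriminator against the current generator.
    real = real_batch(64)
    fake = gen(torch.randn(64, 8)).detach()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Use the better discriminator to build a better generator:
    #    the generator is rewarded when its fakes are scored as "real".
    fake = gen(torch.randn(64, 8))
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

After enough rounds of this alternation the discriminator's edge shrinks, which is the dynamic the researchers quoted here point to when they warn that detectors must keep playing catch-up.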