The Race to Prevent 'the Worst Case Scenario for Machine Learning' - The New York Times
A.I. companies have an edge in blocking the creation and distribution of child sexual abuse material. They’ve seen how social media companies failed.

Dr. Rebecca Portnoff, the data science director at Thorn, was an author of a new report that found a small but meaningful uptick in the amount of photorealistic A.I.-generated child sexual abuse material. Credit: Kristian Thacker for The New York Times

June 24, 2023, Updated 2:00 p.m. ET

Dave Willner has had a front-row seat to the evolution of the worst things on the internet.

He started working at Facebook in 2008, back when social media companies were making up their rules as they went along. As the company’s head of content policy, it was Mr. Willner who wrote Facebook’s first official community standards more than a decade ago, turning what he has said was an informal one-page list that mostly boiled down to a ban on “Hitler and naked people” into what is now a voluminous catalog of slurs, crimes and other grotesqueries that are banned across all of Meta’s platforms.

So last year, when the San Francisco artificial intelligence lab OpenAI was preparing to launch Dall-E, a tool that allows anyone to instantly create an image by describing it in a few words, the company tapped Mr. Willner to be its head of trust and safety. Initially, that meant sifting through all of the images and prompts that Dall-E’s filters flagged as potential violations — and figuring out ways to prevent would-be violators from succeeding.

It didn’t take long in the job before Mr. Willner found himself considering a familiar threat. Just as child predators had for years used Facebook and other major tech platforms to disseminate pictures of child sexual abuse, they were now attempting to use Dall-E to create entirely new ones.

“I am not surprised that it was a thing that people would attempt to do,” Mr. Willner said. “But to be very clear, neither were the folks at OpenAI.”

For all of the recent talk of the hypothetical existential risks of generative A.I., experts say it is this immediate threat — child predators already using new A.I. tools — that deserves the industry’s undivided attention.

In a newly published paper by the Stanford Internet Observatory and Thorn, a nonprofit that fights the spread of child sexual abuse online, researchers found that, since last August, there has been a small but meaningful uptick in the amount of photorealistic A.I.-generated child sexual abuse material circulating on the dark web.

According to Thorn’s researchers, this has manifested for the most part in imagery that uses the likeness of real victims but visualizes them in new poses, being subjected to new and increasingly egregious forms of sexual violence. The majority of these images, the researchers found, have been generated not by Dall-E but by open-source tools that were developed and released with few protections in place.

In their paper, the researchers reported that less than 1 percent of child sexual abuse material found in a sample of known predatory communities appeared to be photorealistic A.I.-generated images. But given the breakneck pace of development of these generative A.I. tools, the researchers predict that number will only grow.

“Within a year, we’re going to be reaching very much a problem state in this area,” said David Thiel, the chief technologist of the Stanford Internet Observatory, who co-wrote the paper with Thorn’s director of data science, Dr. Rebecca Portnoff, and Thorn’s head of research, Melissa Stroebel. “This is absolutely the worst case scenario for machine learning that I can think of.”

Dr. Portnoff has been working on machine learning and child safety for more than a decade. To her, the idea that a company like OpenAI is already thinking about this issue speaks to the fact that this field is at least on a faster learning curve than the social media giants were in their earliest days.

“The posture is different today,” said Dr. Portnoff. Still, she said, “If I could rewind the clock, it would be a year ago.”

‘We trust people’

In 2003, Congress passed a law banning “computer-generated child pornography” — a rare instance of congressional future-proofing. But at the time, creating such images was both prohibitively expensive and technically complex. The cost and complexity of...