
Military AI's Next Frontier: Your Work Computer - WIRED
It’s probably hard to imagine that you are the target of spycraft, but spying on employees is the next frontier of military AI. Surveillance techniques familiar to authoritarian dictatorships have now been repurposed to target American workers.

Over the past decade, a few dozen companies have emerged to sell your employer subscriptions for services like “open source intelligence,” “reputation management,” and “insider threat assessment”—tools often originally developed by defense contractors for intelligence uses. As deep learning and new data sources have become available over the past few years, these tools have become dramatically more sophisticated. With them, your boss may be able to use advanced data analytics to identify labor organizing, internal leakers, and the company’s critics.

It’s no secret that unionization is already monitored by big companies like Amazon. But the expansion and normalization of tools to track workers has attracted little comment, despite their ominous origins. If they are as powerful as they claim to be—or even heading in that direction—we need a public conversation about the wisdom of transferring these informational munitions into private hands.

Military-grade AI was intended to target our national enemies, nominally under the control of elected democratic governments, with safeguards in place to prevent its use against citizens. We should all be concerned by the idea that the same systems can now be widely deployed by anyone able to pay. FiveCast, for example, began as an anti-terrorism startup selling to the military, but it has turned its tools over to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including your social media posts.
Rather than just counting keywords, FiveCast brags that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, like strikes, for corporations. Network analysis tools developed to identify terrorist cells can thus be used to identify key labor organizers so employers can illegally fire them before a union is formed. The standard use of these tools during recruitment may prompt employers to avoid hiring such organizers in the first place. And quantitative risk assessment strategies conceived to warn the nation against impending attacks can now inform investment decisions, like whether to divest from areas and suppliers that are estimated to have a high capacity for labor organizing.

It isn’t clear that these tools can live up to their hype. For example, network analysis methods assign risk by association, which means that you could be flagged simply for following a particular page or account. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, like deep learning, to identify content that appears angry, which is assumed to signal complaints that could result in unionization, though emotion detection has been shown to be biased and based on faulty assumptions.

But these systems’ capabilities are growing rapidly. Companies are advertising that they will soon include next-generation AI technologies in their surveillance tools. New features promise to make exploring varied data sources easier through prompting, but the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system. What’s more, these subscription services work even if they don’t work.
It may not matter if an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate against them. Vague aggregate judgments of a workforce’s “emotions” or a company’s public image are presently impossible to verify as accurate. And the mere presence of these systems likely has a chilling effect on legally protected behaviors, including labor organizing.
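The “risk by association” flaw described above can be made concrete with a toy sketch. Everything here is an illustrative assumption—the account names, the follower graph, and the scoring rule are hypothetical and do not reflect any vendor’s actual algorithm—but it shows how such a system can flag someone purely for following a particular account:

```python
# Hypothetical "risk by association" scoring: each user's risk is just
# the fraction of accounts they follow that the tool has flagged.
# All names and the rule itself are illustrative, not a real product's method.

# follower -> set of accounts they follow (toy data)
follows = {
    "alice": {"union_news", "cooking_blog"},
    "bob": {"cooking_blog"},
    "carol": {"union_news"},
}

# Accounts the tool has marked as "high risk"
flagged = {"union_news"}

def association_risk(user: str) -> float:
    """Fraction of a user's follows that are flagged accounts."""
    followed = follows.get(user, set())
    if not followed:
        return 0.0
    return len(followed & flagged) / len(followed)

scores = {user: association_risk(user) for user in follows}
```

Under this rule, carol scores 1.0 and alice 0.5 merely for following one page, with no reference to anything either of them has actually said or done—exactly the guilt-by-association problem the article identifies.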