More People Are Going Blind. AI Can Help Fight It - WIRED
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
More People Are Going Blind. AI Can Help Fight It - WIRED
Since 2017, ophthalmology has been the busiest of all the medical specialties in the UK’s National Health Service in terms of clinical appointments. Nearly 10 percent of all NHS outpatient appointments are related to eye problems. That’s nearly 10 million appointments per year, and that number has risen by more than a third in the past five years. Between the ages of 18 and 65, the main cause of blindness is diabetic eye disease. But the population is getting older, and we’re also seeing an increasing prevalence of diseases like age-related macular degeneration (AMD). That's the most common cause of blindness. A recent study in the British Journal of Ophthalmology estimated that 25.3 percent of people in Europe who are above the age of 60 have early signs of AMD. In the UK, about 200 people a day are also developing a severe form of AMD, called wet AMD, which causes blindness as a result of bleeding at the back of the eye. Ophthalmologists are struggling to see and treat all these patients. Unfortunately, that means that many are going blind because of delays in diagnosis and treatment. All the evidence suggests that early detection and treatment equals safe sight. Technology can mitigate these challenges. New eye scanners called optical coherence tomography (OCT) devices are being deployed in every optometry practice, such as your local Specsavers or Vision Express. These advanced scanners can take really high-resolution images of the retina in a noninvasive way. This is promising but also presents a challenge. Community optometrists don’t always have expertise to analyze OCT scans, so they are currently over-referring patients to eye hospitals, which further contributes to the burden. AI can bring world-leading expertise from places like Moorfields Eye Hospital into the community. In 2018, in collaboration with DeepMind, we published a proof-of-concept paper in Nature showing that an AI system could analyze OCT scans and assess them for more than 50 retinal diseases, with a level of performance on par with expert ophthalmologists. Since then, we’ve been trying to clinically validate the system by training the algorithm on a diverse set of data that will ensure that it works for any patient, regardless of ethnicity and clinical setting. Once we achieve that, the AI system can be deployed at scale in the community. The algorithm will be able to identify and prioritize people with the worst prognosis in local practices so we can treat them first in hospitals. This will reduce the burden of chronic diseases like AMD. The innovation of medical AI is analogous to Thomas Edison’s invention of the electric light bulb. Edison figured that to bring on the dawning of the electrical age he needed more than just a light bulb: He needed a network of innovations like a powered electricity generator, a grid distribution system to get electricity to people’s homes, and a meter reader to measure how much electricity was being used. We’re getting to that point with ophthalmology AI. We have optometry practices with OCT machines, which we are starting to link to the cloud. We’re kickstarting national transformation programs in the NHS for eye diseases, which will put in place pathways and payment systems that facilitate the transfer of patients from the community to the hospital. Once all of these innovations start coming together, this network of innovations will allow AI to finally be deployed. This article appears in the July/August 2023 edition of WIRED UK magazine.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
33
views
How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other...
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS - The Hacker...
Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception.
Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost-savings, and 25% attested to reducing expenses by $75,000 or more. As the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher. Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigurations and over permissioned users. This leaves little bandwidth to assess the AI tool threat landscape, unsanctioned AI tools currently in use, and the implications for SaaS security. With threats emerging outside and inside organizations, CISOs and their teams must understand the most relevant AI tool risks to SaaS systems — and how to mitigate them.
1 — Threat Actors Can Exploit Generative AI to Dupe SaaS Authentication Protocols As ambitious employees devise ways for AI tools to help them accomplish more with less, so, too, do cybercriminals. Using generative AI with malicious intent is simply inevitable, and it's already possible.
AI's ability to impersonate humans exceedingly well renders weak SaaS authentication protocols especially vulnerable to hacking. According to Techopedia, threat actors can misuse generative AI for password-guessing, CAPTCHA-cracking, and building more potent malware. While these methods may sound limited in their attack range, the January 2023 CircleCI security breach was attributed to a single engineer's laptop becoming infected with malware. Likewise, three noted technology academics recently posed a plausible hypothetical for generative AI running a phishing attack: "A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company's marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn't look like the messages they've been trained to detect." Malicious actors will avoid the most fortified entry point — typically the SaaS platform itself — and instead target more vulnerable side doors. They won't bother with the deadbolt and guard dog situated by the front door when they can sneak around back to the unlocked patio doors.
Relying on authentication alone to keep SaaS data secure is not a viable option. Beyond implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility and continuous monitoring for the entire SaaS perimeter, along with automated alerts for suspicious login activity. These insights are necessary not only for cybercriminals' generative AI activities but also for employees' AI tool connections to SaaS platforms.
2 — Employees Connect Unsanctioned AI Tools to SaaS Platforms Without Considering the Risks Employees are now relying on unsanctioned AI tools to make their jobs easier. After all, who wants to work harder when AI tools increase effectiveness and efficiency? Like any form of shadow IT, employee adoption of AI tools is driven by the best intentions. For example, an employee is convinced they could manage their time and to-do's better, but the effort to monitor and analyze their task management and meetings involvement feels like a large undertaking. AI can perform that monitoring and analysis with ease and provide recommendations almost instantly, giving the employee the productivity boost they crave in a fraction of the time. Signing up for an AI scheduling assistant, from the end-user's perspective, is as simple and (seemingly) innocuous as: Registering for a free trial or enrolling with a credit card Agreeing to the AI tool's Read/Write permission requests
Connect...
95
views
How people are really using AI (and what they're afraid of) - The Verge
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
How people are really using AI (and what they're afraid of) - The Verge
We polled 2,000 people about how they’re using AI, what they want it to do, and what scares them about it the most. Illustrations by Diana Young for The Verge Jun 26, 2023, 2:00PM AI is about to change the world — the problem is, no one's quite sure how. Some look at the past year’s rapid progress and see opportunities to remove creative constraints, automate rote work, and discover new ways to learn and teach. Others see how this tech can disrupt our lives in more damaging ways: how it can generate misinformation, destroy or diminish jobs, and, if left unchecked, pose a serious threat to our safety. Tech leaders, lawmakers, and researchers have all been weighing in on how we should handle this emerging tech. Some industry figures, like OpenAI CEO Sam Altman, want AI giants to steer regulation, shifting the focus to perceived future threats, including the “risk of extinction.” Others, like EU politicians, are more concerned with current dangers and banning dangerous use cases (while holding back positive applications, say skeptics). Meanwhile, many small artists would just like a guarantee that they won’t be replaced by machines. To find out what people really think about AI and what they want from it, The Verge teamed up with Vox Media’s Insights and Research team and the research consultancy firm The Circus to poll more than 2,000 US adults on their thoughts, feelings, and fears about AI. The results tell the story of an emerging, uncertain, and exciting technology — where many have yet to use it, many are fearful of its potential, and many still have great hopes for what it could someday do for them. Who’s using AI? Who’s using AI? AI is suddenly everywhere. Image generators and large language models are at the core of new startups, powering features inside our favorite apps, and — perhaps more importantly — driving conversation not just in the tech world but also society at large. Concerns abound about cheating in schools with ChatGPT, being fooled by AI-generated pictures, and artists being ripped off or even outright replaced. But despite widespread news coverage, use of these new tools is still fairly limited, at least when it comes to dedicated AI products. And experience with these tools skews decidedly toward younger users. Most people have heard of ChatGPT. Bing and Bard? Not quite. Heard of it vs. What is it? Tool I've used it I've heard of it I've never heard of it ChatGPT 20% 37% 43% Bing w/ ChatGPT 12% 34% 54% My AI (Snapchat) 12% 33% 55% Bard (by Google) 10% 28% 62% Midjourney 7% 18% 75% Stable Diffusion 6% 17% 77% Only 1 in 3 people have tried one of these AI-powered tools, and most aren’t familiar with the companies and startups that make them. Despite the many insurgents in the world of AI, like Stability AI and Midjourney, it’s still the work of Big Tech that substantially steers the conversation.OpenAI is the major exception — but arguably, thanks to its market cap and deals with Microsoft, it is itself now a member of the corpo-club. AI use is dominated by Millennials and Gen Z Generational knowledge of AI tools Boomers: 4.8 million Gen X: 15.8 million Millenial: 36 million Gen Z: 34.9 million One complicating factor, though, is that the definition of an AI tool is extremely fuzzy. We asked respondents about dedicated AI services like ChatGPT or Midjourney. But many companies are adding AI features to established software, whether that’s image generation in Photoshop or text suggestion in Gmail and Google Docs. And as the joke goes, AI is whatever computers haven’t done yet,meaning yesterday’s AI is, simply, today’s expected features. Despite the limited usage of these tools so far, people have high expectations for AI’s impact on the world—beyond those of other emergent (and sometimes controversial) technologies. Nearly three-quarters of people said AI will have a large or moderate impact on society. That’s compared to 69 percent for electric vehicles and a paltry 34 percent for NFTs. They’re so 2021. Will these technologies have a big impact on society? Impact on society AI Electric vehicles Virtual Reality AR NFTs Large/Moderate Impact 74% 69% 60% 52% 34% How is AI being used? How is AI being used? The main fuel for the recent boom is generative AI: systems that can generate text, help brainstorm ideas, edit...
26
views
A.I.'s Use in Elections Sets Off a Scramble for Guardrails - The New York Times
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
A.I.'s Use in Elections Sets Off a Scramble for Guardrails - The New York Times
Gaps in campaign rules allow politicians to spread images and messaging generated by increasingly powerful artificial intelligence technology. Fake images of downtown Toronto, generated by artificial intelligence, illustrated a mayoral candidate’s campaign material. June 25, 2023, 5:00 a.m. ET In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park. In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop. In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality. What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world. Increasingly, political consultants, election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide. As the 2024 U.S. presidential race starts to heat up, some of the campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans. Some politicians see artificial intelligence as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts. At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by computer or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say. The technology is already far more powerful than manual manipulation — not perfect, but fast improving and easy to learn. In May, the chief executive of OpenAI, Sam Altman, whose company helped kick off an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season. He said the technology’sability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation” was“a significant area of concern.” Representative Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle “is poised to be the first election where A.I.-generated content is prevalent.” She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that used artificially generated material to carry a disclaimer. A similar bill in Washington State was recently signed into law. The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code. “People are going to be tempted to push the envelope and see where they can take things,” said Larry Huynh, the group’s incoming president. “As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.” The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosy...
142
views
The Monk Who Thinks the World Is Ending - The Atlantic
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
The Monk Who Thinks the World Is Ending - The Atlantic
Can Buddhism fix AI? By Annie Lowrey Photographs by Venice Gordon Venice Gordon for The Atlantic June 25, 2023, 7:31 AM ET The monk paces the Zendo, forecasting the end of the world. Soryu Forall, ordained in the Zen Buddhist tradition, is speaking to the two dozen residents of the monastery he founded a decade ago in Vermont’s far north. Bald, slight, and incandescent with intensity, he provides a sweep of human history. Seventy thousand years ago, a cognitive revolution allowed Homo sapiens to communicate in story—to construct narratives, to make art, to conceive of god. Twenty-five hundred years ago, the Buddha lived, and some humans began to touch enlightenment, he says—to move beyond narrative, to break free from ignorance. Three hundred years ago, the scientific and industrial revolutions ushered in the beginning of the “utter decimation of life on this planet.” Humanity has “exponentially destroyed life on the same curve as we have exponentially increased intelligence,” he tells his congregants. Now the “crazy suicide wizards” of Silicon Valley have ushered in another revolution. They have created artificial intelligence. Human intelligence is sliding toward obsolescence. Artificial superintelligence is growing dominant, eating numbers and data, processing the world with algorithms. There is “no reason” to think AI will preserve humanity, “as if we’re really special,” Forall tells the residents, clad in dark, loose clothing, seated on zafu cushions on the wood floor. “There’s no reason to think we wouldn’t be treated like cattle in factory farms.” Humans are already destroying life on this planet. AI might soon destroy us. From the July/August 2023 issue: The coming humanist renaissance For a monk seeking to move us beyond narrative, Forall tells a terrifying story. His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth.” The residents there meditate on their breath and on metta, or loving-kindness, an emanation of joy to all creatures. They meditate in order to achieve inner clarity. And they meditate on AI and existential risk in general—life’s violent, early, and unnecessary end. Does it matter what a monk in a remote Vermont monastery thinks about AI? A number of important researchers think it does. Forall provides spiritual advice to AI thinkers, and hosts talks and “awakening” retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years. Forall recently visited Tom Gruber, one of the inventors of Siri, at his home in Maui for a week of dharma dinners and snorkeling among the octopuses and neon fish. Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path. His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code. Forall knows this sounds ridiculous. Some people have laughed in his face when they hear about it, he says. But others are listening closely. “His training is different from mine,” Gruber told me. “But we have that intellectual connection, where we see the same deep system problems.” Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us. Life on Earth “depends on that,” he told me, arguing that we should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.” A sign inside the Zendo (Venice Gordon for The Atlantic) His vision is dire and grand, but perhaps that is why it has found such a receptive audience among the folks building AI, many of whom conceive of their work in similarly epochal terms. No one can know for sure what this technology will become; when we imagine the future, we have no choice but to rely on myths and forecasts and science fiction—on stories. Does Forall’s story have the weight of prophecy, or is it just one that AI alarmists are telling themselves? In the Zendo, Forall finishes his talk and answers...
83
views
5 AI tools for translation - Cointelegraph
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
5 AI tools for translation - Cointelegraph
Translation is the process of converting written or spoken content from one language to another while preserving its meaning.By automating and enhancing the translation process, artificial intelligence (AI) has significantly contributed to changing the translation industry. To evaluate and comprehend the structure, syntax and context of the source language and produce correct translations in the target language, AI-powered translation systems use machine learning algorithms and natural language processing techniques. Types of AI-powered translation systems AI-powered translation systems can be categorized into two main approaches: Rule-based machine translation (RBMT) To translate text, RBMT systems use dictionaries and pre-established linguistic rules. Linguists and other experts create these guidelines and dictionaries that specify how to translate words, phrases and grammatical structures. While RBMT systems are capable of producing accurate translations for some language pairs, they frequently face limitations due to the complexity and diversity of linguistic systems, which makes them less useful for translations that are more complex. Statistical machine translation (SMT) SMT systems employ statistical models that have been developed using sizable bilingual corpora. These algorithms analyze the words and phrases in the source and target languages to find patterns and correlations. SMT systems are able to make educated assumptions about the ideal translation for a particular input by examining enormous volumes of data. With more training data, SMT systems get more accurate, although they may have trouble with unusual or rare phrases. Neural machine translation (NMT) has recently become more well-known in the translation industry. To produce translations, NMT systems use deep learning methods, notably neural networks. Compared to earlier methods, these models are better able to represent the context, semantics and complexities of languages. NMT systems have proven to perform better than other technologies, and they are widely employed in many well-known translation services and applications. Advantages ofAI in translation The use of AI in translation offers several advantages: Speed and efficiency: AI-powered translation systems can process large volumes of text quickly, accelerating the translation process and improving productivity. Consistency: AI ensures consistent translations by adhering to predefined rules and learned patterns, reducing errors and discrepancies. Customization and adaptability: AI models can be fine-tuned and customized for specific domains, terminologies or writing styles, resulting in more accurate and contextually appropriate translations. Continuous improvement: AI systems can learn from user feedback and update their translation models over time, gradually improving translation quality. AI tools for translation There are several AI tools available for translation that leverage machine learning and natural language processing techniques. Here are five popular AI tools for translation: Google Translate Google Translateis a widely used AI-powered translation tool. To offer translations for different language pairs, it combines rule-based and neural machine translation models. It offers functionalities for text translation, website translation and even speech-to-text and text-to-speech. Google Translate offers both free and paid versions. The basic translation services, including text translation, website translation and basic speech-to-text features, are accessible to users for free. However, Google also offers a paid service called Google Translate API for developers and businesses with more extensive translation needs. API usage is subject to pricing based on the number of characters translated. Microsoft Translator Another capable AI translation tool is Microsoft Translator. It offers translation services for many different languages and makes use of neural machine translation models. It offers developers APIs and SDKs so they may incorporate translation functionality into their projects. Microsoft Translator offers a tiered pricing model. It has a free tier that allows users to access basic translation services with certain limitations. Microsoft also provides paid plans for higher volume and advanced features. The pricing is typi...
46
views
5 Super Semiconductor Stocks to Buy Hand Over Fist for the AI Revolution - The Motley Fool
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
5 Super Semiconductor Stocks to Buy Hand Over Fist for the AI Revolution - The Motley Fool
Many people would associate artificial intelligence (AI) with chatbots like ChatGPT or even Bard, which was released more recently by Alphabet's Google. It makes sense because the software is consumer-facing. But those platforms wouldn't exist without the advanced semiconductor hardware used to train each AI model.
In fact, investors might find that some of the greatest AI opportunities over the next few years are actually in the hardware space where companies are racing to build infrastructure to support the advanced technology.
Let's look at five stocks involved in AI's hardware space, starting with a name most investors are likely already familiar with. Image source: Getty Images. 1. Nvidia
The increased hype around AI helped Nvidia (NVDA -1.90%) become synonymous with AI this year, but its history with the technology goes back much further. In fact, Nvidia delivered the first AI supercomputer to OpenAI back in 2016, and Nvidia's graphics processing unit (GPU) chips have been used to train the ChatGPT platform ever since. Nvidia CEO Jensen Huang estimates there is $1 trillion worth of data center infrastructure in use today that needs to be upgraded to support accelerated computing and AI, and his company has an estimated 90% market share in that segment.
As a result, the world's largest cloud providers are rushing to get their hands on Nvidia's hardware, but some have also partnered with the chipmaker to deliver its DGX supercomputer to their customers. This will enable millions of regular businesses to access the computing power necessary to train their AI models without investing billions of dollars to build the infrastructure.
Nvidia stock has soared 190% this year and officially surpassed a $1 trillion valuation. Its price-to-earnings (P/E) ratio of 139 is over four times more expensive than the 31 P/E of the Nasdaq-100 index. So investors interested in this stock should tread with caution in the short term. But in the long run, I think Nvidia has the potential to eventually become the largest company in the world.
2. Axcelis Technologies Axcelis Technologies (ACLS -0.92%) is a far smaller company than Nvidia, with a valuation of just $5.4 billion, but its stock has more than doubled this year on the back of an incredibly strong operating performance. Axcelis doesn't produce chips; it's a semiconductor-service company that makes ion implantation equipment critical to the fabrication process.
The world's leading chipmakers will likely experience heightened demand for hardware as AI adoption continues to grow, and they'll need the equipment produced by companies like Axcelis to expand their production capacity. In fact, over the last 12 months, Axcelis received more orders than it can possibly fill, leading to a record-high backlog worth $1.27 billion.
The company generated $922 million in revenue in 2022, representing year-over-year growth of 38%, while many other chip companies were succumbing to the broader economic slowdown. This year, Axcelis expects to deliver $1.03 billion in revenue (a recently lifted forecast) as it continues to work through its order backlog.
Its stock is still very attractive because it trades at a P/E ratio of just 29.2, a slight discount to the Nasdaq-100. As a result, this could be a great time for investors to buy despite its strong gains this year already.
3. Micron Technology Micron Technology (MU -1.46%) has flown under the radar this year because investors have been focused on semiconductor producers, like Nvidia, making powerful graphics processors for AI workloads. Micron, on the other hand, is a world-leading manufacturer of memory (DRAM) and storage (NAND) chips used in everything from smartphones to data centers to electric vehicles. But Micron says AI servers can require up to eight times the DRAM content of a regular server and up to three times as much storage, so the company is likely to also benefit from the broader deployment of AI in the long run. In other words, don't sleep on the future potential of this stock.
In the shorter term, Micron is grappling with challenges in its consumer segments, which have slowed significantly because people are simply buying fewer personal computers and devices amid the recent economic slowdown. As a result, Micron found itself with...
25
views
I'm Obsessed With Kylie Minogue Doing "Padam Padam" In A TikTok AI Voice - BuzzFeed
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
I'm Obsessed With Kylie Minogue Doing "Padam Padam" In A TikTok AI Voice - BuzzFeed
Kylie recently appeared on Andy Cohen's SiriusXM show and did a little impromptu rendition of "Padam Padam" for the host — because, hey, it is the song that's on everyone's minds right now. View this video on YouTube Sirius XM / Via youtube.com Then, Kylie took a left turn and offered another spin on "Padam Padam" — specifically, via an impression of the "computer generated" AI voice you hear so often all over TikTok. It's...incredible. Like, if you closed your eyes and didn't know you were listening to Kylie, you'd assume that it was coming straight from TikTok itself. @twiceblunt / Twitter / Via Twitter: @twiceblunt Just another reason why Kylie is iconic as well as an absolute legend. OK, that's all — oh, and, of course, stream "Padam Padam."
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
26
views
The Race to Prevent 'the Worst Case Scenario for Machine Learning' - The New York Times
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
The Race to Prevent 'the Worst Case Scenario for Machine Learning' - The New York Times
A.I. companies have an edge in blocking the creation and distribution of child sexual abuse material. They’ve seen how social media companies failed. Dr. Rebecca Portnoff, the data science director at Thorn, was an author of a new report that found a small but meaningful uptick in the amount of photorealistic AI-generated child sexual abuse material. Credit... Kristian Thacker for The New York Times June 24, 2023 Updated 2:00 p.m. ET Dave Willner has had a front-row seat to the evolution of the worst things on the internet. He started working at Facebook in 2008, back when social media companies were making up their rules as they went along. As the company’s head of content policy, it was Mr. Willner who wrote Facebook’s first official community standards more than a decade ago, turning what he has said was an informal one-page list that mostly boiled down to a ban on “Hitler and naked people” into what is now a voluminous catalog of slurs, crimes and other grotesqueries that are banned across all of Meta’s platforms. So last year, when the San Francisco artificial intelligence lab OpenAI was preparing to launch Dall-E, a tool that allows anyone to instantly create an image by describing it in a few words, the company tapped Mr. Willner to be its head of trust and safety. Initially, that meant sifting through all of the images and prompts that Dall-E’s filters flagged as potential violations — and figuring out ways to prevent would-be violators from succeeding. It didn’t take long in the job before Mr. Willner found himself considering a familiar threat. Just as child predators had for years used Facebook and other major tech platforms to disseminate pictures of child sexual abuse, they were now attempting to use Dall-E to create entirely new ones. “I am not surprised that it was a thing that people would attempt to do,” Mr. Willner said. “But to be very clear, neither were the folks at OpenAI.” For all of the recent talk of the hypothetical existential risks of generative A.I., experts say it is this immediate threat — child predators using new A.I. tools already — that deserves the industry’s undivided attention. In a newly published paper by the Stanford Internet Observatory and Thorn, a nonprofit that fights the spread of child sexual abuse online, researchers found that, since last August, there has been a small but meaningful uptick in the amount of photorealistic A.I.-generated child sexual abuse material circulating on the dark web. According to Thorn’s researchers, this has manifested for the most part in imagery that uses the likeness of real victims but visualizes them in new poses, being subjected to new and increasingly egregious forms of sexual violence. The majority of these images, the researchers found, have been generated not by Dall-E but by open-source tools that were developed and released with few protections in place. In their paper, the researchers reported that less than 1 percent of child sexual abuse material found in a sample of known predatory communities appeared to be photorealistic A.I.-generated images. But given the breakneck pace of development of these generative A.I. tools, the researchers predict that number will only grow. “Within a year, we’re going to be reaching very much a problem state in this area,” said David Thiel, the chief technologist of the Stanford Internet Observatory, who co-wrote the paper with Thorn’s director of data science, Dr. Rebecca Portnoff, and Thorn’s head of research, Melissa Stroebel. “This is absolutely the worst case scenario for machine learning that I can think of.” Dr. Portnoff has been working on machine learning and child safety for more than a decade. To her, the idea that a company like OpenAI is already thinking about this issue speaks to the fact that this field is at least on a faster learning curve than the social media giants were in their earliest days. “The posture is different today,” said Dr. Portnoff. Still, she said, “If I could rewind the clock, it would be a year ago.” ‘We trust people’ In 2003, Congress passed a law banning “computer-generated child pornography” — a rare instance of congressional future-proofing. But at the time, creating such images was both prohibitively expensive and technically complex. The cost and complexity of...
37
views
AI Boomerang: Google's Internal Critic Returns From Rival OpenAI - The Information
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
AI Boomerang: Google's Internal Critic Returns From Rival OpenAI - The Information
By and
June 23, 2023 9:15 AM PDT Photo: Google CEO Sundar Pichai in May. Photo by Getty Images Jacob Devlin, a prominent artificial intelligence researcher who left Google for rival OpenAI in January after complaining internally about how the company trained its Bard AI chatbot software, has returned to his old job, according to a person with knowledge of the situation.
Google’s willingness to hire Devlin back, despite the fact his internal complaints became public and embarrassed the tech company, reflects the intense competition for talent in the field as a wide array of tech companies and startups race to develop services that automate tasks involving text, software code and video production.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
2
views
Reading in the Time of Books Bans and A.I. - The New York Times
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Reading in the Time of Books Bans and A.I. - The New York Times
Everyone loves reading. In principle, anyway. Nobody is against it, right? Surely, in the midst of our many quarrels, we can agree that people should learn to read, should learn to enjoy it and should do a lot of it. But bubbling underneath this bland, upbeat consensus is a simmer of individual anxiety and collective panic. We are in the throes of a reading crisis. Consider the evidence. Across the country, Republican politicians and conservative activists are removing books from classroom and library shelves, ostensibly to protect children from “indoctrination” in supposedly left-wing ideas about race, gender, sexuality and history. These bans have raised widespread alarm among civil libertarians and provoked a lawsuit against a school board in Florida, brought by PEN America and the largest American publisher, Penguin Random House. PEN has also joined the chorus of voices condemning censorious piety on social media and college campuses, where books deemed problematic become lightning rods for scolding and suppression. While right and left are hardly equivalent in their stated motivations, they share the assumption that it’s important to protect vulnerable readers from reading the wrong things. Including, in one Utah county, the Bible, which was taken from schoolroom shelves, like so many other books, as a result of a parental complaint — one apparently intended to expose the absurdity of such bans in the first place. Image Credit... Rodrigo Corral But maybe the real problem is that children aren’t being taught to read at all. As test scores have slumped — a trend exacerbated by the disruptions of Covid — a long-smoldering conflict over teaching methods has flared anew. Parents, teachers and administrators have rebelled against widely used progressive approaches and demanded more emphasis on phonics. In May, David Banks, the chancellor of New York City’s public schools, for many years a stronghold of “whole language” instruction, announced a sharp pivot toward phonics, a major victory for the “science of reading” movement and a blow to devotees of entrenched “balanced literacy” methods. The reading crisis reverberates at the higher reaches of the educational system too. As corporate management models and zealous state legislatures refashion the academy into a gated outpost of the gig economy, the humanities have lost their luster for undergraduates. According to reports in The New Yorker and elsewhere, fewer and fewer students are majoring in English, and many of those who do (along with their teachers) have turned away from canonical works of literature toward contemporary writing and pop culture. Is anyone reading “Paradise Lost” anymore? Are you? Beyond the educational sphere lie technological perils familiar and new: engines of distraction like streaming (what we used to call TV) and TikTok; the post-literate alphabets of emojis and acronyms; the dark enchantments of generative A.I. While we binge and scroll and D.M., the robots, who are doing more and more of our writing, may also be taking over our reading. While we binge and scroll and D.M., the robots, who are doing more and more of our writing, may also be taking over our reading. There is so much to worry about. A quintessentially human activity is being outsourced to machines that don’t care about phonics or politics or beauty or truth. A precious domain of imaginative and intellectual freedom is menaced by crude authoritarian politics. Exposure to the wrong words is corrupting our children, who aren’t even learning how to decipher the right ones. Our attention spans have been chopped up and commodified, sold off piecemeal to platforms and algorithms. We’re too busy, too lazy, too preoccupied to lose ourselves in books. You could argue that these disparate concerns don’t add up to a single crisis. You could point out that not all the news is bad. Sales of printed books, after dropping in the early e-book era, have crept upward over the past decade. This newspaper has reported that some young people in Brooklyn are abandoning their smartphones for “Crime and Punishment.” And the bad news is hardly new. Tyrants, philistines, religious zealots and hysterical parents have been banning books for as long as anyone can remember. The current battle between advocates of the science of reading...
64
views
Military AI's Next Frontier: Your Work Computer - WIRED
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Military AI's Next Frontier: Your Work Computer - WIRED
It’s probably hard to imagine that you are the target of spycraft, but spying on employees is the next frontier of military AI. Surveillance techniques familiar to authoritarian dictatorships have now been repurposed to target American workers. Over the past decade, a few dozen companies have emerged to sell your employer subscriptions for services like “open source intelligence,” “reputation management,” and “insider threat assessment”—tools often originally developed by defense contractors for intelligence uses. As deep learning and new data sources have become available over the past few years, these tools have become dramatically more sophisticated. With them, your boss may be able to use advanced data analytics to identify labor organizing, internal leakers, and the company’s critics. It’s no secret that unionization is already monitored by big companies like Amazon. But the expansion and normalization of tools to track workers has attracted little comment, despite their ominous origins. If they are as powerful as they claim to be—or even heading in that direction—we need a public conversation about the wisdom of transferring these informational munitions into private hands. Military-grade AI was intended to target our national enemies, nominally under the control of elected democratic governments, with safeguards in place to prevent its use against citizens. We should all be concerned by the idea that the same systems can now be widely deployable by anyone able to pay. FiveCast, for example, began as an anti-terrorism startup selling to the military, but it has turned its tools over to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including your social media posts. Rather than just counting keywords, FiveCast brags that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, like strikes, for corporations. Network analysis tools developed to identify terrorist cells can thus be used to identify key labor organizers so employers can illegally fire them before a union is formed. The standard use of these tools during recruitment may prompt employers to avoid hiring such organizers in the first place. And quantitative risk assessment strategies conceived to warn the nation against impending attacks can now inform investment decisions, like whether to divest from areas and suppliers who are estimated to have a high capacity for labor organizing. It isn’t clear that these tools can live up to their hype. For example, network analysis methods assign risk by association, which means that you could be flagged simply for following a particular page or account. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, like deep learning, to identify content that appears angry, which is assumed to signal complaints that could result in unionization, though emotion detection has been shown to be biased and based on faulty assumptions. But these systems’ capabilities are growing rapidly. Companies are advertising that they will soon include next-generation AI technologies in their surveillance tools. New features promise to make exploring varied data sources easier through prompting, but the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system. What’s more, these subscription services work even if they don’t work. It may not matter if an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate against them. Vague aggregate judgements of a workforce’s “emotions” or a company’s public image are presently impossible to verify as accurate. And the mere presence of these systems likely has chilling effect on legally protected behaviors, including labor organizing.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
118
views
YouTube is getting AI-powered dubbing - The Verge
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
YouTube is getting AI-powered dubbing - The Verge
YouTube wants to make it easier to dub your videos in other languages by giving you some help with AI. The company announced Thursday at VidCon that it’s bringing over the team from Aloud, an AI-powered dubbing service from Google’s Area 120 incubator. Here’s how it works, according to Aloud’s website. The tool first transcribes your video, giving you a transcription that you can review and edit. Then, it translates and produces the dub. This video has the details. YouTube is already testing the tool with “hundreds” of creators, YouTube’s Amjad Hanif says in a statement to The Verge. And Hanif says that Aloud currently supports a “few” languages, with “more to come”; according to spokesperson Jessica Gibby, Aloud is currently available in English, Spanish, and Portuguese. Still, even with a limited number of languages, Aloud could be a useful tool as a growing number of creators add multi-language dubs to their videos. And if you want to hear an example of Aloud’s results for yourself, check out the Spanish dub track in this video from the Amoeba Sisters channel. (Click the gear icon, then “Audio track.”) Down the line, YouTube is “working to make translated audio tracks sound like the creator’s voice, with more expression, and lip sync,” Hanif says. Those features are planned for 2024, Gibby says.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
19
views
Dropbox launches $50M AI-focused venture fund, intros AI features - TechCrunch
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Dropbox launches $50M AI-focused venture fund, intros AI features - TechCrunch
Not content to sit on the sidelines of the generative AI race, Dropbox today launched Dropbox Ventures, a new $50 million venture fund focused on startups in the AI space.
The company’s first venture arm, Dropbox Ventures will provide mentorship in addition to financial support to build AI-powered products that “shape the future of work,” Dropbox VP and GM Sateesh Srinivasan told TechCrunch in an email interview.
“We want to advance the AI ecosystem and support the next generation of startups who are taking the lead in shaping the modern work experience through the power of AI,” he said. “Dropbox began as an early-stage startup with a simple idea that grew to a service used by hundreds of millions of people around the world, so we have a unique perspective on what it takes to help these types of companies get to the next phase of growth and make an impact.”
VCs have steadily increased their positions in AI over the past few years, spurred recently by the growth in generative AI. According to GlobalData, AI startups received over $52 billion in funding across more than 3,300 deals in the last year alone.
Corporate initiatives are a major source of that funding. For example, Salesforce Ventures, Salesforce’s VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. And OpenAI, the company behind the viral chatbot ChatGPT, has raised a $175 million fund to invest in AI startups.
“We’ve been investing in AI and machine learning for a long time and started incorporating machine learning across our products as far back as 2016 to make work more efficient for our customers’ and help them save time,” Srinivasan said. “In just the last few months, recent advancements in AI and machine learning have opened up a new world of possibilities that we think will help us accelerate … our mission to design a more enlightened way of working.”
New AI-powered features
Putting its money where its mouth is, Dropbox today announced new AI-powered additions to its flagship cloud storage product.
The first, called Dropbox Dash, is a “universal” search bar that can canvas across tools, content and apps from third-party platforms including Google Workspace, Microsoft Outlook, Salesforce and Notion. Designed to help find and organize various types of content, Dash will “learn, evolve and improve” the more customers use it, Dropbox says. Dropbox Dash. Image Credits: Dropbox “Soon, Dash will be able to pull from your information and your company’s information to answer questions and surface relevant content using generative AI,” the company wrote in a blog post. “You won’t need to sift through all your company’s internal links and pages to find out when the next company holiday is — you’ll just be able to ask Dash and get an answer, fast.”
In addition to surfacing content, Dash can create collections — Stacks — for links, offering a way to save, organize and retrieve URLs. Stacks are accessible from the new Start Page, which also hosts shortcuts to recently accessed work in Dropbox and the Dash search bar. The new Start Page in Dropbox. Image Credits: Dropbox Dropbox’s other new AI innovation is Dropbox AI, which summarizes and extracts information from files stored in a Dropbox account.
Dropbox AI — powered by an OpenAI model via OpenAI’s API — can review and generate summaries from documents as well as video previews. And it can answer questions in a chatbot-like fashion, drawing from the contents of research papers, contracts, meeting recordings and more.
At launch, Dropbox AI works with file previews. But it’ll soon expand to folders and entire Dropbox accounts.
“Dash and Dropbox AI are just the latest examples of how AI and machine learning can improve the way our customers work,” Srinivasan said. “It’s clear that customers need more personalized AI, and we see applications across our entire portfolio to truly reimagine those experiences … We believe the cloud world is missing an organizational layer across everything and we believe Dropbox is uniquely suited to be that self-organizing digital container.”
Given AI’s tendency to go off the rails, one might wonder about the accuracy of Dropbox AI’s summaries. A...
33
views
Opera launches revamped browser equipped with an AI sidekick - The Verge
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Opera launches revamped browser equipped with an AI sidekick - The Verge
Opera has launched Opera One — a new version of the browser that comes packaged with an AI-powered chatbot called Aria. Just like the Bing chatbot on Microsoft Edge, Opera’s AI assistant lives within the browser’s sidebar, where you can have it answer questions using real-time information, generate text or code, brainstorm ideas, and more. The built-in chatbot is powered by Opera’s Composer AI engine and connects to OpenAI’s GPT model. To use the tool, you need to sign up for an Opera account if you don’t have one already. Once that’s done, you can click the Aria icon on the left side of the screen to start chatting. While Opera first started testing the revamped version of the browser in May, now it’s available to everyone who downloads it. You can open up Aria’s command line by hitting Ctrl + / on Windows or Cmd + / on Mac. Screenshot by Emma Roth / The Verge After trying out the tool for myself, I noticed many similarities to Bing on Edge — but also a couple of key strengths. One of the nicest parts about Aria is that you don’t have to open up the sidebar to actually use it. Instead, you can open up a command line-like overlay where you can quickly type in a question or prompt. You can also highlight text directly on a webpage, which opens up a menu for Aria to translate what you’ve highlighted, explain it, or find related topics on the web. Even though Aria can do almost everything that the Bing chatbot can, it still doesn’t quite stack up to the Edge assistant. Aria doesn’t have the same type of menu system that lets you quickly select a conversation style when asking questions and also doesn’t present any one-click options that let you choose the tone, format, and length of the text you wish to generate. I asked Aria to further explore “anthropologist” after highlighting it in the text I was reading. Screenshot by Emma Roth / The Verge You can still tweak Aria’s responses in these ways, but you just have to request it manually. Of course, Aria is still a new tool, and Opera will likely keep updating it as time goes on. Maybe Opera will eventually incorporate image generation capabilities as well, which is something that Microsoft has recently added to its browser. In addition to Aria, Opera One also comes with a couple of extra upgrades. That includes new “tab islands” that automatically group related tabs together based on their context, along with a new design and an upgraded browser architecture. You can try out Aria and the new Opera One browser for Windows, macOS, and Linux.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
40
views
Apple Is an AI Company Now - The Atlantic
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Apple Is an AI Company Now - The Atlantic
Lots of tiny AI tweaks are quietly taking over the iPhone. Illustration by The Atlantic. Source: Getty June 20, 2023, 3:43 PM ET After more than a decade, autocorrect “fails” could be on their way out. Apple’s much-maligned spelling software is getting upgraded by artificial intelligence: Using sophisticated language models, the new autocorrect won’t just check words against a dictionary, but will be able to consider the context of the word in a sentence. In theory, it won’t suggest consolation when you mean consolidation, because it’ll know that those words aren’t interchangeable. The next generation of autocorrect was one of several small updates to the iPhone experience that Apple announced earlier this month. The Photos app will be able to differentiate between your dog and other dogs, automatically recognizing your pup the same way it recognizes people who frequently appear in your pictures. And AirPods will get smarter about adjusting to background noise based on your listening over time. All of these features are powered by AI—even if you might not know it from how Apple talks about them. Its conference unveiling the updates included zero mentions of AI , now a buzzword for tech companies of all stripes. Instead, Apple used more technical language such as machine learning or transformer language model. Apple has been quiet about the technology—so quiet that it has been accused of falling behind. Indeed, whereas ChatGPT can write halfway-decent business proposals, Siri can set your morning alarm and not much else. But Apple is pushing forward with AI in small ways, an incrementalist approach that nonetheless still might be the future of where this technology is headed. Since ChatGPT debuted last fall, tech leaders have not been very subtle about AI’s potential—for good and for evil. Sam Altman, the CEO at OpenAI, tweeted last month that AI “is the most amazing tool yet created.” The Microsoft founder Bill Gates has called AI “the most important advance in technology since the graphical user interface.” At a Google conference, Alphabet CEO Sundar Pichai said “AI” 27 times in a 15-minute speech. (He’s also been known to say that AI will be “more profound” than fire.) Apple, meanwhile, isn’t even pretending to talk a big game when it comes to AI. John Gruber, a longtime Apple follower who runs the technology blog Daring Fireball, told me that he doesn’t expect any of the machine-learning features Apple announced this year to significantly alter the iPhone-user experience. They’ll just make it nominally better. “We expect autocorrect to just work,” he told me over email. “We notice when it doesn’t.” The new autocorrect, which will be available in an iOS upgrade later this year, is sort of like a less powerful ChatGPT in your pocket. Apple says the software will be better at fine-tuning itself to how we type, as well at predicting what words and phrases we will use next. When you ask ChatGPT a question, you are accessing the same giant large language model stored on the cloud that everyone else is. But the much smaller and more personalized language model that will now power autocorrect will be living on your iPhone. Apple has not shared more details on how the feature will work, and the exact technical approach that Apple is using here is not clear, Tatsunori Hashimoto, a computer scientist at Stanford University, told me. Researchers, including Hashimoto, have been hard at work figuring out how to scale down large language models so that they fit on a mobile device. Meanwhile, AirPods will now use “Adaptive Audio” to analyze sound around you and adjust accordingly. For example, your Airpods might automatically lower the volume of your music when you start talking to the barista at a coffee shop, and then raise it when you stop. Apple says it will use machine learning to understand your volume preferences in general and optimize your listening experience. All of this is deeply Apple, Gruber said: focusing on what a feature does rather than how it does it. “The fact that it’s using AI behind the scenes is no more relevant to users than, say, which programming language they used to create it,” Gruber said. It also emphasizes user privacy, which Apple has long prioritized (or at least claimed to prioritize). Because the company is using an “on device” model, it could...
51
views
Inside the AI Factory: the humans that make tech seem human - The Verge
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Inside the AI Factory: the humans that make tech seem human - The Verge
This article is a collaboration between New York Magazine and The Verge. A few months after graduating from college in Nairobi, a 30-year-old I’ll call Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. In Joe’s case, he was labeling footage for self-driving cars — identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of — frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10. Then, in 2019, an opportunity arose: Joe could make four times as much running an annotation boot camp for a new company that was hungry for labelers. Every two weeks, 50 new recruits would file into an office building in Nairobi to begin their apprenticeships. There seemed to be limitless demand for the work. They would be asked to categorize clothing seen in mirror selfies, look through the eyes of robot vacuum cleaners to determine which rooms they were in, and draw squares around lidar scans of motorcycles. Over half of Joe’s students usually dropped out before the boot camp was finished. “Some people don’t know how to stay in one place for long,” he explained with gracious understatement. Also, he acknowledged, “it is very boring.” But it was a job in a place where jobs were scarce, and Joe turned out hundreds of graduates. After boot camp, they went home to work alone in their bedrooms and kitchens, forbidden from telling anyone what they were working on, which wasn’t really a problem because they rarely knew themselves. Labeling objects for self-driving cars was obvious, but what about categorizing whether snippets of distorted dialogue were spoken by a robot or a human? Uploading photos of yourself staring into a webcam with a blank expression, then with a grin, then wearing a motorcycle helmet? Each project was such a small component of some larger process that it was difficult to say what they were actually training AI to do. Nor did the names of the projects offer any clues: Crab Generation, Whale Segment, Woodland Gyro, and Pillbox Bratwurst. They were non sequitur code names for non sequitur work. As for the company employing them, most knew it only as Remotasks, a website offering work to anyone fluent in English. Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers. Neither Remotasks’ or Scale’s website mentions the other. Much of the public response to language models like OpenAI’s ChatGPT has focused on all the jobs they appear poised to automate. But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping. For Joe’s students, it was work stripped of all its normal trappings: a schedule, colleagues, knowledge of what they were working on or whom they were working for. In fact, they rarely called it work at all — just “tasking.” They were taskers. The anthropologist David Graeber defines “bullshit jobs” as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not. These AI jobs are their bizarro twin: work that people want to automate, and often think is already automated, yet still requires a human stand-in. The jobs have a purpose; it’s just that workers often have no idea what it is. The current AI boom — the convincingly human-sounding chatbots, the artwork that can be generated from simple prompts, and the multibillion-dollar valuations of the companies behind these technologies — began with a...
76
views
OpenAI Considers Creating an App Store for AI Software - The Information
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
OpenAI Considers Creating an App Store for AI Software - The Information
OpenAI Considers Creating an App Store for AI SoftwareRead more By and
June 20, 2023 6:00 AM PDT Photo: OpenAI CEO Sam Altman. Photo by Getty OpenAI—an early mover in releasing chatbots powered by large-language models—is contemplating another initiative to extend its influence in the world of artificial intelligence.
The company is considering launching a marketplace in which customers could sell AI models they customize for their own needs to other businesses, according to two people with knowledge of discussions at the company. Get access to exclusive coverage Read deeply reported stories from the largest newsroom in tech. Stay in the know Receive a summary of the day's top tech news—distilled into one email. Access on the go View stories on our mobile app and tune into our weekly podcast. Join live video QA’s Deep-dive into topics like startups and autonomous vehicles with our top reporters and other executives. Enjoy a clutter-free experience Read without any banner ads. Cookies on The Information We use cookies for a number of reasons, such as keeping The Information reliable and secure, personalizing content and ads, providing social media features and to analyze how our sites are used.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
7
views
How Generative AI Helps Bring Big Design Ideas to Life - CNET
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
How Generative AI Helps Bring Big Design Ideas to Life - CNET
I'm a terrible artist. Though I dabble with 3D design, I have zero drawing ability and my painting skills are even worse. That doesn't stop me from having an excess of creative ideas, though, and it's demotivating not being able to bring those ideas to life. Generative AI, when used properly, can allow people with big ideas and little skill to carry those concepts into the real world. Machine learning and AI are all the rage, with OpenAI, Google and others striving to give us large language models capable of natural-sounding responses. In the visual world, companies are bringing generative AI to art, allowing us to make images using nothing but words (Midjourney), or by creating and adapting photos with AI (Adobe). These tools have a chance to make art accessible in a way that's never been achieved before. I'm a maker, a person who loves to create physical things in the real world, but it seemed like AI wouldn't really help me with that. Sure, several of my 3D printers, like the AnkerMake M5, use AI to spot errors in the print, but that's rudimentary at best. I'd seen nothing to make me think AI could help realize my ideas. That is, until I saw a video from another maker, Andrew Sink, who used text prompts through ChatGPT to create a 3D object that could be 3D printed at home using code. "I almost missed a flight because I was so captivated the first time I tried it!" Sink told me. "Seeing ChatGPT produce what looked like a 3D model in the form of an .STL file was an exhilarating experience." An STL file is a 3D printable file that uses a set of instructions to create triangles called faces. Sink used ChatGPT to create the STL file, completely circumventing the design process and putting it in the hands of AI, and it worked. It was a simple cube, but this was the first time I thought about how AI could produce tangible products in the physical world. Sink is the first to admit that generative AI still needs some supervision by someone with technical chops: "Upon closer examination (as documented via YouTube Short), the file had multiple issues and required cleanup in the form of mesh editing, something that many users will likely not expect. This brought me back to reality, and reminded me to think about ChatGPT as a tool to be used in a workflow, and not a complete solution." However, it does open the door to something more. New companies have started springing up, using generative AI to create artwork from text-based commands — called prompts — and some of the results these companies are producing are spectacular. Two-dimensional art is already breathtaking If you're looking for something that transforms words into 2D imagery, it's hard to beat Midjourney. The company runs its service mainly through Discord and produces stunning images from text prompts. My wife and I are working on a project to convert our basement into a 1920s speakeasy, complete with a bar, pool table, dartboard, leather couches, and booths to play board games. It's ambitious; there's a lot of wall space we need to cover, so we wanted to try some generative art for our walls. The idea was to give us completely unique art in the exact style we wanted in a color scheme that matched our room. Enlarge Image We wanted to create a good 1920s feel in both images from Midjourney. Illustration by Midjourney We had to learn the craft of "prompt engineering" to write the kind of detailed text prompts required to produce the image we wanted. We tried two different prompts for the images above. Left image: "A 1920s street scene with suited men walking on the sidewalk. People have umbrellas open and it is raining. A tram is in the picture with a red color on the tram. Grainy photograph." Right image: "A 1920s Art Deco speakeasy with lots of hanging lights and red leather couches. Old photograph style." While the images themselves aren't perfect — check out the gentleman with an umbrella for a hat on the left of the image — they're good enough to be hung in our basement. The imperfections even add to the fun of having them AI-generated. Adobe also released a generative AI tool for Photoshop that can do something similar to what Midjourney does, and perhaps go even further, expanding your images or editing them in new and interesting ways. You can see a lot of problems on the fringes of thi...
47
views
1
comment
AI music that contains 'no human authorship' won't be eligible for a Grammy Award - Music Busin...
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
AI music that contains 'no human authorship' won't be eligible for a Grammy Award - Music Business Worldwide
The Recording Academy has updated its rules to include a section on generative artificial intelligence (AI) for the 2024 Grammy Awards. “Only human creators are eligible to be submitted for consideration for, nominated for, or win a Grammy Award,” according to the updated Grammy Awards Rules and Guidelines. The updated rules further stipulated that a work that contains no human authorship is not eligible in any categories for the Awards. However, the Recording Academy noted that a work that features elements of AI material is eligible in applicable categories, although the human authorship component of the work submitted “must be meaningful.” According to the new guidelines, “The Grammy Award recognizes creative excellence. Only human creators are eligible to be submitted for consideration for, nominated for, or win a Grammy Award. They continue: “A work that contains no human authorship is not eligible in any Categories. A work that features elements of A.I. material (i.e., material generated by the use of artificial intelligence technology) is eligible in applicable Categories; however: (1) the human authorship component of the work submitted must be meaningful and more than de minimis; (2) such human authorship component must be relevant to the Category in which such work is entered (e.g., if the work is submitted in a songwriting Category, there must be meaningful and more than de minimis human authorship in respect of the music and/or lyrics; if the work is submitted in a performance Category, there must be meaningful and more than de minimis human authorship in respect of the performance); and (3) the author(s) of any A.I. material incorporated into the work are not eligible to be nominees or Grammy recipients insofar as their contribution to the portion of the work that consists of such A.I material is concerned. De minimis is defined as lacking significance or importance; so minor as to merit disregard.” Recording Academy CEO Harvey Mason Jr. recently discussed these changes in an interview with Grammy.com , underscoring the importance of accounting for AI’s influence on the music industry. Mason stressed the need to adapt and establish standards to accommodate AI’s impact on the artistic community and society at large. “The idea of being caught off guard by [artificial intelligence] and not addressing it is unacceptable.” Harvey Mason Jr., Recording academy “The idea of being caught off guard by it and not addressing it is unacceptable. Not knowing exactly what it’s going to mean or do in the next months and years gives me some pause and some concerns. But I absolutely acknowledge that it’s going to be a part of the music industry and the artistic community and society at large,” Mason said. The updated Grammy Award rules follow recent developments in the use of AI in music creation and comes amid an ongoing debate around the ethics surrounding AI-generated tracks, for example, one that went viral earlier this year featuring “fake” vocals of artists Drake and The Weeknd. The proliferation of this technology recently prompted the European Union to ask tech giants like Google, Meta, TikTok and Microsoft to start labeling AI-generated content on its services. Back in March, the Recording Academy was one of the signatories of an open letter calling on all AI labs around the world “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. Added Mason in the interview published on Grammy.com: “So, we have to start planning around that and thinking about what that means for us. How can we adapt to accommodate? How can we set guardrails and standards? There are a lot of things that need to be addressed around AI as it relates to our industry,” Mason said the Recording Academy will require a songwriting-based category for the Grammy Awards to be mostly written by a human. “Same goes for performance categories – only a human performer can be considered for a Grammy,” he added. “If AI did the songwriting or created the music, that’s a different consideration. But the Grammy will go to human creators at this point.” Other notable updates for the upcoming 66th Grammy Awards include the consolidation of the Award fields from 26 to 11 “to make sure voters were voting i...
144
views
Inside China's underground market for high-end Nvidia AI chips - Reuters
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Inside China's underground market for high-end Nvidia AI chips - Reuters
HONG KONG/SHENZHEN, China, June 20 (Reuters) - Psst! Where can a Chinese buyer purchase top-end Nvidia (NVDA.O) AI chips in the wake of U.S. sanctions? Visiting the famed Huaqiangbei electronics area in the southern Chinese city of Shenzhen is a good bet - in particular, the SEG Plaza skyscraper whose first 10 floors are crammed with shops selling everything from camera parts to drones. The chips are not advertised but asking discreetly works. They don't come cheap. Two vendors there, who spoke with Reuters in person on condition of anonymity, said they could provide small numbers of A100 artificial intelligence chips made by the U.S. chip designer, pricing them at $20,000 a piece - double the usual price. While buying or selling high-end U.S. chips is not illegal in China, U.S. export restrictions have created a de facto underground market with vendors keen not to draw scrutiny from either U.S. or Chinese authorities. President Joe Biden's administration in September ordered Nvidia to stop exporting its two most advanced chips - the A100 and the recently developed H100 - to mainland China and Hong Kong, part of efforts to stymie Chinese AI and supercomputing development amid intensifying political and trade tensions. That was then followed up with an array of semiconductor-related export controls. But, as AI booms across the globe after the runaway success of OpenAI's ChatGPT, demand for high-end chips has rocketed, particularly for Nvidia's microprocessors which are widely regarded as the best at handling machine-learning tasks. "We are talking with two vendors now to get some," said Ivan Lau, co-founder of Hong Kong's Pantheon Lab who is trying to purchase 2-4 new A100 cards to run the startup's latest AI models. Those vendors, who bought the chips outside the U.S., were quoting HK$150,000 ($19,150) per card, he said, adding: "They told us straight up that there will be no warranty or support." Reuters spoke with 10 vendors in Hong Kong and mainland China who described being able to easily procure small numbers of A100s. Their information highlighted both intense demand in China for the chips and the relative ease with which Washington's sanctions can be circumvented for small-batch transactions. Reuters was not able to estimate overall volumes of Nvidia A100 and H100 chips flowing into China or learn to what extent the transactions taking place go towards satisfying demand. Buyers are typically app developers, startups, researchers or gamers, the vendors said, declining to be identified because the imports contravene U.S. trade restrictions. One vendor said buyers also included Chinese local authorities. Nvidia said in a statement to Reuters it did not allow exports of the A100 or H100 to China, instead providing reduced-capability substitutes that comply with U.S. law. "If we receive information that a customer is breaching their agreement with us and exporting restricted products in violation of the law, we would take immediate and appropriate action," the statement said. The U.S. Department of Commerce, China's State Council Information Office and China's industry ministry did not respond to requests for comment. Nvidia said in September that $400 million in sales during its third quarter could be lost if Chinese firms decided not to buy alternative Nvidia products. Its new China-tailored slower variants - the A800 and H800 - developed to cushion that impact are now being bought by large Chinese tech firms such as Tencent Holdings (0700.HK) and Alibaba (9988.HK), which have deep pockets to purchase huge quantities. OFFERINGS ONLINE The Chinese vendors said they procured the chips primarily in two ways: snatching up excess stock that finds its way to the market after Nvidia ships large quantities to big U.S. firms, or importing through companies locally incorporated in places such as India, Taiwan and Singapore. This means the quantities they can secure are small, far from what's needed to build a sophisticated AI large language model from scratch. A model similar to OpenAI's GPT would require more than 30,000 Nvidia A100 cards, according to research firm TrendForce. But a handful can run complex machine-learning tasks and enhance existing AI models. According to an electronics procurement website that listed some 40 sellers of A1...
98
views
Nvidia Is About to Change Gaming Forever With AI - The Motley Fool
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Nvidia Is About to Change Gaming Forever With AI - The Motley Fool
Nvidia's (NVDA 0.09%) explosive growth in its data center segment recently overtook gaming as the company's largest source of revenue. But gaming is still a large business, generating a third of Nvidia's total revenue in the most recent quarter. With over 1.7 billion PC gamers worldwide and growing, gaming is still an important growth driver for Nvidia that investors shouldn't forget about, especially as it continues to introduce advancements in gaming with artificial intelligence (AI).
The launch of Nvidia's real-time ray tracing technology in 2018 kicked off one of its strongest GPU upgrade cycles, but the era of new AI solutions means the best is yet to come. Nvidia just announced a new AI-based solution for game developers called Nvidia Avatar Cloud Engine (ACE) that stands to change how games are made and cement its enormous lead over Advanced Micro Devices.
ACE is an exciting technology for gamers
Nvidia revolutionized gaming with the 2018 launch of the GeForce RTX gaming GPUs. The ray tracing technology in these graphics cards used AI to render more realistic scenes, but Nvidia is now using AI to add greater realism to in-game characters.
Game characters using Nvidia's ACE use natural language models -- the processing technology behind OpenAI's ChatGPT. This will make the dialogue players see from non-playable characters, or NPCs, more dynamic and less scripted. NPCs will be able to respond intelligently to the player's dialogue choices, consistent with the NPC's narrative backstory.
Video games have been progressing toward this more immersive gaming experience for a long time, but Nvidia ACE could take video game realism to another level. NVIDIA ACE for games at Computex 2023. Image source: Nvidia. What does this mean for Nvidia's business
It's difficult to quantify what ACE will mean for revenue. Between fiscal 2018 and fiscal 2022, Nvidia's gaming revenue more than doubled to $12.4 billion.
However, ACE is a technology built inside of Nvidia's Omniverse development platform, which is used for the creation of 3D graphics and applications. Specifically, ACE is a foundry service, or cloud engine, running on Omniverse that developers can tap into when creating digital avatars or characters for video games.
ACE might impact the revenue growth of Nvidia's professional visualization segment, which includes Omniverse. The pro visualization business reported just $226 million in revenue last quarter. But taking all of the revenue Nvidia generates from graphics, including Omniverse and gaming chips, it totaled $11.9 billion in fiscal 2023, or 44% of the entire business. Nvidia to competitors: Catch me if you can
ACE is another example of why Nvidia looks unstoppable right now. It already has a massive lead over AMD in the discrete GPU market, with an 84% market share. It's also estimated that Nvidia's share of the AI chip market is over 80%, although AMD could chip away at that lead as it ramps up investment in AI computing. It was just reported that Amazonis considering using AMD's AI chips for its cloud servers.
Ultimately, what makes Nvidia so dominant is the large installed base of 4 million developers that use its CUDA programming model and software libraries to build GPU applications.Nvidia knows what graphics artists and software developers need from computing hardware, and it delivers.
Nvidia's stock is expensive, trading at a forward price-to-earnings (P/E) ratio of 54. I wouldn't buy the stock just for the gaming opportunity alone, but a near-term recovery in graphics-related revenue is a catalyst for accelerating revenue growth on top of the strong demand coming for AI chips. While graphics revenue fell 25% last year, compute and networking, including data center chips, grew 36%.
If both segments get rolling soon, that could support Nvidia's high P/E. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. John Ballard has positions in Advanced Micro Devices and Amazon.com. The Motley Fool has positions in and recommends Advanced Micro Devices, Amazon.com, and Nvidia. The Motley Fool has a disclosure policy.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
321
views
1
comment
Tech entrepreneur Hogarth will head UK's AI taskforce - Reuters UK
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
Tech entrepreneur Hogarth will head UK's AI taskforce - Reuters UK
LONDON, June 18 (Reuters) - The British government said on Sunday that tech entrepreneur Ian Hogarth would head its new taskforce to look at the safety risks posed by artificial intelligence. Last week, Prime Minister Rishi Sunak pitched London as a potential global home of AI regulation, with Britain set to host a summit on its risks later this year. Hogarth, who co-founded concert discovery service SongKick, which was sold to Warner Music in 2017, has been chosen to lead the UK's AI Foundation Model Taskforce with the brief of taking forward cutting-edge safety research in the run up to that summit. "The Prime Minister has laid out a bold vision for the UK to supercharge the field of AI safety, one that until now has been under-resourced even as AI capabilities have accelerated," Hogarth said in a statement. "I’m honoured to have the chance to chair such an important mission in the lead up to the first global summit on AI Safety in the UK." In April, the government committed an initial 100 million pounds ($128.17 million) towards setting up the taskforce, which will look at the risks around AI and carry out research on safety. It will also help with the development of international protections, such as shared safety and security standards and infrastructure. "The more artificial intelligence progresses, the greater the opportunities are to grow our economy and deliver better public services," Sunak said in the statement. "But with such potential to transform our future, we owe it to our children and our grandchildren to ensure AI develops safely and responsibly." ($1 = 0.7802 pounds) Reporting by Michael Holden; Editing by Sharon Singleton Our Standards: The Thomson Reuters Trust Principles.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
7
views
We should all be worried about AI infiltrating crowdsourced work - TechCrunch
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
We should all be worried about AI infiltrating crowdsourced work - TechCrunch
A new paper from researchers at Swiss university EPFL suggests that between 33% and 46% of distributed crowd workers on Amazon’s Mechanical Turk service appear to have “cheated” when performing a particular task assigned to them, as they used tools such as ChatGPT to do some of the work. If that practice is widespread, it may turn out to be a pretty serious issue. Amazon’s Mechanical Turk has long been a refuge for frustrated developers who want to get work done by humans. In a nutshell, it’s an application programming interface (API) that feeds tasks to humans, who do them and then return the results. These tasks are usually the kind that you wish computers would be better at. Per Amazon, an example of such tasks would be: “Drawing bounding boxes to build high-quality datasets for computer vision models, where the task might be too ambiguous for a purely mechanical solution and too vast for even a large team of human experts.”
Data scientists treat datasets differently according to their origin — if they’re generated by people or a large language model (LLM). However, the problem here with Mechanical Turk is worse than it sounds: AI is now available cheaply enough that product managers who choose to use Mechanical Turk over a machine-generated solution are relying on humans being better at something than robots. Poisoning that well of data could have serious repercussions.
“Distinguishing LLMs from human-generated text is difficult for both machine learning models and humans alike,” the researchers said. The researchers therefore created a methodology for figuring out whether text-based content was created by a human or a machine.
The test involved asking crowdsourced workers to condense research abstracts from the New England Journal of Medicine into 100-word summaries. It is worth noting that this is precisely the kind of task that generative AI technologies such as ChatGPT are good at.
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
14
views
How generative AI is creating new classes of security threats - VentureBeat
🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame
How generative AI is creating new classes of security threats - VentureBeat
June 18, 2023 11:10 AM Image Credit: Created with Midjourney Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world. It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition. Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis. Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it. Event Transform 2023 Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls. Register Now Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues. Asymmetry in the attacker-defender dynamic Attackers will likely adopt and engineer AI faster than defenders, giving them a clear advantage. They will be able to launch sophisticated attacks powered by AI/ML at an incredible scale at low cost. Social engineering attacks will be first to benefit from synthetic text, voice and images. Many of these attacks that require some manual effort — like phishing attempts that impersonate IRS or real estate agents prompting victims to wire money — will become automated. Attackers will be able to use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to rapidly generate polymorphic code for malware that evades detection from signature-based systems. One of AI’s pioneers, Geoffrey Hinton, made the news recently as he told the New York Times he regrets what he helped build because “It is hard to see how you can prevent the bad actors from using it for bad things.” Security and AI: Further erosion of social trust We’ve seen how quickly misinformation can spread thanks to social media. A University of Chicago Pearson Institute/AP-NORC Poll shows 91% of adults across the political spectrum believe misinformation is a problem and nearly half are worried they’ve spread it. Put a machine behind it, and social trust can erode cheaper and faster. The current AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and when they don’t know how to answer, they make things up. This is often referred to as “hallucinating,” an unintended consequence of this emerging technology. When we search for legitimate answers, a lack of accuracy is a huge problem. This will betray human trust and create dramatic mistakes that have dramatic consequences. A mayor in Australia, for instance, says he may sue OpenAI for defamation after ChatGPT wrongly identified him as being jailed for bribery when he was actually the whistleblower in a case. New attacks Over the next decade, we will see a new generation of attacks on AI/ML systems. Attackers will influence the classifiers that systems use to bias models and control outputs. They’ll create malicious models that will be indistinguishable from the real models, which could cause real harm depending on how they’re used. Prompt injection attacks will become more common, too. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to reveal its internal directives. Attackers will kick off an arms race with adversarial ML tools that trick AI systems in various ways, poison the data they use or extract sensiti...
62
views