Microsoft unveils next-gen AI solutions to boost frontline productivity amid labor challenges - VentureBeat
August 9, 2023 10:10 AM

Microsoft today unveiled a suite of tools and integrations designed to empower frontline workers across the globe. Central to this release is an innovative Copilot offering, which harnesses the capabilities of generative AI to enhance the efficiency and effectiveness of service professionals on the frontline. The tech giant underscores the considerable size of this workforce, estimating its global count at 2.7 billion, more than twice the number of desk-based workers. These individuals perform diverse roles, from customer-facing associates to dedicated healthcare providers and operational stalwarts who handle on-site tasks. Microsoft says that over 60% of these workers grapple with monotonous tasks that detract from more meaningful endeavors. Confronted by mounting challenges stemming from labor shortages, skill gaps and supply chain disruptions, frontline workers have been tackling increasingly complex demands. To address these concerns, Microsoft aims to equip frontline workers with the necessary technological support and resources.

An AI-driven frontline Copilot

Key among the new tools is the Copilot integrated into Dynamics 365 Field Service to assist frontline service managers and technicians. Microsoft says the generative AI-driven tool optimizes workflow by automating repetitive tasks — creating work orders, for example. Other integrations within Microsoft 365 further extend these capabilities: Microsoft said that service managers will gain the ability to generate, schedule and oversee work orders directly within their workflow in Microsoft Outlook and Microsoft Teams.
Simultaneously, frontline technicians will be able to access vital work order information through Teams. The company also unveiled a new Dynamics 365 Field Service “mobile experience” enabling frontline technicians to cut down on the number of taps needed for key tasks. This includes Dynamics 365 Guides integration, to provide technicians with step-by-step instructions for tasks, and access to Dynamics 365 Remote Assist, to problem-solve with remote experts in real time using 3D spatial annotations.

“We believe investment in technology for frontline workers will drive positive outcomes for employees, customers and their businesses. Technology can relieve pressures on the frontline that are causing burnout as well as help organizations drive engagement and a sense of belonging that can help increase retention,” Charles Lamanna, CVP of business applications and platform at Microsoft, told VentureBeat. “Today’s announcements,” he added, “are the first steps we are taking to infuse next-gen AI and data with productivity tools like Dynamics 365 Field Service to help address the challenge of repetitive tasks and burnout. The new AI-powered Copilots use generative AI to automate the repetitive and taxing digital overhead that burdens frontline workers.”

Aiding frontline productivity with generative AI

Lamanna contends that AI and process automation can alleviate the burden of essential yet exhausting procedures for frontline workers, enabling them to make swifter, well-informed choices. He says the new Copilot within Dynamics 365 Field Service allows frontline managers, who receive service inquiries via email, to harness cutting-edge AI for streamlining work order creation directly from within Outlook. Copilot will auto-populate pertinent data, including customer escalation summaries, into draft work orders within their workflow. Once saved, these work orders can be synchronized with Dynamics 365 Field Service.
“With updates coming soon, Copilot will streamline technician scheduling by offering data-driven recommendations based on travel time, availability, skillset and other factors as well as accelerate responses to customer messages by summarizing key details and next steps in email drafts,” explained Lamanna. “Copilot will also become available to assist front...
Zoom's terms of service change sparks worries over AI uses. Here's what to know. - CBS News
When Zoom announced an update to its terms of service earlier this week that appeared to provide access to users' data for A.I. training, privacy advocates and customers rang the alarm. "Zoom's [terms of service] now demand that they use A.I. to train on audio, face and facial movements, even private conversations without recourse, unconditionally and irrevocably," scientist Bryan Jones said in a tweet. "Opting out is not an option." The backlash prompted Zoom to clarify its service terms in a blog post on Monday, in which it promised not to "use audio, video, or chat content for training our models without customer consent." However, privacy experts warn that while that promise is now codified in Zoom's user agreement, it doesn't prevent the company from using customer data to train A.I. As a result, many users are confused about how much of their data is being used and how to protect their privacy during digital meet-ups. Zoom did not immediately respond to a request for comment.

Can Zoom access users' video calls to train A.I.?

Zoom can use customers' video calls and chat transcripts to train A.I., as long as it has users' consent. However, if a meeting host agrees to share data with Zoom, everybody participating in the meeting must share their data during that call. This means participants who want their information to remain private must leave the Zoom call if their host consented to data-sharing. To be sure, this could be a problem for workers whose employers require them to attend Zoom sessions. "If the administrator consents and it's your boss at your work who requires you to use Zoom, how is that really consent?" Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation, told the Associated Press.

What kind of data can Zoom collect?
There are two types of data Zoom can collect: "service-generated data," such as user locations and the features customers use to interact with the service, and "customer content," the data created by users themselves, such as audio or chat transcripts. In its blog post, Zoom said the company considers service-generated data "to be our data," and experts confirm this language would allow the company to use such data for A.I. training without obtaining additional consent. Service-generated data may be used "for the purpose of … machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models)," according to Zoom's terms of service. As for customer content, Zoom may also use the data "for the purpose" of machine learning or A.I., the same agreement shows.

What is Zoom doing with A.I.?

In its blog post, Zoom said it will use customer data to train artificial intelligence for AI-powered features, such as automated meeting summaries for customers. However, it's unclear if the company is working on other consumer-facing A.I. products or internal projects that will tap into customer data. Zoom's terms of service agreement is "super broad," Caitlin Seeley George, campaigns and managing director at Fight for the Future, told CBS MoneyWatch. In that way, the company could use certain types of customer data for any number of A.I. projects, she said. "[Zoom's] updated terms of service are very broad and could allow them to do more than summarize meetings, even if they aren't doing it yet," George said.

How do I know if a meeting organizer is sharing data during our call?
If a meeting organizer decides to use a feature that requires user-generated content like call or chat transcripts to be shared with Zoom, the meeting's participants will receive an alert that an A.I. feature has been enabled and that their data could be shared for machine learning, the AP reported. The app will then prompt participants to either proceed with the meeting or leave.

Zoom alternatives

Privacy advocates like George recommend steering clear of Zoom until the company...
China's internet giants order $5bn of Nvidia chips to power AI ambitions - Financial Times
3 ways your college professor knows you're cheating with AI - CNN
As artificial intelligence becomes increasingly skilled at writing high-level essays, these college professors say AI cheating is something they are seeing more of. They told CNN about some easy-to-spot dead giveaways.
AI algorithm discovers 'potentially hazardous' asteroid 600 feet wide in a 1st for astronomy - Space.com
AI has just found its first potentially Earth-threatening space rock.
A new artificial intelligence algorithm programmed to hunt for potentially dangerous near-Earth asteroids has discovered its first space rock. The roughly 600-foot-wide (180 meters) asteroid has received the designation 2022 SF289 and is expected to approach Earth to within 140,000 miles (225,000 kilometers). That distance is shorter than that between our planet and the moon, which are on average 238,855 miles (384,400 km) apart. This is close enough to define the rock as a Potentially Hazardous Asteroid (PHA), but that doesn't mean it will impact Earth in the foreseeable future. The HelioLinc3D program, which found the asteroid, was developed to help the Vera C. Rubin Observatory, currently under construction in northern Chile, conduct its upcoming 10-year survey of the night sky by searching for space rocks in Earth's near vicinity. As such, the algorithm could be vital in giving scientists a heads-up about space rocks on a collision course with Earth. "By demonstrating the real-world effectiveness of the software that Rubin will use to look for thousands of yet-unknown potentially hazardous asteroids, the discovery of 2022 SF289 makes us all safer," Vera C. Rubin researcher Ari Heinze said in a statement.

Related: Super-close supernova captivates record number of citizen scientists

Tens of millions of space rocks roam the solar system, ranging from asteroids a few feet across to dwarf planets around the size of the moon. These space rocks are the remains of material that formed the planets around 4.5 billion years ago. While most of these objects are located far from Earth, with the majority of asteroids residing in the main asteroid belt between Mars and Jupiter, some have orbits that bring them close to Earth. Sometimes worryingly close.
Space rocks that come close to Earth are defined as near-Earth objects (NEOs), and asteroids that venture to within around 5 million miles of the planet get the Potentially Hazardous Asteroid (PHA) status. This doesn't mean that they will impact the planet, though. Just as is the case with 2022 SF289, no currently known PHA poses an impact risk for at least the next 100 years. Astronomers search for potentially hazardous asteroids and monitor their orbits just to make sure they are not heading for a collision with the planet. This new PHA was found when the asteroid-hunting algorithm was paired with data from the ATLAS survey in Hawaii, as a test of its efficiency before Rubin is completed. The discovery of 2022 SF289 has shown that HelioLinc3D can spot asteroids with fewer observations than current space-rock-hunting techniques allow.

Rubin is ready to join the potentially hazardous asteroid hunt

Searching for potentially hazardous asteroids involves taking images of parts of the sky at least four times a night. When astronomers spot a moving point of light traveling in an unambiguous straight line across the series of images, they can be quite certain they have found an asteroid. Further observations are then made to better constrain the orbit of these space rocks around the sun. The new algorithm, however, can make a detection from just two images, speeding up the whole process. Around 2,350 PHAs have been discovered thus far, and though none poses a threat of hitting Earth in the near future, astronomers aren't quite ready to relax just yet, as they know that many more potentially dangerous space rocks are out there yet to be uncovered. It is estimated that the Vera Rubin Observatory could uncover as many as 3,000 hitherto undiscovered potentially hazardous asteroids. Rubin's 27-foot-wide (8.4 meters) mirror and massive 3,200-megapixel camera will revisit locations in the night sky twice per night rather than the four times a night conducted by current telescopes. Hence the creation of HelioLinc3D, a code that can find asteroids in Rubin's dataset even with fewer available observations. But the algorithm's creators...
Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT' - Krebs on Security
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.” The large language models (LLMs) made by ChatGPT parent OpenAI, Google and Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.
WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 euros.
“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s core developer and frontman “Last” promoting the service on HackForums. Image: SlashNext.

In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.
“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

SlashNext asked WormGPT to compose this BEC phishing email. Image: SlashNext.

A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data-stealing trojan and keystroke logger that he sold there for many months.
“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.
Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.
Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.
That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.
AN OPEN BOOK
Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.
“You can ask me anything,” Morais said. “I’m an open book.”
Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.
“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of lear...
Google and Universal Music negotiate deal over AI 'deepfakes' - Financial Times
Disney creates task force to explore AI and cut costs - sources - Reuters
Aug 8 (Reuters) - Walt Disney (DIS.N) has created a task force to study artificial intelligence and how it can be applied across the entertainment conglomerate, even as Hollywood writers and actors battle to limit the industry's exploitation of the technology. Launched earlier this year, before the Hollywood writers' strike, the group is looking to develop AI applications in-house as well as form partnerships with startups, three sources told Reuters. As evidence of its interest, Disney has 11 current job openings seeking candidates with expertise in artificial intelligence or machine learning. The positions touch virtually every corner of the company - from Walt Disney Studios to the company's theme parks and engineering group, Walt Disney Imagineering, to Disney-branded television and the advertising team, which is looking to build a "next-generation" AI-powered ad system, according to the job ad descriptions. A Disney spokesperson declined to comment. One of the sources, an internal advocate who spoke on condition of anonymity because of the sensitivity of the subject, said legacy media companies like Disney must either figure out AI or risk obsolescence. This supporter sees AI as one tool to help control the soaring costs of movie and television production, which can swell to $300 million for a major film release like "Indiana Jones and the Dial of Destiny" or "The Little Mermaid." Such budgets require equally massive box office returns simply to break even. Cost savings would be realized over time, the person said. For its parks business, AI could enhance customer support or create novel interactions, said the second source as well as a former Disney Imagineer, who declined to be identified because he was not authorized to speak publicly. The former Imagineer pointed to Project Kiwi, which used machine-learning techniques to create Baby Groot, a small, free-roaming robot that mimics the "Guardians of the Galaxy" character's movements and personality. 
Machine learning, the branch of AI that gives computers the ability to learn without being explicitly programmed, informs its vision systems, so it is able to recognize and navigate objects in its environment. Someday, Baby Groot will interact with guests, the former Imagineer said. AI has become a powder keg in Hollywood, where writers and actors view it as an existential threat to jobs. It is a central issue in contract negotiations with the Screen Actors Guild and the Writers Guild of America, both of which are on strike. Disney has been careful about how it discusses AI in public. The visual effects supervisors who worked on the latest "Indiana Jones" movie emphasized the painstaking labors of more than 100 artists who spent three years seeking to "de-age" Harrison Ford so that the octogenarian actor could appear as his younger self in the early minutes of the film.

'STEAMBOAT WILLIE'

Disney has invested in technological innovation since its earliest days. In 1928 it debuted "Steamboat Willie", the first cartoon to feature a synchronized soundtrack. It now holds more than 4,000 patents with applications in theme parks, films and merchandise, according to a search of U.S. Patent and Trademark Office records. Bob Iger, now in his second stint as Disney's chief executive, made the embrace of technology one of his three priorities when he was first named CEO in 2005. Three years later, the company announced a major research and development initiative with top technology universities around the world, funding labs at the Swiss Federal Institute of Technology in Zurich and Carnegie Mellon University in Pittsburgh, Pennsylvania. It closed the Pittsburgh lab in 2018. Disney's U.S.
research group has developed a mixed-reality technology called "Magic Bench" that allows people to share a space with a virtual character on screen, without need for special glasses. In Switzerland, Disney Research has been exploring AI, machine learning and visual computing, according to its website. It has spent the last decade creating "digital humans" that it describes as "indistinguishable" from their c...
Microsoft's AI Red Team Has Already Made the Case for Itself - WIRED
For most people, the idea of using artificial intelligence tools in daily life—or even just messing around with them—has only become mainstream in recent months, with new releases of generative AI tools from a slew of big tech companies and startups, like OpenAI's ChatGPT and Google's Bard. But behind the scenes, the technology has been proliferating for years, along with questions about how best to evaluate and secure these new AI systems. On Monday, Microsoft is revealing details about the team within the company that since 2018 has been tasked with figuring out how to attack AI platforms to reveal their weaknesses. In the five years since its formation, Microsoft's AI red team has grown from what was essentially an experiment into a full interdisciplinary team of machine learning experts, cybersecurity researchers, and even social engineers. The group works to communicate its findings within Microsoft and across the tech industry using the traditional parlance of digital security, so the ideas will be accessible rather than requiring specialized AI knowledge that many people and organizations don't yet have. But in truth, the team has concluded that AI security has important conceptual differences from traditional digital defense, which change how the AI red team must approach its work. “When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’” says Ram Shankar Siva Kumar, the founder of Microsoft's AI red team. “But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming.
Not just looking at failures of security but also responsible AI failures.” Shankar Siva Kumar says it took time to bring out this distinction and make the case that the AI red team's mission would really have this dual focus. A lot of the early work related to releasing more traditional security tools like the 2020 Adversarial Machine Learning Threat Matrix, a collaboration between Microsoft, the nonprofit R&D group MITRE, and other researchers. That year, the group also released open-source automation tools for AI security testing, known as Microsoft Counterfit. And in 2021, the red team published an additional AI security risk-assessment framework. Over time, though, the AI red team has been able to evolve and expand as the urgency of addressing machine learning flaws and failures has become more apparent. In one early operation, the red team assessed a Microsoft cloud deployment service that had a machine learning component. The team devised a way to launch a denial-of-service attack on other users of the cloud service by exploiting a flaw that allowed it to craft malicious requests to abuse the machine learning components and strategically create virtual machines, the emulated computer systems used in the cloud. By carefully placing virtual machines in key positions, the red team could launch “noisy neighbor” attacks on other cloud users, where the activity of one customer negatively impacts the performance of another customer.
Will AI be an economic blessing or curse? History offers clues - Reuters
Technological leaps have patchy economic records
AI creates fears about job destruction, workers' rights
Competition policy, access to training is key

Aug 7 (Reuters) - If medieval advances in the plough didn't lift Europe's peasants out of poverty, it was largely because their rulers took the wealth generated by the new gains in output and used it to build cathedrals instead. Economists say something similar could happen with artificial intelligence (AI) if it enters our lives in such a way that the touted benefits are enjoyed by the few rather than the many.

"AI has got a lot of potential - but potential to go either way," argues Simon Johnson, professor of global economics and management at MIT Sloan School of Management. "We are at a fork in the road."

Backers of AI predict a productivity leap that will generate wealth and improve living standards. Consultancy McKinsey in June estimated it could add between $14 trillion and $22 trillion of value annually - that upper figure being roughly the current size of the U.S. economy. Some techno-optimists go further, suggesting that, along with robots, AI is the technology that will finally free humanity from humdrum tasks and launch us into lives of more creativity and leisure. Yet worries abound about its impact on livelihoods, including its potential to destroy jobs in all kinds of sectors - witness the strike in July by Hollywood actors who fear being made redundant by their AI-generated doubles.

WHAT PRODUCTIVITY GAIN?

Such concerns are not unfounded. History shows the economic impact of technological advances is generally uncertain, unequal and sometimes outright malign. A book published this year by Johnson and fellow MIT economist Daron Acemoglu surveyed a thousand years of technology - from the plough through to automated self-checkout kiosks - in terms of their success in creating jobs and spreading wealth. 
While the spinning jenny was key to 18th century automation of the textiles industry, they found it led to longer working hours in harsher conditions. Mechanical cotton gins facilitated the 19th century expansion of slavery in the American South. The track record of the Internet is complex: it has created many new job roles even as much of the wealth generated has gone to a handful of billionaires. The productivity gains it was once lauded for have slowed across many economies.

A June research note by French bank Natixis suggested that was because even a technology as pervasive as the Internet left many sectors untouched, while many of the jobs it created were low-skilled - think of the delivery chain for online purchases.

[Photo: High school student Richard Erkhov is reflected on a screen of "Alnstein", a robot powered with ChatGPT, at the Pascal school in Nicosia, Cyprus, March 30, 2023. REUTERS/Yiannis Kourtoglou/File Photo]

"Conclusion: We should be cautious when estimating the effects of artificial intelligence on labour productivity," Natixis warned.

In a globalised economy, there are other reasons to doubt whether the potential gains of AI will be felt evenly. On the one hand, there is the risk of a "race to the bottom" as governments compete for AI investment with increasingly lax regulation. On the other, the barriers to luring that investment might be so high as to leave many poorer countries behind.

"You have to have the right infrastructure – huge computing capacity," said Stefano Scarpetta, Director of Employment, Labour and Social Affairs at the Paris-based Organisation for Economic Cooperation and Development (OECD). "We have the G7 Hiroshima Process, we need to go further to the G20 and UN," he said, advocating the expansion of an accord reached at a May summit of Group of Seven (G7) powers to jointly seek to understand the opportunities and challenges of generative AI.

WORKER POWER

Innovation, it turns out, is the easy bit. 
Harder is making it work for everyone - which is where politics comes in. For MIT's Johnson, the arrival of railways in 19th century England at a moment of rapid democratic reform allowed those advances to be enjoyed by wider society, be it through faster transport of fresh food or a first taste of leisure travel. Similar democratic gains elsewhere helped millions enjoy the fruits of technological advance well into the 20th century. But Johns...
Meta disbands protein-folding team in shift towards commercial AI - Financial Times
Authors are losing their patience with AI, part 349235 - TechCrunch
On Monday morning, numerous writers woke up to learn that their books had been uploaded and scanned into a massive dataset without their consent. A project of cloud word processor Shaxpir, Prosecraft compiled over 27,000 books, comparing, ranking and analyzing them based on the “vividness” of their language. Many authors — including Young Adult powerhouse Maureen Johnson and “Little Fires Everywhere” author Celeste Ng — spoke out against Prosecraft for training a model on their books without consent. Even books published less than a month ago had already been uploaded.
After a day full of righteous online backlash, Prosecraft creator Benji Smith took down the website, which had existed since 2017.
“I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things,” Smith wrote. “But in the meantime, ‘AI’ became a thing. And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.” Smith’s Prosecraft was not a generative AI tool, but authors worried it could become one, since he had amassed a dataset of a quarter billion words from published books, which he found by crawling the internet.
Prosecraft would show two paragraphs from a book, one that was “most passive” and one that was “most vivid.” It then placed the books into percentile rankings based on how vivid, how long or how passive they were.
“If you’re a writer as a career it’s maddening, in part because style is not the same as writing a fucking whitepaper for a business that needs to be in active voice or whatever,” author Ilana Masad said. “Style is style!”
Smith did not respond to multiple requests for comment, but he elaborated on his intentions in his blog post.
“Since I was only publishing summary statistics, and small snippets from the text of those books, I believed I was honoring the spirit of the Fair Use doctrine, which doesn’t require the consent of the original author,” Smith wrote. Some authors noted that the excerpts of their books on Prosecraft included major spoilers, causing further frustration. Though Smith apologized, authors remain exasperated. For artists and writers, the recent proliferation of AI tools has created a deeply frustrating game of whack-a-mole. As soon as they opt out of one database, they find that their work has been used to train another AI model, and so on. “It’s pretty much the norm, from what I can tell, for these sites and projects to do whatever they’re doing first and then hope that no one notices and then disappear or get defensive when they inevitably do,” Masad said. Generative AI and the technology behind self-publishing have created a perfect storm for scammy activities. Amazon has been flooded with low-quality, AI-generated travel guides, and even AI-generated children’s books. But tools like ChatGPT are basically trained on the sum total of the internet, so this means that real travel writers or children’s books authors could be getting inadvertently plagiarized.
Author Jane Friedman wrote in a recent blog post — titled “I’d Rather See My Books Get Pirated Than This” — that she is being impersonated on Amazon, where someone is selling books under her name that appear to be written with an AI.
Though Friedman was successful in getting these fake books removed from her Goodreads page, she says that Amazon won’t remove the books for sale unless she has a trademark for her name.
Amazon did not provide a comment before publication.
“I don’t think any writer is seriously convinced that AI is going to ruin books because like, well, that’s not how literature works, and everything I’ve seen ChatGPT write as a ‘story’ is just really fucking boring with no voice or real craft or style,” Masad said.
But she worries that publishers will be convinced otherwise, and possibly replace marketing and publicity teams with AI-generated promotional content.
“It feels really bad,” she said.
Want to Get Richer? 2 AI Stocks to Buy Before They Skyrocket - The Motley Fool
The S&P 500 index had a strong first half of 2023 and has gained over 17.5% so far this year -- mainly driven by an artificial intelligence-fueled stock rally and easing inflation. The U.S. economy has also proved resilient and posted an unexpected 2.4% annualized growth rate in the second quarter, driven by a 7.7% year-over-year jump in business investment (excluding housing).
Against this backdrop, several analysts believe (with varying degrees of caution and optimism) that Wall Street may now be in the early stages of a bull market, especially since the S&P 500 has rallied by more than 20% from its October 2022 low.
Investors looking to capitalize on the momentum of this bull rally should consider opening small positions in two AI stocks: Palantir (PLTR -2.73%) and Alphabet (GOOG -0.18%) (GOOGL -0.27%). Here's why.
Palantir
Data mining and analytics specialist Palantir has been a major beneficiary of Wall Street's artificial intelligence (AI) rally. But although the company's shares have gained nearly 209% so far this year, they still have significant growth potential. Veteran Wedbush analyst Dan Ives seems to agree with this premise; he recently initiated coverage for the company with an outperform (buy) rating and a price target of $25. While many companies are now racing to come up with AI-based solutions, Palantir has been offering AI-based services to its clients for the past two decades. Not resting on its laurels, the company launched a new AI platform (AIP) in April. Palantir AIP enables clients to deploy large language models on the company's internal network and use proprietary data to personalize recommendations, actions, and workflows. Thanks to its customized ChatGPT-like services, AIP has been seeing unprecedented demand from military clients.
While AIP could prove to be a game-changer for Palantir in the long run, the company has also impressed short-term investors with its focus on profitability. The company recorded a GAAP (generally accepted accounting principles) profit for the second consecutive quarter in the first quarter, and expects to remain GAAP profitable in all quarters of 2023. The company is also free-cash-flow positive and had $2.3 billion cash and no debt on its balance sheet at the end of the first quarter.
There is no doubt that Palantir is a pricey stock, trading for roughly 21 times sales -- far higher than the software industry's median price-to-sales ratio of 2.4. However, considering that Goldman Sachs expects generative AI to help increase global GDP by 7% or by $7 trillion in the next decade, a well-established AI stock could see much higher prices in the coming months.
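The valuation comparison above rests on simple arithmetic. As a sketch (the figures below are hypothetical placeholders for illustration, not Palantir's actual financials), the price-to-sales multiple is just market capitalization divided by trailing-twelve-month revenue:

```python
# Price-to-sales (P/S) multiple: market capitalization divided by
# trailing-twelve-month (TTM) revenue. All inputs here are illustrative
# placeholder numbers, not real company financials.

def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    """Return the price-to-sales multiple."""
    return market_cap / ttm_revenue

# Hypothetical company: $42B market cap on $2B of trailing revenue.
ps = price_to_sales(market_cap=42e9, ttm_revenue=2e9)
print(f"P/S multiple: {ps:.1f}x")  # P/S multiple: 21.0x
```

By this measure, a stock trading at 21 times sales would need roughly ninefold revenue growth at a constant price just to fall to the industry's median multiple of 2.4, which is what makes such multiples "pricey."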
Alphabet
Since the launch of OpenAI's ChatGPT, many analysts and investors have been worried about the growth prospects of Alphabet, which derives over half of its revenue from Google Search. While the dent that generative AI could deal to Alphabet's near-monopoly in the internet search space is now all but obvious, there are still several factors that could propel shares of this trillion-dollar company on an upward trajectory. First, advertising demand is expected to recover in a growing economy -- and that is driving up Google Search and YouTube revenues. Google Search revenues grew by 4.7% year over year in the second quarter, compared to the 1.9% year-over-year revenue growth posted in the first quarter. YouTube revenues also rose by 4% in the second quarter, a marked improvement from their 2.6% decline in Q1.
YouTube's audience is also steadily increasing: 2 billion monthly active users are watching short-form videos on the platform, and 150 million people in the U.S. are accessing YouTube on their connected televisions.
Second, Alphabet posted a 28% year-over-year jump in revenues to $8 billion and an operating profit of $395 million (operating margin of 5%) for its Google Cloud business. Currently, 70% of the generative AI unicorn companies are using Google Cloud. While the long-term growth potential of Google Cloud is strong, management has hinted at some cautiousness for the business' growth prospects in the second quarter as customers continue to optimize their cloud spending.
Third, Alphabet is well positioned to benefit from the surge in interest in all things AI. It has built an AI-optimized infrastruct...
Welcome to a world where AI can value your home - Financial Times
How Does AI Affect Kids? Psychologists Weigh In - Decrypt
Critics and skeptics of artificial intelligence regularly claim it threatens human jobs. But given its immediately disruptive potential in education, the interests of one demographic deserve equal scrutiny in the era of AI: children. Even before the internet and mobile devices, kids were already susceptible to forming bonds with toys. The lifelike interactivity of AI chatbots now represents a seismic shift.

"Children can form deep relationships with inanimate objects, like a teddy bear—now you have this tool that gives you exactly what you need—because AI is going to be amazing at figuring out what you want to hear and giving that to you," psychologist and executive coach Banu Kellner told Decrypt in an interview. Kellner is the founder of the SuperHuman Society, which engages experts from diverse backgrounds to address the challenges and opportunities posed by artificial intelligence and other emerging technologies.

Like a scene from the 2022 sci-fi horror film M3GAN, Kellner said children ascribe human characteristics to AI products like toys and games and establish bonds that might surpass their human relationships. This bond represents a significant problem, however, because the child may come to rely on AI and not learn to navigate complex human relationships. The challenge, Kellner said, is ensuring that AI products help children cultivate life skills, particularly social skills, that foster human engagement rather than replacing those human interactions altogether.

As AI develops, companies are racing to bring the technology to the masses, including in education and entertainment. Education-focused companies using artificial intelligence include Carnegie Learning, Cognii, and Kidsense. On Tuesday, kid-centric technology company Pinwheel announced the launch of the "kid-safe" PinwheelGPT, designed for children aged 7-12, which the company claims generates only age-appropriate responses. 
"We've created a fun and educational way for today's kids to get in on the exciting power and potential of ChatGPT and accessing information on the internet but with safe, age-appropriate guardrails," Pinwheel CEO and Founder Dane Witbeck said in a statement. "Not only can kids participate in the AI tech that's quickly transforming our world, but parents can be actively engaged in the conversation—by viewing and stepping in when or where it feels right—to provide guidance or clarification.”

Last month, Khan Labs launched the beta version of its Khanmigo for the Khan Academy learning platform. Khanmigo uses a chatbot to interact with students by mimicking historical figures, including U.S. President Abraham Lincoln, warlord Genghis Khan, British Prime Minister Winston Churchill, and U.S. Civil War spy and Underground Railroad conductor Harriet Tubman.

Kellner emphasized that the primary concern lies not just with AI but specifically with artificial intimacy, referring to the emerging AI products that simulate relationships—like AI friends or romantic partners—which are already available in the market and will only improve with time. In June, a former Google executive, Mo Gawdat, claimed that virtual and augmented reality would one day allow people to have virtual sexual experiences indistinguishable from reality. Gawdat said the next likely step would be sex with physical robots. “If we can convince you that this sex robot is alive or that sex experience in a virtual reality headset or an augmented reality headset is alive, it’s real, then there you go," he said.

Adding to Kellner’s concern for future generations, especially in Western countries, is the so-called epidemic of loneliness that U.S. Surgeon General Vivek Murthy declared a public health crisis. To deal with this loneliness, even if against most experts’ advice, some people have turned to AI companions and chatbots like OpenAI's ChatGPT to address mental health concerns. 
“While AI chatbots can offer instant mental health support, they cannot replace the nuanced and empathetic care provided by human therapists,” Dr. LeMeita Smith, a Dallas-based clinical therapist with Aurora Behavioral Health System, told Decrypt in an email. “Relying solely on AI-driven mental health interventions may neglect the depth of emotional support required for certain conditions.” In July, a 21-year-old English man stood trial...
A fifth of US workers have jobs with 'high exposure' to AI - USA TODAY
About one in five U.S. workers have jobs with key tasks that are more likely to be aided or replaced by AI, according to a recent report from Pew Research Center. The findings, based on an analysis of federal data, found that jobs that rely on analytical skills like critical thinking, writing, science and math tend to be "more exposed" to the emerging technology. Interestingly, workers in industries more exposed to AI are more likely to say they think it will help rather than hurt their jobs, according to a Pew survey.

"Workers who are more familiar with AI seem to be seeing more benefits than harm," said Rakesh Kochhar, a senior researcher at the nonpartisan think tank who authored the report. The report noted that it’s unclear how many jobs are at risk due to AI, although some findings suggest jobs are already being lost to the technology. AI contributed to nearly 4,000 job cuts in May, according to a report from Challenger, Gray & Christmas.

Which jobs are most at risk due to AI?

U.S. jobs likely to have high, medium and low exposure to AI include:

High exposure: budget analysts, data entry keyers, tax preparers, technical writers, web developers

Medium exposure: chief executives, veterinarians, interior designers, fundraisers, sales managers

Low exposure: barbers, child care workers, dishwashers, firefighters, pipelayers

In sum, about 19% of U.S. workers were in jobs most exposed to AI last year, while an even greater share (23%) had jobs considered least exposed. It's not clear how many jobs will be displaced by AI. A March report from Goldman Sachs found AI could substitute up to 25% of current work, with about two-thirds of jobs exposed to "some degree" of automation. But researchers note that displacements following the emergence of new technology have typically been offset by the creation of new jobs, with census data suggesting that about 60% of workers today are employed in jobs that didn't exist in 1940.

Which employees are most at risk? 
Pew found that women, Asian, college-educated and higher-paid workers are more exposed to AI. Kochhar said this is because of the types of jobs held by different demographics: men tend to hold more jobs requiring physical labor like construction, for instance. "So at the moment, they have less exposure to AI," Kochhar said. "Which is not to say AI could not lead to smarter robots that can do it all, also. That's not something we looked into."

According to the report:

Workers with a bachelor’s degree (27%) are more likely than those with only a high school diploma (12%) to hold a job with the most exposure to AI.

Women (21%) are more likely than men (17%) to have jobs with the most exposure to AI.

Black (15%) and Hispanic (13%) workers are less exposed than Asian (24%) and white (20%) workers.

Workers in the most exposed jobs last year earned $33 per hour on average, while those in the least exposed jobs earned $20 per hour.

Despite warnings from AI company executives that the technology will take away jobs, many workers – especially those with jobs considered highly exposed to AI – are optimistic about AI's impact. Thirty-two percent of information and technology workers – who work in an industry considered more exposed to AI – say the technology will help more than hurt them, compared with 11% who believe the opposite. Meanwhile, 14% of workers in hospitality, services and arts – a "less exposed" industry – think AI will help more than hurt. A greater share (17%) believe it's more likely to hurt them.

"Where AI has penetrated at the moment, workers are finding it more useful than hurtful, or businesses are applying it in ways that benefit workers as opposed to replacing workers," Kochhar said. Overall, 16% of U.S. adults said they think AI will help more than hurt, while 15% said they thought it would hurt more than help. Thirty percent say it will help and hurt equally, and 32% said they were unsure.
A New Frontier for Travel Scammers: A.I.-Generated Guidebooks - The New York Times
Aug. 5, 2023, updated 9:42 a.m. ET

In March, as she planned for an upcoming trip to France, Amy Kolsky, an experienced international traveler who lives in Bucks County, Pa., visited Amazon.com and typed in a few search terms: travel, guidebook, France. Titles from a handful of trusted brands appeared near the top of the page: Rick Steves, Fodor’s, Lonely Planet. Also among the top search results was the highly rated “France Travel Guide,” by Mike Steves, who, according to an Amazon author page, is a renowned travel writer.

“I was immediately drawn by all the amazing reviews,” said Ms. Kolsky, 53, referring to what she saw at that time: universal raves and more than 100 five-star ratings. The guide promised itineraries and recommendations from locals. Its price tag — $16.99, compared with $25.49 for Rick Steves’s book on France — also caught Ms. Kolsky’s attention. She quickly ordered a paperback copy, printed by Amazon’s on-demand service.

When it arrived, Ms. Kolsky was disappointed by its vague descriptions, repetitive text and lack of itineraries. “It seemed like the guy just went on the internet, copied a whole bunch of information from Wikipedia and just pasted it in,” she said. She returned it and left a scathing one-star review. Though she didn’t know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon. The books are the result of a swirling mix of modern tools: A.I. 
apps that can produce text and fake portraits; websites with a seemingly endless array of stock photos and graphics; self-publishing platforms — like Amazon’s Kindle Direct Publishing — with few guardrails against the use of A.I.; and the ability to solicit, purchase and post phony online reviews, which runs counter to Amazon’s policies and may soon face increased regulation from the Federal Trade Commission. The use of these tools in tandem has allowed the books to rise near the top of Amazon search results and sometimes garner Amazon endorsements such as “#1 Travel Guide on Alaska.” A recent Amazon search for the phrase “Paris Travel Guide 2023,” for example, yielded dozens of guides with that exact title. One, whose author is listed as Stuart Hartley, boasts, ungrammatically, that it is “Everything you Need to Know Before Plan a Trip to Paris.” The book itself has no further information about the author or publisher. It also has no photographs or maps, though many of its competitors have art and photography easily traceable to stock-photo sites. More than 10 other guidebooks attributed to Stuart Hartley have appeared on Amazon in recent months that rely on the same cookie-cutter design and use similar promotional language. The Times also found similar books on a much broader range of topics, including cooking, programming, gardening, business, crafts, medicine, religion and mathematics, as well as self-help books and novels, among many other categories. Amazon declined to answer a series of detailed questions about the books. In a statement provided by email, Lindsay Hamilton, a spokeswoman for the company, said that Amazon is constantly evaluating emerging technologies. “All publishers in the store must adhere to our content guidelines,” she wrote. 
“We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to these guidelines.” The Times ran 35 passages from the Mike Steves book through an artificial intelligence detector from Originality.ai. The detector works by analyzing millions of records known to be created by A.I. and millions created by humans, and learning to recognize the differences between the two, explained Jonathan Gillham, the company’s founder. The detector assigns a score of between 0 and 100, based on the percentage chance its machine-learning model believes the content was A.I.-generated. All 35 passages scored a perfect 100, meaning they were almost certainly produced by A.I. The company claims that the version of its detector used by The Times catches more than 99 percent of A.I. passages and mistakes human text for A.I. on just under 1.6 percent of tests. The Time...
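The detector's quoted accuracy figures invite a quick base-rate check. This sketch (the 50-50 prior on AI versus human text is an assumption for illustration, not a figure from the article) applies Bayes' rule to the reported detection rate and false-positive rate:

```python
# Base-rate sketch: given the detector's reported rates — over 99% of
# AI-written passages flagged, and just under 1.6% of human-written
# passages misflagged — how likely is a flagged passage to really be AI?

def flag_precision(tpr: float, fpr: float, prior_ai: float) -> float:
    """P(text is AI | detector flags it), via Bayes' rule."""
    flagged_ai = tpr * prior_ai            # true positives
    flagged_human = fpr * (1.0 - prior_ai) # false positives
    return flagged_ai / (flagged_ai + flagged_human)

# Assumed even split of AI and human text in the pool being tested.
p = flag_precision(tpr=0.99, fpr=0.016, prior_ai=0.5)
print(f"Chance a flagged passage is really AI: {p:.1%}")
```

With these rates and an even prior, the vast majority of flagged passages really are AI-generated; with a much lower prior (say, 5% AI content in the pool), precision drops substantially, which is the usual caveat for any screening tool.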
Don't quit your day job: Generative AI and the end of programming - VentureBeat
August 6, 2023 10:10 AM

Image Credit: VentureBeat, made with Midjourney

There’s a lot of angst about software developers “losing their jobs” to AI, being replaced by a more intelligent version of ChatGPT, GitHub’s Copilot, Google’s foundation model Codey, or something similar. AI startup founder Matt Welsh has been talking and writing about the end of programming. He’s asking whether large language models (LLMs) eliminate programming as we know it, and he’s excited that the answer is “yes”: eventually, if not in the immediate future. But what does this mean in practice? What does this mean for people who earn their living from writing software?

The value in new programming skills

Some companies will certainly value AI as a tool for replacing human effort rather than for augmenting human capabilities. Programmers who work for those companies risk losing their jobs to AI. If you work for one of those organizations, I’m sorry for you, but it’s really an opportunity. Despite the well-publicized layoffs, the job market for programmers is great, it’s likely to remain great, and you’re probably better off finding an employer who doesn’t see you as an expense to be minimized. It’s time to learn some new skills and find an employer who really values you. But the number of programmers who are “replaced by AI” will be small. Here’s why, and here’s how the use of AI will change the discipline as a whole.

I did a very non-scientific study of the amount of time programmers actually spend writing code. OK, I just typed “How much of a software developer’s time is spent coding” into the search bar and looked at the top few articles, which gave percentages ranging from 10% to 40%. 
My own sense, from talking to and observing many people over the years, falls into the lower end of that range: 15% to 20%. Time for “the rest of the job” ChatGPT won’t make the 20% of time programmers spend writing code disappear completely. You still have to write prompts, and we’re all in the process of learning that if you want ChatGPT to do a good job, the prompts have to be very detailed. How much time and effort does that save? I’ve seen estimates as high as 80%, but I don’t believe them; I think 25% to 50% is more reasonable. If 20% of your time is spent coding, and AI-based code generation makes you 50% more efficient, then you’re really only getting about 10% of your time back. You can use it to produce more code — I’ve yet to see a programmer who was underworked, or who wasn’t up against an impossible delivery date. Or you can spend more time on the “rest of the job,” the 80% of your time that wasn’t spent writing code. Some of that time is spent in pointless meetings, but much of “the rest of the job” is understanding the user’s needs, designing, testing, debugging, reviewing code, finding out what the user really needs (that they didn’t tell you the first time), refining the design, building an effective user interface, auditing for security and so on. It’s a lengthy list. Programmers needed: AI lacks design skills That “rest of the job” (particularly the “user’s needs” part) is something our industry has never been particularly good at. Design — of the software itself, the user interfaces and the data representation — is certainly not going away and isn’t something the current generation of AI is very good at. We’ve come a long way, but I don’t know anyone who hasn’t had to rescue code that was best described as a “seething mass of bits.” Testing and debugging — well, if you’ve played with ChatGPT much, you know that testing and debugging won’t disappear. AIs generate incorrect code, and that’s not going to end soon. 
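The time-savings arithmetic above can be written out explicitly. This is a sketch of the author's own estimate (20% of time spent coding, a 50% reduction in that coding time), not a measurement:

```python
def time_reclaimed(coding_share: float, coding_time_saved: float) -> float:
    """Fraction of total working time freed up when AI speeds up
    only the coding portion of the job."""
    return coding_share * coding_time_saved

# The author's estimate: 20% of time coding, half of that saved by AI.
freed = time_reclaimed(0.20, 0.50)
print(f"{freed:.0%} of total time reclaimed")
```

Even at the optimistic 80% savings figure the author doubts, the reclaimed share would only be 0.20 × 0.80 = 16% of total time, which is why the "rest of the job" dominates.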
Security auditing will only become more important, not less; it’s very hard for a programmer to understand the security implications of code they didn’t write. Spending more time on these things — and leaving the details of pushing out lines of code to an AI — will surely improve the quality of the products we deliver. Prompting a different form of programming Now, let’s take a really long...
Massachusetts launches probe into AI in securities industry - Cointelegraph
Massachusetts securities regulators seek to ensure that AI applications in the securities industry will not harm the interests of their users. Securities regulators in the U.S. state of Massachusetts have launched an investigation into the use of artificial intelligence (AI) in the securities industry after becoming increasingly concerned about the implications of the new technology. On Aug. 3, Massachusetts Secretary of the Commonwealth William Galvin officially announced an investigation into how firms use AI in their interactions with Massachusetts investors. On Aug. 2, the commonwealth’s securities division sent letters of inquiry to a number of registered and unregistered firms known to be using or developing AI for business purposes in the securities industry. The authority sought data on the manner in which companies may be using AI in their activities and operations. The firms included in the investigatory sweep have been given until Aug. 16, 2023, to respond to the regulator’s inquiries. “Of particular interest to Galvin are the supervisory procedures that firms have in place regarding artificial intelligence, and whether those systems ensure that the AI will not put the interests of the firm ahead of the interests of their clients,” the regulator said. For those firms that have already deployed AI, the securities division will also be assessing the disclosure policies. According to Galvin, U.S. securities regulators have a crucial role to play when it comes to AI and its possible implications for investor protection. He added: “If deployed without the guardrails necessary to ensure proper disclosure and consideration of conflicts, I am concerned that this technology could result in harm to investors.” Additionally, Massachusetts securities regulators are also questioning certain companies about any marketing materials provided to investors that may have been created using AI.
The Massachusetts securities division did not immediately respond to Cointelegraph’s request for comment. AI has increasingly become a global regulatory concern in recent years due to the rapid growth of the technology. In the second fiscal quarter of 2023, mentions of AI in the earnings calls of major tech companies skyrocketed. For example, Intel mentioned AI nearly 300% more in its Q2 2023 call than in its first-quarter call. Related: SEC’s Gary Gensler believes AI can strengthen its enforcement regime But some major regulators have been alarmed by the potential risks of AI for several years. For example, the Financial Stability Board (FSB) raised concerns about AI and machine learning in financial services back in 2017. The FSB specifically argued that AI and machine learning services were increasingly being offered by a small handful of large technology firms. “There is the potential for natural monopolies or oligopolies,” the FSB wrote, adding that competition issues could translate into financial stability risks. “If one of them were to face major disruption or insolvency, there would be major repercussions in the world of finance,” the regulators argued at the time. Magazine: AI Eye: AI’s trained on AI content go MAD, is Threads a loss leader for AI data?
Air Force pulls off first AI flight in pilotless plane - Stars and Stripes
An XQ-58A Valkyrie unmanned airplane takes off from the U.S. Army Yuma Proving Ground in Arizona on Dec. 9, 2020. The Air Force recently announced that the Valkyrie drone was flown by artificial intelligence for the first time. (Joshua King/U.S. Air Force) Air Force researchers are touting the achievement of the first unmanned flight using artificial intelligence algorithms after a successful three-hour sortie by an XQ-58A Valkyrie. The flight took place at Florida’s Eglin Air Force Base on July 25, according to a statement issued Thursday by the Air Force Research Lab, which developed the unmanned plane in partnership with Kratos. The AI algorithms used in the flight were created by the lab and honed through millions of hours of simulations, the statement said. An F-15E Strike Eagle from the 40th Flight Test Squadron at Eglin Air Force Base, Fla., accompanies an XQ-58A Valkyrie flown by artificial intelligence, in an undated photo. The Air Force recently announced that the Valkyrie drone was flown by AI for the first time. (U.S. Air Force) “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” said Brig. Gen. Scott Cain, the research lab commander. “We need the coordinated efforts of our government, academia and industry partners to keep pace.” The Valkyrie is a reusable unmanned plane that was designed to be far less costly to operate than traditional counterparts, whether they have a pilot or not, according to the Air Force Research Lab website. The July 25 flight put a capstone on a multiyear partnership that began with the Skyborg Vanguard program, the statement said. The Valkyrie used in the flight arrived at Eglin last year. It is rocket-launched off a rail system and controlled from a ground station or airborne fighter. An onboard computer system can determine the best flight path and throttle settings to comply with commands, the Air Force said.
Can search engines detect AI content? - Search Engine Land
The AI tool explosion in the past year has dramatically impacted digital marketers, especially those in SEO. Given content creation’s time-consuming and costly nature, marketers have turned to AI for assistance, yielding mixed results. Ethical issues notwithstanding, one question that repeatedly surfaces is, “Can search engines detect my AI content?” The question is deemed particularly important because if the answer is “no,” it invalidates many other questions about whether and how AI should be used. A long history of machine-generated content While the frequency of machine-generated or -assisted content creation is unprecedented, it’s not entirely new and is not always negative. Breaking stories first is imperative for news websites, and they have long utilized data from various sources, such as stock markets and seismometers, to speed up content creation. For instance, it’s factually correct to publish a robot article that says: “A [magnitude] earthquake was detected in [location, city] at [time]/[date] this morning, the first earthquake since [date of last event]. More news to follow.” Updates like this are also helpful to end readers, who need to get this information as quickly as possible. At the other end of the spectrum, we’ve seen many “blackhat” implementations of machine-generated content. For many years, Google has condemned everything from using Markov chains to generate text to low-effort content spinning, under the banner of “automatically generated pages that provide no added value.” What is particularly interesting, and mostly a point of confusion or a gray area for some, is the meaning of “no added value.” How can LLMs add value? The popularity of AI content soared due to the attention garnered by GPTx large language models (LLMs) and the fine-tuned AI chatbot, ChatGPT, which improved conversational interaction.
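The data-driven "robot article" described above is essentially template filling. A minimal sketch, with invented field values standing in for a real earthquake data feed:

```python
# Hypothetical reading from a seismometer data feed; values are invented.
quake = {
    "magnitude": 4.6,
    "location": "Parkfield, California",
    "time": "06:42",
    "date": "Aug. 9",
    "last_event": "May 17",
}

# Fill the fixed editorial template with verified data points.
article = (
    f"A magnitude {quake['magnitude']} earthquake was detected in "
    f"{quake['location']} at {quake['time']} on {quake['date']} this morning, "
    f"the first earthquake since {quake['last_event']}. More news to follow."
)
print(article)
```

Because every generated sentence is a direct restatement of verified data, this kind of automation can add value without any risk of fabrication.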
Without delving into technical details, there are a couple of important points to consider about these tools: The generated text is based on a probability distribution For instance, if you write, “Being an SEO is fun because…,” the LLM is looking at all of the tokens and trying to calculate the next most likely word based on its training set. At a stretch, you can think of it as a really advanced version of your phone’s predictive text. ChatGPT is a type of generative artificial intelligence This means that the output is not predictable. There is a randomized element, and it may respond differently to the same prompt. When you appreciate these two points, it becomes clear that tools like ChatGPT do not have any traditional knowledge or “know” anything. This shortcoming is the basis for all the errors, or “hallucinations,” as they are called. Numerous documented outputs demonstrate how this approach can generate incorrect results and cause ChatGPT to contradict itself repeatedly. This raises serious doubts about the consistency of “adding value” with AI-written text, given the possibility of frequent hallucinations. The root cause lies in how LLMs generate text, which won’t be easily resolved without a new approach. This is a vital consideration, especially for Your Money or Your Life (YMYL) topics, which can materially harm people’s finances or health if inaccurate. Major publications like Men’s Health and CNET were caught publishing factually incorrect AI-generated information this year, highlighting the concern. Publishers are not alone in this issue, as Google has had difficulty reining in its Search Generative Experience (SGE) when it comes to YMYL content. Despite Google stating it would be careful with generated answers, going as far as to specifically give the example that it “won’t show an answer to a question about giving a child Tylenol because it is in the medical space,” the SGE would demonstrably do this when simply asked the question.
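The two points above can be illustrated with a toy next-token sampler. This is a sketch, not how any real LLM is implemented: the vocabulary and probabilities here are invented, whereas a real model computes a distribution over tens of thousands of tokens with a neural network:

```python
import random

# Invented probability distribution over candidate next tokens for the
# prompt "Being an SEO is fun because..." -- illustration only.
next_token_probs = {
    "every": 0.35,
    "you": 0.30,
    "the": 0.20,
    "rankings": 0.15,
}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Pick one token, weighted by the model's assigned probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# The randomized element: separate runs (different seeds here) can yield
# different continuations of the same prompt, which is why the output is
# not predictable.
for seed in (1, 2, 3):
    print(sample_next_token(next_token_probs, random.Random(seed)))
```

Nothing in this loop consults facts about the world; the sampler only weighs likelihoods, which is the mechanical root of hallucination.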
Google’s SGE and MUM It’s clear Google believes there is a place for machine-generated content to answer users’ queries. Google has hinted at this since May 2021, when it announced MUM, its Multitask Unified Model. One challenge MUM set out to tackle was based on data showing that people issue eight queries on average for complex tasks. In an initial query, the searcher will learn some additional information, prompting related searches and surfac...
AI Won't Replace Humans — But Humans With AI Will Replace Humans Without AI - HBR.org Daily
The first step business leaders must take is to experiment, create sandboxes, run internal bootcamps, and develop AI use cases not just for technology workers, but for all employees. August 04, 2023 Karim Lakhani is a professor at Harvard Business School who specializes in workplace technology and particularly AI. He’s done pioneering work in identifying how digital transformation has remade the world of business, and he’s the co-author of the 2020 book Competing in the Age of AI. Customers will expect AI-enhanced experiences with companies, he says, so business leaders must experiment, create sandboxes, run internal bootcamps, and develop AI use cases not just for technology workers, but for all employees. Change and change management are skills that are no longer optional for modern organizations. Just as the internet drastically lowered the cost of information transmission, AI will lower the cost of cognition. That’s according to Harvard Business School professor Karim Lakhani, who has been studying AI and machine learning in the workplace for years. As the public comes to expect companies that deliver seamless, AI-enhanced experiences and transactions, leaders need to embrace the technology, learn to harness its potential, and develop use cases for their businesses. “The places where you can apply it?” he says. “Well, where do you apply thinking?”
Artificial intelligence flies XQ-58A Valkyrie drone - Defense News
An XQ-58A Valkyrie drone launches at Yuma Proving Ground, Ariz., in 2020 to demonstrate the transfer of data among aircraft. (Staff Sgt. Joshua King/U.S. Air Force) WASHINGTON — Artificial intelligence software successfully flew an XQ-58A Valkyrie drone, the Air Force Research Laboratory announced Aug. 2. The U.S. lab led the three-hour sortie on July 25 with test units at the Eglin Test and Training Complex in Florida. The flight followed two years of work and a partnership with Skyborg Vanguard, a team made up of personnel from the lab and the Air Force Life Cycle Management Center with the intent of creating unmanned fighter aircraft. “This sortie officially enables the ability to develop [artificial intelligence and machine learning] agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to the CCA program,” said Col. Tucker Hamilton, the chief of AI test and operations with the Air Force. The CCA program, or collaborative combat aircraft, was designed to create combat drones that can operate with piloted aircraft. The lab’s Autonomous Air Combat Operations team created algorithms for the flight that took millions of hours to mature in simulations, during sorties with the X-62 VISTA experimental aircraft, while working with the XQ-58A, and during ground test operations, according to the announcement. Previous flights of the XQ-58A Valkyrie have supported the Air Force’s effort into loyal wingmen research. Kratos Defense and Security Solutions produces the drone. The Air Force Research Lab is the service’s primary scientific research and development center responsible for the discovery, development and integration of cost-effective warfighting technologies for the country’s air, space and cyberspace forces. “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” Brig. Gen. 
Scott Cain, the lab’s commander, said in the announcement. “AI, Autonomous Operations, and Human-Machine Teaming continue to evolve at an unprecedented pace and we need the coordinated efforts of our government, academia, and industry partners to keep pace.” Georgina DiNardo is an editorial fellow for Military Times and Defense News and a recent graduate of American University, specializing in journalism, psychology, and photography in Washington, D.C.
When A.I. Lies About You, There’s Little Recourse - The New York Times
People have little protection or recourse when the technology creates and spreads falsehoods about them. Marietje Schaake, a former member of the European Parliament and a technology expert, was falsely labeled a terrorist last year by BlenderBot 3, an A.I. chatbot developed by Meta. Credit: Ilvy Njiokiktjien for The New York Times Aug. 3, 2023 Updated 11:27 a.m. ET Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments. Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true. While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background. “I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” Ms. Schaake said in an interview. “First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations.” Artificial intelligence’s struggles with accuracy are now well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions that disrupted a court case, a pseudo-historical image of a 20-foot-tall monster standing next to two humans, even sham scientific papers. In its first public demonstration, Google’s Bard chatbot flubbed a question about the James Webb Space Telescope.
The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist. One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment claim that he said had never been made, which supposedly took place on a trip that he had never taken for a school where he was not employed, citing a nonexistent newspaper article as evidence. High school students in New York created a deepfake, or manipulated, video of a local principal that portrayed him in a racist, profanity-laced rant. A.I. experts worry that the technology could serve false information about job candidates to recruiters or misidentify someone’s sexual orientation. Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist. She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world, such as Iran. Later updates to BlenderBot seemed to fix the issue for Ms. Schaake. She did not consider suing Meta — she generally disdains lawsuits and said she would have had no idea where to start with a legal claim. Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake. Image: The BlenderBot 3 exchange that labeled Ms. Schaake a terrorist. Meta said the A.I. model had combined two unrelated pieces of information to create an inaccurate sentence about her. Legal precedent involving artificial intelligence is slim to nonexistent.
The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court. An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on th...
Worldcoin: Should you let Sam Altman scan your eyeballs for WLD? - Cointelegraph
OpenAI founder Sam Altman launched Worldcoin in July, a cryptocurrency token offered to people willing to share their biometric data by scanning their eyeballs with an Orb, seeking to link real-world identity with decentralized blockchain identity. Many aspects of the project embody the dystopian nightmare that the cypherpunks created Bitcoin (BTC) and other cryptocurrencies in order to avoid. The tokenomics for Worldcoin’s WLD token are such that only 1% of the total value is floating right now. That kind of overhang is unprecedented, even in the wild world of crypto distributions. The token won’t long hold its value unless the entire world puts their eyeballs into the Orb. Unlike Bitcoin or Ethereum — which grew organically through user adoption and utility — the project is all or nothing. Either it is the only solution for on-chain identity or it will be worthless. Altman’s recent testimony advocating for a regulatory moat for artificial intelligence (AI) to protect OpenAI’s dominance is suggestive of his business ethics. The change in business entity form for his organization from nonprofit to for-profit is also suggestive of the stickiness of his public promises. Related: Worldcoin is making reality look a lot like Black Mirror Claims that privacy and biometrics are protected by Worldcoin are unsubstantiated assertions, and they cannot be trusted until Worldcoin — along with its mysterious Orb — become free and open source. This is not Altman’s approach to development and would probably threaten the integrity of Orb identity verification anyway. Sybil attacks and spam are a problem in crypto, and they can lead to market manipulation. Altman’s AI revolution, as impressive and useful as it is, will make that worse. But this is not the answer. Crypto privacy should overcome the urge to ape into the next token. 
HERE WE GO FOLKS: Hundreds of youth voluntarily line-up to have their eyeballs scanned with a Worldcoin orb to get their new digital ID with “free money” Worldcoins in their new digital wallet. This is exactly how #CBDC will be rolled out globally… pic.twitter.com/whWgxdg7lm — Patrick Henningsen (@21WIRE) July 26, 2023 Worldcoin is partly a response to Altman’s vision for AI. He anticipates his AI project will cause massive disruption and allow AI tools to pose as humans, so his response is to scan the eyeballs of everyone in the world. Altman asks us to trust him with that biometric information, and he will give us a few WLD tokens — which currently don’t actually do anything, so it’s the biometric equivalent of buying our identity registration with an on-chain version of the beads used to buy Manhattan. Imagine the ego needed to assert that you will change the world with AI but warn your AI revolution will wreak havoc — yet, don’t worry, the whole world just needs to also scan their eyeballs into his secret Orb to fix it. This depth of hubris would make Soviet-era commissars blush. Related: It’s time for the SEC to settle with Coinbase and Ripple We are told not to worry because Worldcoin will decentralize. We were also told OpenAI was a nonprofit research organization, but we know how that promise changed when the lure of Microsoft’s $10 billion came calling. And even if Worldcoin decentralized tomorrow, remember that Worldcoin is built on the Optimism layer 2, and that isn’t decentralized either. In fairness, some of the top minds in zero-knowledge proof cryptography are working on the project. I don’t doubt their commitment or their tech, and we can appreciate the search for identification solutions within cryptocurrency to build on-chain identity while preserving privacy. That doesn’t mean buying into a solution from Altman — for a problem he’s creating with OpenAI — that works by giving his centralized project control over my identity. 
Ethereum founder Vitalik Buterin recently opined on Worldcoin, and he was mostly right about the trade-offs — from advantages to worst-case scenarios. But the one area he missed was the economics of wealth. Like some other crypto developers, he buys into an overestimated bias of wealth effects on the economy. That’s an understandable bias for an Ethereum developer who is already wealthy. However, for the rest of us, the market’s regular pricing system is more trustworthy than Altman...