AI images show what famous celebrity exes' kids would look like - Insider
Brad Pitt and Jennifer Aniston split in 2005, but what if they hadn't? Brenda Chase Online USA Inc.; Courtesy of Jeremy Pomeroy AI images of celebrity children who don't actually exist have gone viral. Jeremy Pomeroy created images that show what famous former couples' families would look like. Brad Pitt and Jennifer Aniston's hypothetical kids are included, as are Kristen Stewart and Robert Pattinson's. An artist is using artificial intelligence and Photoshop to show what the families of famous couples (both real and imagined) might have looked like — and people can't get enough of them. Jeremy Pomeroy, the founder and creative director of marketing agency Total Marketing Australia, decided to give fans a glimpse into alternate universes where celeb power couples like Brad Pitt and Jennifer Aniston stayed together and had kids. The series of photos, which he's dubbed "Celebrity What If?", has gone viral since Pomeroy shared some of his images on Instagram. Check out Pomeroy's work below and learn more about what went into his project. Pomeroy told Insider in an email interview that he used "a combination of AI image generators and Photoshop" to create his work. An AI-generated image showing what the children of Jennifer Aniston and Brad Pitt might look like. Courtesy of Jeremy Pomeroy According to Pomeroy, each artwork took anywhere from several hours to several days to complete. He wanted to use technology "in a creative and impactful way" to provoke conversation. An AI-generated image showing what the children of Zac Efron and Vanessa Hudgens might look like. Courtesy of Jeremy Pomeroy Pomeroy chose an array of couples to feature in his series, which includes some exes with massive fandoms behind them — like Jelena (aka Justin Bieber and Selena Gomez), who split for good in 2018. 
An AI-generated image showing what the children of Justin Bieber and Selena Gomez might look like. Courtesy of Jeremy Pomeroy "By imagining what the families of celebrity couples would look like if they had stayed together, I sought to engage viewers in thought-provoking narratives and spark conversations about relationships, identity, and the dynamics of fame," Pomeroy told Insider via email. An AI-generated image showing what the children of Nicki Minaj and Drake might look like. Courtesy of Jeremy Pomeroy Pomeroy said he picked the couples based on their significance in pop culture. Those included real-life relationships that ended decades ago, like Britney Spears and Justin Timberlake's. An AI-generated image showing what the children of Britney Spears and Justin Timberlake might look like. Courtesy of Jeremy Pomeroy Rachel McAdams and Ryan Gosling, another wildly popular former couple who split in 2007, are also featured. An AI-generated image showing what the children of Ryan Gosling and Rachel McAdams might look like. Courtesy of Jeremy Pomeroy Some of the AI portraits are also stylized as if the subjects are posing in a bygone era, like this one of Winona Ryder and Johnny Depp's hypothetical family, done in a Victorian style. An AI-generated image showing what the children of Johnny Depp and Winona Ryder might look like. Courtesy of Jeremy Pomeroy Pomeroy told Insider this was "a creative choice to enhance the storytelling and evoke a sense of nostalgia or historical context." "I wanted to create a visual juxtaposition and explore alternate realities," he said. Taylor Swift and Harry Styles, who briefly dated in late 2012, appear to be living on a prairie with their "What If?" family. An AI-generated image showing what the children of Taylor Swift and Harry Styles might look like. Courtesy of Jeremy Pomeroy Pomeroy's background is in graphic design, but he's painted and illustrated his whole life. 
An AI-generated image showing what the children of Ariana Grande and Pete Davidson might look like. Courtesy of Jeremy Pomeroy In addition to celebrities who publicly dated and split, Pomeroy decided to feature some other famous pairs who had alleged relationships, like Marilyn Monroe and John F. Kennedy. An AI-generated image showing what the children of John F. Kennedy and M...
Plagiarism Engine: Google's Content-Swiping AI Could Break the Internet - Tom's Hardware
(Image credit: Shutterstock) Search has always been the Internet’s most important utility. Before Google became dominant, there were many contenders for the search throne, from AltaVista to Lycos, Excite, Zap, Yahoo (mainly as a directory) and even Ask Jeeves. The idea behind the World Wide Web is that there’s power in having a nearly infinite number of voices. But with millions of publications and billions of web pages, it would be impossible to find all the information you want without search. Google succeeded because it offered the best quality results, loaded quickly and had less cruft on the page than any of its competitors. Now, having taken over 91 percent of the search market, the company is testing a major change to its interface that replaces the chorus of Internet voices with its own robotic lounge singer. Instead of highlighting links to content from expert humans, the “Search Generative Experience” (SGE) uses an AI plagiarism engine that grabs facts and snippets of text from a variety of sites, cobbles them together (often word-for-word) and passes off the work as its creation. If Google makes SGE the default mode for search, the company will seriously damage, if not destroy, the open web while providing a horrible user experience. A couple of weeks ago, Google made SGE available to the public in a limited beta (you can sign up here). If you are in the beta program like I am, you will see what the company seems to have planned for the near future: a search results page where answers and advice from Google take up the entire first screen, and you have to scroll way below the fold to see the first organic search result. For example, when I searched “best bicycle,” Google’s SGE answer, combined with its shopping links and other cruft, took up the first 1,360 vertical pixels of the display before I could see the first actual search result. 
(Image credit: Tom's Hardware) For its part, Google says that it’s just “experimenting,” and may make some changes before rolling SGE out to everyone as a default experience. The company says that it wants to continue driving traffic offsite. “We’re putting websites front and center in SGE, designing the experience to highlight and drive attention to content from across the web,” a Google spokesperson told me. “SGE is starting as an experiment in Search Labs, and getting feedback from people is helping us improve the experience and understand how generative AI can be helpful in information journeys. The experiences that ultimately come to Search will likely look different from the experiments you see in Search Labs. As we experiment with new LLM-powered capabilities in Search, we'll continue to prioritize approaches that will drive valuable traffic to a wide range of creators." By “putting websites front-and-center,” Google is referring to the block of three related-link thumbnails that sometimes (but not always) appear to the right of its SGE answer. These are a fig leaf to publishers, but they’re not always the best resources (they don’t match the top organic results) and few people are going to click them, having gotten their “answer” in the SGE text. (Image credit: Tom's Hardware) For example, when I searched for “Best CPU,” the related links were from the sites Maketecheasier.com, Nanoreview and MacPaw. None of these sites is even on the first page of organic results for “Best CPU” and for good reason. They aren’t leading authorities in the field and the linked articles don’t even provide lists of the best CPUs. The MacPaw article is about how to choose the best processor for your MacBook, a topic that does not match the intent of someone searching for “best CPU,” as those folks are almost certainly looking for a desktop PC processor. 
A Plagiarism Stew

Even worse, the answers in Google’s SGE boxes are frequently plagiarized, often word-for-word, from the related links. Depending on what you search for, you may find a paragraph taken from just one source or get a whole bunch of sentences and factoids from different articles mashed together into a plagiarism stew. When I searched “which is faster the Ryzen 7 7800X3D or the Core i9-13900K,” the Google SGE grabbed an exact phrase from our Tom’s Hardware article comparing the two CPUs, writing “The Ryzen 7 7800...
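The word-for-word reuse described above can be checked mechanically. Below is a minimal sketch of that idea: counting how many of an answer's word n-grams appear verbatim in a candidate source. This is a generic illustration, not anything Google or Tom's Hardware actually uses, and the 8-word window is an arbitrary choice.

```python
from typing import Set

def ngrams(text: str, n: int = 8) -> Set[str]:
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(answer: str, source: str, n: int = 8) -> float:
    """Fraction of the answer's n-grams that appear verbatim in the source.

    A high score suggests word-for-word copying rather than paraphrase;
    a score near zero is consistent with original or heavily reworded text.
    """
    answer_grams = ngrams(answer, n)
    if not answer_grams:
        return 0.0
    return len(answer_grams & ngrams(source, n)) / len(answer_grams)
```

An 8-word run shared verbatim between an answer and a source is already strong evidence of copying; real plagiarism detectors add normalization (punctuation, stemming) and compare against many sources at once.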
From Thought to Text: AI Converts Silent Speech into Written Words - Neuroscience News
Summary: A novel artificial intelligence system, the semantic decoder, can translate brain activity into continuous text. The system could revolutionize communication for people unable to speak due to conditions like stroke. This non-invasive approach uses fMRI scanner data, turning thoughts into text without requiring any surgical implants. While not perfect, this AI system successfully captures the essence of a person’s thoughts half of the time. Key Facts: The semantic decoder AI was developed by researchers at The University of Texas at Austin. It works based on a transformer model similar to the ones that power OpenAI’s ChatGPT and Google’s Bard. The system has potential for use with more portable brain-imaging systems, like functional near-infrared spectroscopy (fNIRS). Source: UT Austin A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again. The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. Credit: Neuroscience News The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard. Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. 
Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone. “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.” The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words. For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’” Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable. “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. 
“We want to make sure people only use these types of technologies when they want to and that it helps them.” In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos whi...
Cyberpunk 2077 Phantom Liberty 'totally rebuilds' AI, skill trees, loot and more VGC - Video Games Chronicle
Cyberpunk 2077’s upcoming Phantom Liberty DLC will entirely overhaul the way the game is played, VGC has been told. In an interview with creative director Pawel Sasko and quest designer Despoina Anetaki, we were told that “all the core main systems” of the game have been “redone or updated in a major way”. “The biggest ones are the perks and skill trees, which have been rebuilt completely,” Sasko told us. “We’ve [also] added vehicle combat which enables new car chases. “We’ve also greatly expanded AI and completely redone the police system, which is rebuilt from the ground up and now has multiple levels with multiple archetypes of enemies who will chase you – it’s also different in Dog Town compared to Night City. “We’ve also redone the loop and whole progression of the game – the difficulty curve is different, the tiers and drops of loot are different, the archetypes of enemies have been redone for more variety. “Those are the core things that we’ve changed, and if you look into it there are very few systems that we didn’t change or update. Even the UI and UX have been greatly updated.” Sasko also said that the development team is trying to implement some of these overhauls to the base game too, so that anyone who doesn’t buy the Phantom Liberty expansion will still be able to benefit to some extent. “[Original game owners] will get some of the new features… we’re still discussing how that’s going to work, and when it’s going to be, and how to solve it,” he explained. “Our goal is to provide them to players of the base game as well. The core systems should be there – that’s our intention.” The Cyberpunk 2077: Phantom Liberty release date is September 26, CD Projekt has confirmed.
3 AI Stocks to Buy Now and Hold Forever - The Motley Fool
Market researchers at PwC estimate that the greatest economic gains from artificial intelligence (AI) will take place in China and North America, and this could equate to over $10 trillion in economic value. PwC not only sees AI enhancing products but also making them more affordable, which could stimulate consumer demand.
Many companies across different industries will be beneficiaries of this technology, but a team of Motley Fool contributors sees a particularly bright future for JD.com (JD -1.77%), Walmart (WMT 0.60%), and Amazon (AMZN -0.66%). Let's see why they believe now is the right time to buy these top stocks. AI is boosting this e-commerce giant's profits John Ballard (JD.com): JD is one of the leading e-commerce platforms in China with over $150 billion in trailing-12-month revenue. However, the stock is trading down 66% from its previous highs. The weak economy in China pressured revenue growth in 2022, but considering the improvements JD is making to grow margins using AI, the stock could be a bargain.
AI helps JD anticipate demand and optimize its inventory, which has reduced inventory turnover by 30 days. So it's no surprise to see profits improving. While revenue grew just 1.4% year over year in the first quarter, adjusted net income nearly doubled to $1.1 billion.
During the first-quarter earnings call, CEO Lei Xu suggested further profit growth awaits, while also pointing to the company's advantages that will allow it to deliver the returns investors expect. "On top of our continuous efforts to optimize costs and efficiency, we are committed to providing users with best-in-class product offerings, prices and shopping experiences, addressing user demand on all fronts, including superior selection, speed, quality, and value," he said.
The stock's low price-to-sales ratio of 0.38 appears to significantly undervalue this leading e-commerce platform. AI helps this retail heavyweight manage inventory Jeremy Bowman (Walmart): Walmart probably isn't the first company that comes to mind when you think of an AI stock. The retail giant has a stodgy reputation, but it has evolved with the times, embracing omnichannel retail and harnessing technology to drive its advertising business and third-party marketplace.
When it comes to AI, Walmart also has some big ideas. The company has more data on shopper habits than practically any other company, giving it an advantage with artificial intelligence -- and it gets even better as the size of the dataset increases. The company uses machine-learning algorithms to reduce out-of-stock items, and its Sam's Club warehouses use autonomous floor scrubbers with attached inventory scanners that transmit current inventory levels. The scanners are capable of seeing items that are hidden toward the back of the shelves. Walmart has also applied AI to its shopping app, so it can recognize when a customer last ordered a product and whether it's still appropriate. If it's something like diapers, for example, the customer may need a new size. Walmart also has a research arm, called Intelligent Retail Lab, that's developed AI-enabled cameras, interactive displays, and a large data center. The ability to invest in technology like artificial intelligence also plays into Walmart's competitive advantages as it has the economies of scale to leverage such investments and differentiate itself further from smaller retailers.
As Walmart continues to diversify into revenue streams like e-commerce and advertising, expect to see more such investments in artificial intelligence. Though these moves will mostly be happening behind the scenes, Walmart has the capital and scale to leverage the new technology in a way that few other retailers can. Amazon sees AI opportunities in all of its businesses Jennifer Saibil (Amazon): Amazon may not have started the AI craze, but it has invested in its own AI functions for decades. It uses AI to recommend products to customers, to get products to customers faster, and more.
In his 2022 shareholder letter, CEO Andy Jassy announced that Amazon was working on its own large language models for Amazon Web Services (AWS). Large language models power generative AI like ChatGPT, and Amazon is investing in AI offerings for AWS customers, such as code generation and other services.
It also uses AI to power Alexa...
Can a chatbot preach a good sermon? Hundreds attend church service generated by ChatGPT to find out - The Associated Press
FUERTH, Germany (AP) — The artificial intelligence chatbot asked the believers in the fully packed St. Paul’s church in the Bavarian town of Fuerth to rise from the pews and praise the Lord. The ChatGPT chatbot, personified by an avatar of a bearded Black man on a huge screen above the altar, then began preaching to the more than 300 people who had shown up on Friday morning for an experimental Lutheran church service almost entirely generated by AI. “Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany,” the avatar said with an expressionless face and monotonous voice. The 40-minute service — including the sermon, prayers and music — was created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna. “I conceived this service — but actually I rather accompanied it, because I would say about 98% comes from the machine,” the 29-year-old scholar told The Associated Press. The AI church service was one of hundreds of events at the convention of Protestants in the Bavarian towns of Nuremberg and the neighboring Fuerth, and it drew such immense interest that people formed a long queue outside the 19th-century, neo-Gothic building an hour before it began. The convention itself — Deutscher Evangelischer Kirchentag in German — takes place every two years in the summer at a different place in Germany and draws tens of thousands of believers to pray, sing and discuss their faith. They also talk about current world affairs and look for solutions to key issues, which this year included global warming, the war in Ukraine — and artificial intelligence. This year’s gathering is taking place from Wednesday to Sunday under the motto “Now is the time.” That slogan was one of the sentences Simmerlein fed ChatGPT when he asked the chatbot to develop the sermon. 
“I told the artificial intelligence ‘We are at the church congress, you are a preacher … what would a church service look like?’” Simmerlein said. He also asked for psalms to be included, as well as prayers and a blessing at the end. “You end up with a pretty solid church service,” Simmerlein said, sounding almost surprised by the success of his experiment. Indeed, the believers in the church listened attentively as the artificial intelligence preached about leaving the past behind, focusing on the challenges of the present, overcoming fear of death, and never losing trust in Jesus Christ. The entire service was “led” by four different avatars on the screen, two young women, and two young men. At times, the AI-generated avatar inadvertently drew laughter as when it used platitudes and told the churchgoers with a deadpan expression that in order “to keep our faith, we must pray and go to church regularly.” Some people enthusiastically videotaped the event with their cell phones, while others looked on more critically and refused to speak along loudly during The Lord’s Prayer. Heiderose Schmidt, a 54-year-old who works in IT, said she was excited and curious when the service started but found it increasingly off-putting as it went along. “There was no heart and no soul,” she said. “The avatars showed no emotions at all, had no body language and were talking so fast and monotonously that it was very hard for me to concentrate on what they said.” “But maybe it is different for the younger generation who grew up with all of this,” Schmidt added. Marc Jansen, a 31-year-old Lutheran pastor from Troisdorf near the western German city of Cologne, brought a group of teenagers from his congregation to St. Paul. He was more impressed by the experiment. “I had actually imagined it to be worse. But I was positively surprised how well it worked. Also the language of the AI worked well, even though it was still a bit bumpy at times,” Jansen said. 
What the young pastor missed, however, was any kind of emotion or spirituality, which he says is essential when he writes his own sermons. Anna Puzio, 28, a researcher on the ethics of technology from the University of Twente in The Netherlands, also attended the service. She said she sees a lot of opportunities in the use of AI in religion — such as making religious services more easi...
Opinion Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times
Guest Essay. Credit: Shehzil Malik. By Daron Acemoglu and Simon Johnson. Mr. Acemoglu and Mr. Johnson are the authors of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.” Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself. In just a few months, Microsoft broke speed records in establishing ChatGPT, a form of generative artificial intelligence that it plans to invest $10 billion into, as a household name. And last month, Sundar Pichai, C.E.O. of Alphabet/Google, unveiled a suite of A.I. tools — including for email, spreadsheets and drafting all manner of text. While there is some discussion as to whether Meta’s recent decision to give away its A.I. computer code will accelerate its progress, the reality is that all competitors to Alphabet and Microsoft remain far behind. The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well. History has repeatedly demonstrated that control over information is central to who has power and what they can do with it. At the beginning of writing in ancient Mesopotamia, most scribes were the sons of elite families, primarily because education was expensive. In medieval Europe, the clergy and nobility were much more likely to be literate than ordinary people, and they used this advantage to reinforce their social standing and legitimacy. 
Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications. Access to facts about the outside world weakened and ultimately helped to destroy Soviet control over Poland, Hungary, East Germany and the rest of its former sphere of influence. Starting in the 1990s, the internet offered even lower-cost ways to express opinions. But over time the channels of communication concentrated into a few hands, including Facebook, whose algorithm exacerbated political polarization and in some well-documented cases also fanned the flames of ethnic hatred. In authoritarian regimes, such as China, the same technologies have turned into tools of totalitarian control. With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer. There is no easy way to access the footnotes or links that let users explore the underlying sources. This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities. DeepMind, a company now owned by Google, is proud of developing algorithms that can beat human experts at games such as chess and Go. This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth. Combined, these ideas are cementing the notion that the most productive applications of A.I. replace humankind. 
Doing away with grocery store clerks in favor of self-checkout kiosks does very little for the productivity of those who remain employed, for example, while also annoying many customers. But it makes it possible to fire workers and tilt the balance of power further in favor of management. We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism natural...
The New Age of Hiring: AI Is Changing the Game for Job Seekers - CNET
When I was growing up, way before artificial intelligence captured the zeitgeist, applying for a job was relatively simple: Print out a fancy resume, dress smart and be ready to interview, in person. Those old rules no longer apply. Over the last two decades, digital technologies have radically transformed the employment landscape. Automated software, colossal professional databases and one-click applications now dominate the hiring and recruitment process. If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources. The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency. For job seekers, AI-powered hiring software is a black box. You might commit to a time-consuming online application only to be ghosted or receive a generic rejection email without feedback. "No one really understands what's happening to them as they navigate the process," says Mitra Ebadolahi, senior project director for economic justice at Upturn, a technology and equity nonprofit. That's disempowering, she adds. 
Technology, though, is a curse and a blessing, depending on how it's wielded and who's wielding it. An array of online tools — such as resume-boosting software that improves keyword matching and generative AI platforms that draft cover letters — are helping applicants avoid HR's "no" pile, the point of no return. Plus, with algorithm-based career platforms like LinkedIn, ZipRecruiter and Indeed, there's more access to job postings than ever before. Cyberspace is crowded with ways to adapt to this brave new world.

When I ask experts whether automation will completely take over hiring, most say recruitment remains a human-driven process. Tailoring your application for an ATS just helps you get a foot in the door, says Ankur Chaudhari, product lead for Jobscan, an online tool that optimizes resumes. Chaudhari compares the process to an entrance exam, like the GMAT. Even if you score high, you'll still need to compete with other students for a top-ranking business school. If you score low, you'll never have the chance to show how qualified you really are. Job seekers will always be the underdogs in the hiring process, with or without AI. Knowing the rules of the game won't change that fact, but it could give you a leg up.

Kind regards, robot

Lauren Milligan, an Illinois-based career coach and resume writer, works with clients who've been out of the job market for some time. Disenchanted by the idea of being evaluated by AI, they enlist her business, ResuMayday, for help. For large-volume hiring, the majority of resumes go through a computer software program called an Applicant Tracking System. An ATS scans, collects and sorts resumes, allowing hiring managers to screen candidates and track their progress quickly. Jobscan

"Job seekers are behind the eight ball in every stretch," Milligan says. That's because of an unfamiliar, and frankly impersonal, application process. A machine screens the majority of resumes that travel from an IP address to an employer's database.
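To make the keyword-matching idea concrete, here is a minimal Python sketch of what a resume scanner might do. This is an illustration of the general technique only, not Jobscan's or any ATS vendor's actual algorithm, and the resume text and keyword list are invented for the example.

```python
import re

def keyword_match_score(resume: str, job_keywords: set[str]) -> float:
    """Return the fraction of the posting's keywords found in the resume."""
    # Lowercase and tokenize; allow +, # and . so terms like "c++" survive.
    words = set(re.findall(r"[a-z][a-z+#.]*", resume.lower()))
    return len(job_keywords & words) / len(job_keywords)

resume = "Senior analyst with Python, SQL and Tableau experience."
keywords = {"python", "sql", "tableau", "excel"}
print(keyword_match_score(resume, keywords))  # 0.75 -- "excel" is missing
```

Real systems add synonym lists, phrase matching and weighting, but the basic mechanism, counting overlap between a job description and a resume, is why "tailoring your application for an ATS" helps get a foot in the door.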
For larger corporations handling thousands of resumes, automation can relieve burdensome administrative tasks and increase efficiency while cutting costs. Nearly 99% of Fortune 500 companies filter candidates through a major ATS such as Workday, Taleo, Jobvite, Greenhouse or Lever. Automated tools might be used during multiple stages in the...
How AI art killed an indie book cover contest - The Verge
The book cover for M.V. Prindle’s Bob the Wizard shows a coiffed man in sunglasses, smoke dancing from his mouth as a gray, ominous sky swirls behind him. A small fairy-like creature flutters nearby, and the folds and shadows of Bob’s jacket and beard fade into one while a bright green key hangs around his neck. The book tells the story of a “shotgun-wielding ex-garbage man” on the hunt for his family’s killer, the chase winding through a mystical world.

Bob the Wizard’s cover was a hit. In May, it won the Self-Published Fantasy Blog-Off (SPFBO) cover contest, an annual competition run by author Mark Lawrence that highlights indie authors in the fantasy genre. But the victory didn’t last long. The same day the winner was announced, readers and fans on Twitter were questioning whether the art was created at least in part using AI tools. The incident highlighted a growing crisis of trust in science fiction and fantasy publishing: in a world where AI-generated media is common, do you know the work you’re looking at was made by a human?

The SPFBO’s cover contest explicitly outlawed using AI tools, and the winning artist, Sean Mauss, initially insisted that he had made the art himself. He even shared a trove of documents and Photoshop files that he said proved the finished product was his own. Readers found the evidence unconvincing. Using a Photoshop layer in files the artist had shared, Twitter users scoured the archives of Midjourney, a generative AI system, and found images that matched elements in the Bob the Wizard cover. The username that created the images was even spotted in a file name. The striking cover art, it seemed, was simply a collage of Midjourney outputs. Within a day, Mauss had withdrawn the submission, deactivated several social media accounts, and apparently taken a personal website offline. (An email sent to an address on an archived version of the site wasn’t returned.)
Prindle, the book’s author, said on Twitter that he was misled and has since hired a new artist to do the cover. “I’ve woken up to compelling evidence that the cover was at least partly AI generated, breaking the rules of the contest,” Lawrence wrote on his blog. “So, in addition to having been withdrawn, it’s now also disqualified under the existing rules.”

But Lawrence went further than disqualifying Mauss’ entry. In the same blog post, he announced there wouldn’t be a cover contest going forward. In tweets, Lawrence made clear he’s uninterested in litigating future debates about whether art is human- or machine-made. “I think it needs to be a separate contest, [organized] by someone with the necessary expertise and the appetite for controversy,” he wrote in response to someone suggesting a way forward. “That’s not me.” (Lawrence didn’t respond to requests for comment.)

The cover contest saga comes at a time when the fantasy and science fiction community is wrestling with what role, if any, generative AI tools have in the industry. Earlier this year, prominent magazines like Clarkesworld and Asimov’s Science Fiction said they were experiencing a deluge of low-quality AI-generated short stories, overwhelming their publications and, at times, even forcing outlets to temporarily close submissions. Though editors said they could spot the works almost immediately, sifting through the influx was a time-suck, forcing publishers to wade through a new kind of spam coming from people outside the industry. Now, the community of writers, artists, and readers is confronted with a new reality: AI-aided work that, at least at first, can pass for a human’s output.
Soon after the cover contest controversy began, other authors started to suspect they’d unwittingly paid for AI-generated work by Mauss. Michael R. Fletcher and Clayton W. Snyder had both been impressed by the Bob the Wizard cover, and they’d commissioned Mauss to produce art for two books back in April. “One of the things we specified [with Mauss] right off the bat was that none of this art be AI-generated in the first place. We wanted an actual working artist to do the art,” Snyder says. But the fiasco — and Mauss’ disappeara...
Microsoft to move top AI experts from China to new lab in Canada - Financial Times
'Hold off from having kids,' warns AI expert Mo Gawdat - Euronews
By Sarah Palmer • Updated: 08/06/2023 - 09:24

Mo Gawdat, artificial intelligence (AI) expert and ex-chief business officer at Google X, has issued a terrifying warning about the dangers of AI in a podcast interview: people who don’t already have children should hold off as the rapid ascent of AI continues. "The risks are so bad, in fact, that when considering all the other threats to humanity, you should hold off from having kids if you are yet to become a parent," he told host Steven Bartlett on the Diary of a CEO podcast.

It’s not the first time tech industry executives have issued such a warning. Earlier this year, key figures including Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking developers to hold off on further innovations for six months so the industry and end users have time to process the latest advances. The Centre for AI Safety also issued a statement that says: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Sam Altman, CEO of OpenAI, the creator of the wildly popular chatbot ChatGPT, has warned of "existential risk".

'Biggest challenge humanity has ever faced'

Speaking on the matter, Gawdat went so far as to compare our future reality to popular dystopian films like Blade Runner. "There has never been such a perfect storm in the history of humanity," Gawdat said. "Economic, geopolitical, global warming, climate change, the whole idea of AI, this is a perfect storm, the depth of uncertainty… it has never been more intense. If you really loved your kids, would you really want to expose them to all this?"

The interview comes after Bartlett appointed Gawdat as chief AI officer at his marketing agency, Flight Story. "I have spent my career fascinated by the role that technology plays, and now the biggest challenge humanity has ever faced is upon us," Gawdat said.
"Artificial intelligence is the culmination of technological advancement and it is my view that it will be unprecedented in defining the way the world is shaped". "The sophistication of digital intelligence is such that it has become autonomous and is something that needs to be appealed to, rather than controlled," he added. "It’s vital we stay attuned to how to do this, or risk being left behind".
Announcing Microsoft's AI Customer Commitments - The Official Microsoft Blog - Microsoft
AI is creating unparalleled opportunities for businesses of every size and across every industry. We are seeing our customers embrace AI services to drive innovation, increase productivity and solve critical problems for humanity, such as the development of breakthrough medical cures and new ways to meet the challenges of climate change.
At the same time, there are legitimate concerns about the power of the technology and the potential for it to be used to cause harm rather than benefit. It’s not surprising, in this context, that governments around the world are looking at how existing laws and regulations can be applied to AI and are considering what new legal frameworks may be needed. Ensuring the right guardrails for the responsible use of AI will not be limited to technology companies and governments: every organization that creates or uses AI systems will need to develop and implement its own governance systems. That’s why today we are announcing three AI Customer Commitments to assist our customers on their responsible AI journey.

First, we will share what we are learning about developing and deploying AI responsibly and assist you in learning how to do the same. Microsoft has been on a responsible AI journey since 2017, harnessing the skills of nearly 350 engineers, lawyers and policy experts dedicated to implementing a robust governance process that guides the design, development and deployment of AI in safe, secure and transparent ways. More specifically, we are:

Sharing expertise: We are committed to sharing this knowledge and expertise with you by publishing the key documents we developed during this process so that you can learn from our experiences. These include our Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on the implementation of our responsible AI by design approach.

Providing training curriculum: We will also share the work we are doing to build a practice and culture of responsible AI at Microsoft, including key parts of the curriculum that we use to train Microsoft employees.

Creating dedicated resources: We will invest in dedicated resources and expertise in regions around the world to respond to your questions about deploying and using AI responsibly.
Second, we are creating an AI Assurance Program to help you ensure that the AI applications you deploy on our platforms meet the legal and regulatory requirements for responsible AI. This program will include the following elements:

Regulator engagement support: We have extensive experience helping customers in the public sector and highly regulated industries manage the spectrum of regulatory issues that arise when dealing with the use of information technology. For example, in the global financial services industry, we worked closely for a number of years with both customers and regulators to ensure that this industry could pursue digital transformation on the cloud while complying with its regulatory obligations. One learning from this experience has been the industry’s requirement that financial institutions verify customer identities, establish risk profiles and monitor transactions to help detect suspicious activity, the “know your customer” requirements. We believe that this approach can apply to AI in what we are calling “KY3C,” an approach that creates certain obligations to know one’s cloud, one’s customers and one’s content. We want to work with you to apply KY3C as part of our AI Assurance Program.

Risk framework implementation: We will attest to how we are implementing the AI Risk Management Framework recently published by the U.S. National Institute of Standards and Technology (NIST) and will share our experience engaging with NIST’s important ongoing work in this area.

Customer councils: We will bring customers together in customer councils to hear their views on how we can deliver the most relevant and compliant AI technology and tools.

Regulatory advocacy: Finally, we’ll play an active role in engaging with governments to promote effective and interoperable AI regulation. The recently launched Microsoft blueprint for AI governance presents our proposals to governments and other stakeholders for appropriate regulatory frameworks for AI.
We have made avail...
‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images - CNN
New York CNN — For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities. But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify whether the image is organic or whether it has been manipulated or generated by AI. Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

“When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.
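The "sign at the point of capture" idea can be sketched in a few lines of Python. This is an assumed mechanism for illustration only (Truepic's actual product is far more involved), and the device key and metadata fields here are hypothetical: a key baked into the capture device signs the image bytes together with the capture metadata, so any later change to either one breaks verification.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"key-provisioned-to-this-camera"  # hypothetical device secret

def sign_capture(image_bytes: bytes, metadata: dict) -> str:
    """HMAC over the image plus its capture metadata, hex-encoded."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """True only if neither the pixels nor the metadata were altered."""
    return hmac.compare_digest(sign_capture(image_bytes, metadata), signature)

img = b"\x89PNG...raw pixel data..."
meta = {"time": "2023-05-22T14:03:00Z", "device": "example-phone"}
sig = sign_capture(img, meta)
print(verify_capture(img, meta, sig))         # True
print(verify_capture(img + b"x", meta, sig))  # False: pixels changed
```

Production systems typically use public-key signatures (so anyone can verify without holding the secret) and standardized manifests, but the core property is the same: the signature binds the content to the moment of capture.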
Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.” A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not. But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.” “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.” “The hope,” Farid said, is to get to a point where “some teenager in his parents basement can’t create an image and swing an election or move the market half a trillion dollars.” Companies are broadly taking two approaches to address the issue. One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature. Reality Defender and Hive Moderation are working on the former. 
With their platforms, users can upload existing images to be scanned and then receive an instant breakdown with a percentage indicating the likelihood for whether it’s real or AI-generated based on a large amount of data. Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerato...
Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations - The New York Times
In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chatbot could lead him astray. Steven A. Schwartz told a judge considering sanctions that the episode had been “deeply embarrassing.” Credit: Jefferson Siegel for The New York Times

June 8, 2023, updated 5:50 p.m. ET

As the court hearing in Manhattan began, the lawyer, Steven A. Schwartz, appeared nervously upbeat, grinning while talking with his legal team. Nearly two hours later, Mr. Schwartz sat slumped, his shoulders drooping and his head rising barely above the back of his chair.

For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT. The judge, P. Kevin Castel, said he would now consider whether to impose sanctions on Mr. Schwartz and his partner, Peter LoDuca, whose name was on the brief.

At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him. “God, I wish I did that, and I didn’t do it,” Mr. Schwartz said, adding that he felt embarrassed, humiliated and deeply remorseful. “I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.

In contrast to Mr. Schwartz’s contrite posture, Judge Castel gesticulated often in exasperation, his voice rising as he asked pointed questions. Repeatedly, the judge lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not better check his work. As Mr.
Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens. “I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said. An onlooker let out a soft, descending whistle. The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges. “This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.” The case involved a man named Roberto Mata, who had sued the airline Avianca claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York. Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed. After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real. Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases. 
“I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said. Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet. Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critic...
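The "which fragments of text should follow" idea can be illustrated with a toy bigram model, a deliberately tiny stand-in for the billion-example statistical models the article describes. The corpus here is invented, and real LLMs use neural networks rather than raw counts, but the underlying principle, predicting a continuation from observed statistics rather than looking facts up, is the same, and it is exactly why such systems can emit plausible-sounding citations that do not exist.

```python
from collections import Counter, defaultdict

corpus = ("the court cited the case and the court "
          "dismissed the case").split()

# Count which word follows which: the crude statistical core of
# "predict the next fragment," scaled down from billions of examples.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Most frequent continuation seen in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict("court"))  # a word that followed "court" in the corpus
```

Nothing in this model knows whether "the court cited" describes a real case; it only knows the pattern is statistically plausible, which is the failure mode that tripped up Mr. Schwartz.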
AI revives moribund software stocks - Financial Times
Why Yext Stock Rocketed 44% Today - The Motley Fool
What happened
Shares of cloud-based information company Yext (YEXT 38.44%) are rocking and rolling Wednesday morning, up a shocking 44.2% through 11:30 a.m. ET after the company beat analyst expectations in its first-quarter earnings report last night.
Heading into the quarter, analysts had forecast that Yext would have earnings per share (EPS) of only $0.05, adjusted for one-time items, on sales of $98.6 million. Instead, it had EPS of $0.09, and sales came in at $99.5 million.
So what
So Yext beat on earnings. The real question for investors today, though, is whether the beat was by a big enough margin to justify the stock's sharp run-up in price. And that is very much up for debate.
While sales exceeded estimates, they still grew only 1% year over year, despite Yext growing its customer base by 5%. And although the company had a bigger-than-expected adjusted profit, earnings as calculated according to generally accepted accounting principles (GAAP) were just breakeven. (And the total net loss was about $400,000.)
Now what
Investors might have been excited less by the first-quarter results themselves and more by what they said about the future. Calling the quarter "a strong start to the year," CEO Michael Walrath forecast sequential revenue growth to roughly $102 million.
And Walrath made sure to mention the magic words "artificial intelligence" (AI), to gin up investor excitement. He said Yext "is ideally positioned to help enterprises use generative AI, search, content management."
That $102 million projection would be more than the $100.1 million that Wall Street has forecast for this quarter, and Yext thinks EPS could reach $0.06 or $0.07 this quarter, versus the $0.05 the Street is expecting.
That being said, 2.5% growth isn't a lot, and $0.06 or $0.07 per share would be a sequential decline in adjusted earnings. (Yext made no promises about earning a GAAP profit.) For the time being, however, investors seem willing to overlook these quibbles.
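For readers checking the math, the 2.5% figure follows directly from the numbers reported above; the dollar figures are the article's, and the code is just the arithmetic.

```python
# The article's reported figures (in $ millions); not my estimates.
q1_sales = 99.5     # first-quarter revenue as reported
guided_q2 = 102.0   # CEO's sequential revenue forecast

growth_pct = (guided_q2 - q1_sales) / q1_sales * 100
print(f"{growth_pct:.1f}%")  # 2.5%
```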
Yext beat earnings. Yext raised guidance. And Yext called itself an AI company. That seems good enough for now. Rich Smith has no position in any of the stocks mentioned. The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy.
Microsoft has no shame: Bing spit on my 'Chrome' search with a fake AI answer - The Verge
It was time to download Google Chrome on a new Windows 11 computer. I typed “Chrome” into the Microsoft Edge search bar. I was greeted with a full-screen Microsoft Bing AI chatbot window, which promptly told me it was searching for... Bing features. Not Chrome, the thing I’d asked for. Search query: “Chrome.” Search result: “news articles about Bing features.” Screenshot by Sean Hollister / The Verge

I picked my jaw up off the floor and tried again. Same result every time. Same exact text, too. This is clearly not Microsoft’s GPT-4-powered chatbot at work — it’s a completely canned interaction. Here’s how much of my screen it took up, and what it looks like zoomed in: Every search result link is pushed entirely off my screen by this canned ad copy. Screenshot by Sean Hollister / The Verge This supposed AI response even has a headline: “Bing: The Search Engine That Does More Than Just Search.” Screenshot by Sean Hollister / The Verge

I reproduce it on a different computer. Across the country, a colleague tells me he saw the exact same thing setting up his wife’s gaming laptop. Across the ocean, another colleague pulls it up on his mobile phone. It’s not universal, but it’s absolutely not a tiny experiment in a single region, either.

Maybe this doesn’t seem like a big deal to you. I’m using Microsoft’s search engine in Microsoft’s browser on Microsoft’s operating system, after all — why should Microsoft willingly link me to a competitor? Let me put things a different way: Microsoft just gave itself a full-screen ad in search results by faking an AI interaction. This “search result” is juicing Microsoft’s own product instead of respecting its users’ intent. Yes, Microsoft has previously plugged Edge when you search for Chrome — but not like this.
Even if you don’t agree with me that Microsoft is yet again shoving its Edge where it doesn’t belong, this kind of move makes a mockery of the company’s AI ambitions. Microsoft CEO Satya Nadella claims he wants Edge to genuinely compete. “Let’s build first a product that is competitive in the marketplace that’s actually serving user needs,” he told us in a February interview, when my editor-in-chief Nilay Patel asked whether the Bing AI browser integration was partially an attempt to “capture marketshare from Chrome.”

“It’s not just a search engine; it’s an answer engine,” Nadella claimed earlier in the show, “because we’ve always had answers, but with these large models, the fidelity of the answers just gets so much better.”

Would you call replacing a “Chrome” search with a juiced “news articles about Bing features” search “better”? I know where I land on that. But it’s important to both Microsoft and Google that their answers are seen as “better,” because they’re pushing aside the ten blue links that have dominated search for so long. We recently worried out loud about whether Google’s new Search Generative Experience would prioritize ads over actual answers, but it looks like we won’t have to wait to see how brazen these companies can get. Unless there’s strong pushback, I would expect the ads to win whenever it’s profitable or convenient.

When asked for comment, a spokesperson forwarded this generic statement from Microsoft product marketing director Jason Fischel:

“We often experiment with new features, UX, and behaviors to test, learn, and improve experiences for our customers. These tests are often brief and do not necessarily represent what is ultimately or broadly provided to customers.”

Shortly after we published this story with that comment, Fischel confirmed Microsoft has pulled the plug on this particular idea. “The experience is no longer flighting.” Sure enough, I no longer see it.
Some open questions: Did this represent what Microsoft wants to provide to customers? Would it have just been an “experiment” if I hadn’t put Microsoft on blast? And given that we personally saw this on the other side of the country and the other side of an ocean, what is the company’s definition of “broadly”? I asked Microsoft a few such questions, and I’ll update you if we receive answers. As we keep saying every time Microsoft pulls this kind of shit, it’s a shame because Edge is actually good. I was just beginning to try Microsoft’s br...
The AI Boom Is Pulling Tech Entrepreneurs Back to San Francisco - The New York Times
Doug Fulop’s and Jessie Fischer’s lives in Bend, Ore., were idyllic. The couple moved there last year, working remotely in a 2,400-square-foot house surrounded by trees, with easy access to skiing, mountain biking and breweries. It was an upgrade from their former apartments in San Francisco, where a stranger once entered Mr. Fulop’s home after his lock didn’t properly latch.

But the pair of tech entrepreneurs are now on their way back to the Bay Area, driven by a key development: the artificial intelligence boom. Mr. Fulop and Ms. Fischer are both starting companies that use A.I. technology and are looking for co-founders. They tried to make it work in Bend, but after too many eight-hour drives to San Francisco for hackathons, networking events and meetings, they decided to move back when their lease ends in August.

“The A.I. boom has brought the energy back into the Bay that was lost during Covid,” said Mr. Fulop, 34.

The couple are part of a growing group of boomerang entrepreneurs who see opportunity in San Francisco’s predicted demise. The tech industry is more than a year into its worst slump in a decade, with layoffs and a glut of empty offices. The pandemic also spurred a wave of migration to places with lower taxes, fewer Covid restrictions, safer streets and more space. And tech workers have been among the most vocal groups to criticize the city for its worsening problems with drugs, housing and crime.

But such busts are almost always followed by another boom. And with the latest wave of A.I. technology — known as generative A.I., which produces text, images and video in response to prompts — there’s too much at stake to miss out. Investors have already announced $10.7 billion in funding for generative A.I. start-ups within the first three months of this year, a thirteenfold increase from a year earlier, according to PitchBook, which tracks start-ups.
Tens of thousands of tech workers recently laid off by big tech companies are now eager to join the next big thing. On top of that, much of the A.I. technology is open source, meaning companies share their work and allow anyone to build on it, which encourages a sense of community. “Hacker houses,” where people create start-ups, are springing up in San Francisco’s Hayes Valley neighborhood, known as “Cerebral Valley” because it is the center of the A.I. scene. And every night someone is hosting a hackathon, meet-up or demo focused on the technology.

In March, days after the prominent start-up OpenAI unveiled a new version of its A.I. technology, an “emergency hackathon” organized by a pair of entrepreneurs drew 200 participants, with almost as many on the waiting list. That same month, a networking event hastily organized over Twitter by Clement Delangue, the chief executive of the A.I. start-up Hugging Face, attracted more than 5,000 people and two alpacas to San Francisco’s Exploratorium museum, earning it the nickname “Woodstock of A.I.”

[Image: More than 5,000 people attended the so-called Woodstock of A.I. in San Francisco in March. Credit: Alexy Khrabrov]

Madisen Taylor, who runs operations for Hugging Face and organized the event alongside Mr. Delangue, said its communal vibe had mirrored that of Woodstock. “Peace, love, building cool A.I.,” she said.

Taken together, the activity is enough to draw back people like Ms. Fischer, who is starting a company that uses A.I. in the hospitality industry. She and Mr. Fulop got involved in the 350-person tech scene in Bend, but they missed the inspiration, hustle and connections in San Francisco. “There’s just nowhere else like the Bay,” Ms. Fischer, 32, said.

Jen Yip, who has been organizing events for tech workers over the past six years, said that what had been a quiet San Francisco tech scene during the pandemic began changing last year in tandem with the A.I. boom.
At nightly hackathons and demo days, she watched people meet their co-founders, secure investments, win over customers and network with potential hires. “I’ve seen people come to an event with an idea they want to test and pitch it to 30 different people in the course of one night,” she said. Ms. Yip, 42, runs a secret group of 800 people focused on A.I. and robotics called Society of Artificers. Its monthly events have become a hot ticket, often selling ou...
Stop the fearmongering on AI, urges Microsoft - The Times
AI experts who warned about the technology wiping out humanity should dial down their warnings, the digital minister and Microsoft’s president have said. Brad Smith, the vice-chairman and president of Microsoft, called on those in the “fear parade” to “ratchet down the rhetoric”. Paul Scully, the tech and digital minister, said that a doom-laden narrative did not lend itself to thoughtful policymaking. Both made their pleas after a series of alarming warnings about AI.

[Matt Clifford predicted that in two years’ time, AI systems would be powerful enough to “kill many humans”. JEFF OVERS/BBC]

Matt Clifford, an adviser to the prime minister, said on Monday that AI systems would be powerful enough to “kill many humans” in two years. Last week Sam Altman, the boss of OpenAI, which developed ChatGPT, signed a statement along with hundreds of leading voices in the field calling for a
Why AI Will Save the World - Andreessen Horowitz
The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

An even shorter description of what AI could be: A way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction. Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality.
Without the application of intelligence to all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here. AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.

Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.

Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet. Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit. The creative arts will enter a golden age, as AI-augmented artist...
For better or worse, Apple is avoiding the AI hype train - The Verge
Five minutes into Google’s I/O conference in May, Verge staffers started taking bets on how many times “AI” would be mentioned onstage. It seemed like every presenter had to say it at least once or get stuck with a cattle prod by Sundar Pichai. (In the end, we stopped betting and made a supercut.) Watching WWDC, though, the book ran in the opposite direction: would anyone from Apple mention “AI” at all? It turns out, no, not even once. The technology was referred to, of course, but always in the form of “machine learning” — a more sedate and technically accurate description.

As many working in the field itself will tell you, “artificial intelligence” is a much-hated term: both imprecise and overdetermined, more reminiscent of sci-fi mythologies than real, tangible tech. Writer Ted Chiang put it well in a recent interview: what is artificial intelligence? “A poor choice of words in 1954.”

Apple’s AI allergy is not new. The company has long been institutionally wary of “AI” as a force of techno-magical potency. Instead, its preference is to stress the functionality of machine learning, highlighting the benefits it offers users like the customer-pleasing company it is. As Tim Cook put it in an interview with Good Morning America today, “We do integrate it into our products [but] people don’t necessarily think about it as AI.” And what does this look like?
Well, here are a few of the machine learning-powered features mentioned at this year’s WWDC, spread across Apple’s ecosystem:

- Better autocorrect in iOS 17 “powered by on-device machine learning”
- A Personalized Volume feature for AirPods that “uses machine learning to understand environmental conditions and listening preferences”
- An improved Smart Stack on watchOS that “uses machine learning to show you relevant information right when you need it”
- A new iPad lock screen that animates live photos using “machine learning models to synthesize additional frames”
- “Intelligently curated” prompts in the new Journal app using “on-device machine learning”
- And 3D avatars for video calls on the Vision Pro generated using “advanced ML techniques”

[One of the most ambitious use cases for AI at WWDC was the creation of new 3D avatars for use in Apple’s Vision Pro headsets. GIF: Apple]

Apart from the 3D avatars, these are all fairly rote: welcome but far from world-changing features. In fact, when placed next to the huge swing for the fences that is the launch of the Vision Pro, the strategy looks not only conservative but also timid and perhaps even unwise. Given recent advances in AI, the question has to be asked: is Apple missing out?

The answer to this is “a little bit yes and a little bit no.” But it’s helpful to first compare the company’s approach with that of its nearest tech rivals: Google, Microsoft, and Meta. Of this trio, Meta is the most subdued. It’s certainly working on AI tools (like Mark Zuckerberg’s mysterious “personas” and AI-powered advertising) and is happy to publicize its often industry-leading research, but a big push into the metaverse has left less space for AI. By contrast, Google and Microsoft have gone all in. At I/O, Google announced a whole family of AI language models along with new assistant features in Docs and Gmail and experiments like an AI notebook.
At the same time, Microsoft has been rapidly overhauling its search engine Bing, stuffing AI into every corner of Office, and reinventing its failed digital assistant Cortana as the new AI-powered Copilot. These are companies seizing the AI moment, squeezing it hard, and hoping for lots of money to fall out. So should Apple do the same? Could it? Well, I’d argue it doesn’t need to — or at least, not to the same degree as its rivals. Apple is a company built on hardware, on the iPhone and its ecosystem in particular. There’s no pressure for it to reinvent search like Google or improve its productivity software like Microsoft. All it needs to do is keep selling phones, and it does that by making iOS as intuitive and welcoming as possible. (Until, of course, there’s a new hardware platform to dominate, which may or may not be emerging with the Vision Pro.) There’s only one area, I think, where Apple is missing out by not embracing AI. That’s Siri. The company’s...
The AI Mona Lisa Explains Everything - The Atlantic
Depending on how you look at it, generative AI is either astonishingly powerful or totally pointless.

[Ben Kothe / The Atlantic. Source: Getty.]

The Mona Lisa is small. Less than three feet tall and about two feet wide, it hangs tiny in the biggest exhibition room at France’s Louvre Museum. And in the past two or so weeks, some vigilante AI artists have decided that it should be bigger—much bigger. They’re making that happen using a beta tool in Adobe Photoshop called “generative fill.” It launched late last month and allows users to fill in, augment, or expand an image using AI—think ChatGPT but for Photoshop. (It uses Adobe’s “Firefly” AI models, which are trained on its stock photography.) Amateur and professional editors alike can use a text prompt to, say, add clouds to a picture of a blue sky, or widen a photo of a beach to include additional, computer-rendered beach.

In a new, enlarged version of Leonardo da Vinci’s portrait created with the tool, the painting’s subject takes up just a small part of the canvas. She is there, familiar as ever, except she’s surrounded by a brooding landscape. And that’s about it. The bottom half of her body is still missing. Another post takes Vincent van Gogh’s The Bedroom and grows it into a bigger bedroom. Perhaps the most outrageous of the bunch builds on Piet Mondrian’s Composition With Red Blue and Yellow, surrounding the famously minimalist work with additional rectangles of varying sizes. Others used generative fill to widen classic album covers or film shots.

People got very angry about these expansions. They pointed out that the generated images miss an important point: Artists compose and constrain their works intentionally. Da Vinci painted a portrait not because he was incapable of painting a landscape, but because he chose to paint a portrait. The revised works, they complained, weren’t even good! If one were to go about expanding the Mona Lisa, one could at the very least have the decency to give her some legs.
But the AI Mona Lisa is the perfect metaphor for where we are with generative AI. We can quickly and easily do things that once took a lot of time and skill. Reimagining the Mona Lisa from a wider perspective has been possible ever since there was a Mona Lisa; it just would have required actual craftsmanship, paint, a canvas, and so on. Now a computer can do it for you in mere seconds. But why? Was there something wrong with the original Mona Lisa? Even if you’re using the tools in earnest, there’s a good chance their output will be derivative or dull, because generative AI is fundamentally about remixing rather than creating something entirely new.

Most of the use cases for generative AI being sold to us right now are like this. We are told that this AI will completely change the world as we know it—Bill Gates and other technologists are claiming that it is as revolutionary as the invention of the internet. “AI is the tech the world has always wanted,” OpenAI CEO Sam Altman tweeted last month. And then we are offered applications that fall well short of world-changing. Bing is integrating AI into its search functionality so that users can … well, what exactly? Find answers in a different way? Meanwhile, people are already losing their jobs to chatbots. AI enthusiasts will breathlessly tell you about how ChatGPT can draft work emails or render PowerPoint presentations in seconds. But to what end? People are right to wonder if we really need more emails, just like they’re right to wonder if we really need a bigger Mona Lisa. All of this computational firepower is being directed at uses that seem more like corporate gimmicks than anything substantive.

Which isn’t to say that applications of AI won’t someday be world-altering, or that we won’t be able to harness its power in ways that move us. It’s just that AI hype currently outpaces its abilities.
Contrast the viral Mona Lisa tweet with the other big AI story last week: an open letter signed by hundreds of experts warning that, unchecked, artificial intelligence could pose an extinction-level threat on par with nuclear war. Together, these stories offer a perfect synopsis of the moment: AI is going to either kill us, or bore us with endless riffs on Edward Hopper. If th...
Zoom can now give you AI summaries of the meetings you've missed - The Verge
Zoom now lets users catch up on missed meetings using AI. The feature, which Zoom first announced in March, has finally arrived as a trial for users in “select plans,” according to a post on Zoom’s website. With Zoom IQ — the app’s AI-powered assistant — hosts can now generate summaries of meetings and send them to users through Zoom Team Chat or email, all without actually recording the meetings. It’s hard to tell how accurate (or detailed) the meeting summaries are without trying them out for ourselves, but it still seems like a much quicker way to get a recap of anything you’ve missed, as opposed to watching an entire prerecorded meeting.

In addition to AI-generated meeting summaries, Zoom is launching the ability to compose messages in Team Chat using AI. The feature leverages OpenAI’s technology to create messages “based on the context of a Team Chat thread” and also lets you customize the tone or length of a message before you send it.

[Image: Zoom]

All of these features build upon what Zoom’s IQ assistant already offers, such as the ability to create meeting highlights and chapters. In the near future, Zoom plans on rolling out several other AI-powered features through its partnerships with OpenAI and Anthropic. That includes the ability to write emails with AI using context from previous meetings, phone calls, and emails, as well as a way to summarize threads in Zoom Team Chat “with the click of a button.” Zoom is also working on a way for you to use AI to “discreetly” obtain an in-chat summary of a meeting when you arrive late, create whiteboard drafts with text prompts, and automatically organize ideas into categories during brainstorming sessions.

According to Zoom, the company “collects data from users’ interactions with the Zoom IQ features, including inputs, messages, and AI-generated content” and could use this information to train Zoom IQ AI models (but not third-party ones) unless you choose not to share data with Zoom.
Alongside Zoom, other productivity platforms, including Salesforce’s Slack and Microsoft 365, have begun incorporating AI features as well. Slack, for example, lets you reply to colleagues with ChatGPT and could soon have AI attend Huddles on your behalf, while Microsoft has rolled out an AI Copilot for its 365 apps. For now, though, only Zoom IQ’s meeting summaries and chat compose features are available as a free trial “for a limited time” to subscribers of Zoom One (Enterprise Plus, Enterprise, Business Plus, Business, Pro) and some Zoom legacy bundles (Enterprise Named Host, Enterprise Active Host, Zoom Meetings Enterprise, Zoom Meetings Business, Zoom Meetings Pro). It’s unclear how much these features will cost after the free trial, but Zoom spokesperson Lacretia Taylor tells The Verge that the company will reveal pricing information “in the coming months.”
Apocalyptic panic and AI doomerism need to give way to analysis of real risks - VentureBeat
June 5, 2023 10:07 AM

[Image Credit: Created with Midjourney]

The rapid advance of generative AI marks one of the most promising technological advancements of the past century. It has evoked excitement and, like nearly all other technological breakthroughs of the past, fear. It is promising to see Congress and Vice President Kamala Harris, among others, taking the issue so seriously. At the same time, much of the discourse on AI has been tilting further towards fear-mongering, detached from the reality of the technology. Many favor framings that latch on to familiar science fiction narratives of doom and destruction. The anxiety around this technology is understandable, but apocalyptic panic needs to give way to a thoughtful and rational conversation about what the real risks are and how we can mitigate them.

So what are the risks of AI? First, there are fears that AI could make it easier to impersonate people online and create content that makes it hard to differentiate between real and false information. These are legitimate concerns, but they are also incremental challenges to existing problems. We, unfortunately, already have a wealth of misinformation online. Deepfakes and edited media exist in abundance, and phishing emails started decades ago. Similarly, we know the impact that algorithms can have on information bubbles, amplifying misinformation and even racism. AI could make these problems more challenging, but it hardly created them, and AI is simultaneously being used to mitigate them.

The second bucket is the more fanciful realm: that AI could amass super-human intelligence and potentially overtake society.
These are the kind of worst-case scenarios that have gripped society’s imagination for decades, if not centuries. We can and should consider all theoretical scenarios, but the notion that humans will accidentally create a malevolent, omnipotent AI strains credulity and feels to me like AI’s version of the claim that the Large Hadron Collider at CERN might open a black hole and consume the earth.

Technology always wants to develop

One proposed solution, slowing technological development, is a crude and clumsy response to the rise of AI. Technology always continues to develop. It’s a matter of who develops it and how they deploy it. Hysterical responses ignore the real opportunity for this technology to benefit society profoundly. For example, it is enabling the most promising advances in healthcare that we’ve seen in over a century, and recent work suggests that the productivity increase to knowledge workers could match or exceed history’s greatest leaps in productivity. Investment in this technology will save countless lives, create extraordinary economic productivity and enable a new generation of products to come to life. A nation that bars its citizens and organizations from accessing advanced AI would be the equivalent of denying its citizenry access to the steam engine, the computer or the internet. Delaying the development of this technology will mean millions of excess deaths, a major stall to relative national productivity and economic growth, and the ceding of economic opportunity to the nations that do enable the technology’s advance.

Responsible, thoughtful development

Moreover, democratic nations encumbering the development of advanced AI offer autocratic regimes the opportunity to catch up and reap the economic, medical and technological benefits earlier. Democratic nations must be the first to advance this technology and must do so in concert with the teams best equipped to deliver the technology, not in opposition to them.
At the same time, just as it would be a mistake to try to deny technological advancement, it would be equally foolish to allow the technology to develop without a responsible framework. There have been some productive first steps towards this, notably The White House’s AI Bill of Rights, Britain’s “pro-innovation approach,” and Ca...
Chegg Embraced AI. ChatGPT Ate Its Lunch Anyway - WIRED
In subjects such as engineering, chemistry, and statistics, which drive significant traffic to Chegg but often involve diagrams, there was a sense that relying too heavily on AI to parse visual information was unreasonable, the former employees say. So the ethics of unleashing an imperfect product gave Chegg pause. “We knew generative was coming down the pike,” says one former executive. “Text analysis was easy to embrace in the short run.”

In 2020, OpenAI’s GPT-3 model was released and made text generation much better. Some machine-learning leaders at Chegg wanted to get their hands on it, but one source says executives weren’t aggressive about securing access to the technology, which OpenAI did not open-source. Early this year, GPT-3’s successor was added to ChatGPT, and the centrality of generative AI to Chegg’s future became inarguable, carved as it was into the company’s dented user growth.

Fight Back

Chegg is now focused on proving with its in-house bot CheggMate that it’s possible to outcompete ChatGPT when it charges onto your turf. “We happened to be one of the industries that’s facing it first, and that gives us a wonderful opportunity to understand it deeper and sooner and come on to the other side of it with unique and value-creating products for our consumers,” says Schultz, the COO. The company has marshaled all extra hands onto CheggMate and AI development, including by reassigning teams that worked on collecting more data from users to personalize services through more traditional means. Brown, the CFO, told investors last month that the company’s summer interns will be fully focused on CheggMate. But Chegg doesn’t have the best record of developing products from scratch and has previously leaned on acquisitions, leaving some former executives closely following CheggMate unsure of its prospects. The new service also doesn’t exactly ease the ethical concerns.
Chegg has long faced allegations from colleges and universities that it enables cheating, as students secretly turn to its tools to complete homework and exams. Officially, Chegg bars dishonest use and carries out and supports integrity investigations, says Nina Huntemann, the company's chief academic officer. But former Chegg data scientist Eric Wang worries that CheggMate and similar applications could spread the cheating habit. Students feel overwhelmed and pressed for time, and feel they are competing for scarce opportunities, he says. "All of these forces drive students who know better to make decisions that are in hindsight not great," Wang says, suggesting that there could be better ways to support students and educators.

Select users, along with Chegg's subject matter experts and academic advisers, began testing CheggMate over the past couple of weeks, but it isn't expected to publicly launch until next year. That means it won't be ready for the US fall semester, when Chegg typically generates its greatest sales. Schultz says he's proud of the company's response to ChatGPT's arrival. "We weren't going to react overnight and just throw something up on the site," he says. "We have a responsibility to be thoughtful."

When a user types a query to CheggMate, it first attempts to categorize whether the request is for help understanding a concept, solving a particular problem, or concerning a particular subject, Schultz says. The system then tries to direct the question to the best resource, with the options including prompting GPT-4, having a human expert answer, or re-airing an old answer from Chegg's database. CheggMate is designed to keep users engaged through positive reinforcement and pushing related content. "We could say, 'Why don't you try this similar problem? Why don't you guess a step?'" says Huntemann, the chief academic officer.
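The two-step flow Schultz describes — classify the request, then route it to the cheapest adequate resource — can be sketched as a simple dispatcher. This is purely an illustrative outline under assumed names (`classify_query`, the route labels, the keyword heuristics); Chegg has not published how CheggMate's classifier or router actually works.

```python
# Hypothetical sketch of the query-routing flow described in the article.
# All function names, categories, and heuristics are assumptions for
# illustration, not Chegg's real implementation.
from dataclasses import dataclass


@dataclass
class Query:
    text: str


def classify_query(q: Query) -> str:
    """Step 1: crude stand-in for categorizing the request."""
    t = q.text.lower()
    if "explain" in t or "what is" in t:
        return "concept"          # help understanding a concept
    if any(ch.isdigit() for ch in t):
        return "problem"          # solving a particular problem
    return "subject"              # a general subject-level question


def route(q: Query) -> str:
    """Step 2: direct the question to the best resource."""
    category = classify_query(q)
    if category == "concept":
        return "reuse_archived_answer"   # re-air an old answer from the database
    if category == "problem":
        return "prompt_gpt4"             # generate a fresh step-by-step solution
    return "human_expert"                # escalate to a subject-matter expert


print(route(Query("Explain what is entropy")))   # reuse_archived_answer
print(route(Query("Solve 3x + 4 = 10")))         # prompt_gpt4
```

In a real system the classifier would be a trained model rather than keyword checks, but the economic logic is the same: answer from the archive when possible, generate when the question is novel, and reserve humans for the cases that need them.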
“Conversation allows us to extend the experience.” Chegg executives hope tuning their chatbot to education that way will make ChatGPT look less attractive as a homework helper. Pricing for CheggMate has not been determined; operating generative models is expensive, and those costs rise with usage. But two former employees say that having a human expert answer a question costs about $2. Generating a comparable response through GPT-4 possibly runs half a US cent, and having an expert edit it might cost $1 overall, they say, suggesting the economi...
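The per-answer figures the former employees cite make the incentive concrete with quick arithmetic. The numbers below are the article's estimates, not official Chegg costs, and the scaling to 1,000 answers is my own illustration.

```python
# Back-of-envelope comparison using the per-answer cost estimates quoted
# by two former employees (article figures, not official Chegg numbers).
COST_HUMAN_EXPERT = 2.00    # human expert writes the answer from scratch
COST_GPT4_ONLY = 0.005      # GPT-4 generates a comparable response (~half a cent)
COST_EXPERT_EDIT = 1.00     # GPT-4 draft, then a human expert edits it


def cost_per_thousand(unit_cost: float) -> float:
    """Scale a per-answer cost to 1,000 answers."""
    return unit_cost * 1000


for label, c in [("human only", COST_HUMAN_EXPERT),
                 ("GPT-4 only", COST_GPT4_ONLY),
                 ("GPT-4 + expert edit", COST_EXPERT_EDIT)]:
    print(f"{label}: ${cost_per_thousand(c):,.2f} per 1,000 answers")
```

At those rates, a thousand GPT-4-only answers cost about $5 versus $2,000 for human-written ones, which is why the economics push toward generation even when a human edit is layered on top.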