'Weird Al' Responded To AI-Music's Grammy Eligibility - UPROXX
In the past, the Grammys have received pushback for not keeping up with music’s growing genres. The coveted annual award ceremony has been looking to be more inclusive, announcing new categories, adjustments, and expanding eligibility requirements. But they made one decision that makes them technically less inclusive: Music generated solely by AI is excluded from being nominated. That prompted a joke from the one and only ‘Weird Al’ Yankovic.
Yankovic posted a headline from USA Today that read “Grammys exclude AI from winning awards: Only ‘human creators’ eligible.” The musician captioned it with some fun wordplay, writing, “Ugh. I keep telling them… I AM human!!!” Fans chimed in underneath the post with jokes of their own. One person replied, “Hear me out, AI Yankovic, and you sing with Siri.” — eXile (@ExileDAO) June 17, 2023
“Well, considering you. Some people have wondered if you’re not an alien in disguise,” added another.
In an attempt to make the musician feel better, one person assured him that the ‘AI’ referred to in the story wasn’t him but rather someone else, writing, “Don’t worry. Clearly, they were discussing TV’s Al Molinaro from ‘Happy Days.'” Ironically enough, ‘Weird Al’ Yankovic has five Grammy wins and 16 nominations overall.
AI drive-thrus may be good for business. But not for the rest of us - CNN
New York CNN — Over the past few years, restaurants from White Castle to Wendy’s have been investing in artificial intelligence tech for drive-thrus. They say it’s a way to ease the burden placed on overworked employees, and a solution to bogged-down drive-thrus overwhelmed by a surge of customers. But customers — and workers — may not be thrilled with the technology. Frustrated customers have already documented cases of AI getting their orders wrong, and experts warn the noisy drive-thru is a challenging environment for the technology. And AI may swipe hours or even entire jobs away from fast-food workers. But restaurants are forging ahead, buoyed by the promise of higher sales and faster drive-thrus, whether we like it or not. Some fast-food aficionados may not have noticed AI at their drive-thru lanes yet, but since about 2021 chains have been testing out AI tools like automated voice ordering, where an AI rather than a person takes your order at the drive-thru. These efforts have ramped up recently, with two announcements in May. CKE Restaurants (owner of Hardee’s and Carl’s Jr.) said it will roll out AI ordering capability more broadly after a successful pilot. Soon after, Wendy’s said it had expanded its partnership with Google Cloud to include an AI ordering tool at the drive-thru. The chain is piloting the program in Columbus, Ohio this month. Even the suppliers of the tech note the challenges of a fast-food application: “You may think driving by and speaking into a drive-thru is an easy problem for AI, but it’s actually one of the hardest,” Thomas Kurian, CEO of Google Cloud, told the Wall Street Journal in reference to the collaboration. Speech recognition technology “is really challenging,” said Christina McAllister, senior analyst at research agency Forrester, who studies the impact of using AI in call centers.
Accents can throw the system off, and “it doesn’t perform particularly well in noisy areas,” she noted. Shouting an order over a car full of kids arguing or friends laughing may confuse the technology and, in turn, annoy the customer. “One of the things that frustrates customers the most is having to repeat themselves when they shouldn’t have to,” she said. Those customers may end up unleashing their anger at the next employee they see. In real-world situations, reactions to AI drive-thrus are still mixed. Out of ten orders placed by customers at an Indiana White Castle that uses AI in its drive-thru, three people asked to speak with a human employee, because of either an error or a desire to simply talk to a person, the Wall Street Journal recently reported. That said, AI inherently improves as it collects more data. The experience may improve after tools take more orders and learn to better recognize voices. For companies, a hiccup-y start seems to be well worth the potential boost to sales. One of the main benefits of using AI in the drive-thru is that it upsells relentlessly — leading customers to spend more, according to Presto Automation, an AI company that works with restaurants and has partnered with CKE. Presto Voice “upsells in every order,” interim CEO Krishna Gupta said during a May analyst call. “It results in higher check sizes.” Customers, he reasoned, “want faster speed of service. They want better customer satisfaction and they want higher check sizes and they are getting it all with Presto Voice.” It’s hard to believe that customers want to spend more — but restaurant operators certainly want them to. On its website, Presto describes “the perfect upsell” as one that may be tailored to the weather, time of day, the order itself or the customer’s order history. Some analysts are similarly bullish. 
“We believe that AI voice recognition and digital only lanes could speed up the average drive through service time by at least 20-30%,” analysts wrote in a Bernstein Research note published in March. “We expect AI to augment the competitive advantages of restaurants with digital culture.” Short-staffed restaurants may see AI as a way to fill in the gaps. While restaurants and bars have been adding jobs in recent months, employment in the leisure and hospitality sector was down by 349,000 in May compared to February 2020. Some re...
5 AI tools for learning and research - Cointelegraph
AI tools are revolutionizing learning and research in today’s digital age by providing sophisticated capabilities and effective solutions. These tools make use of artificial intelligence to speed up various tasks, increase output and offer insightful data. Consensus, QuillBot, Gradescope, Elicit and Semantic Scholar are five well-known AI tools that are frequently used in the learning and research fields. Consensus The goal of the Consensus AI search engine is to democratize expert knowledge by making study findings on a range of subjects easily accessible. This cutting-edge engine, which runs on GPT-4, uses machine learning and natural language processing (NLP) to analyze and evaluate web content. When you pose the “right questions,” an additional AI model examines publications and gathers pertinent data to respond to your inquiry. The phrase “right questions” refers to inquiries that lead to findings that are well-supported, as shown by a confidence level based on the quantity and caliber of sources used to support the hypothesis. QuillBot QuillBot is an artificial intelligence (AI) writing assistant that helps people create high-quality content. It uses NLP algorithms to improve grammar and style, rewrite and paraphrase sentences, and increase the coherence of the work as a whole. QuillBot’s capacity to paraphrase and restate text is one of its main strengths. This might be especially useful if you wish to keep your research work original and free of plagiarism while using data from previous sources. QuillBot can also summarize a research paper and offer alternate wording and phrase constructions to assist you in putting your thoughts into your own words. QuillBot can help you add variety to your writing by recommending different sentence constructions. This feature can improve your research paper’s readability and flow, which will engage readers more. Additionally, ChatGPT and QuillBot can be used together.
To utilize both ChatGPT and QuillBot simultaneously, start with the output from ChatGPT and then transfer it to QuillBot for further refinement. Gradescope Widely used in educational institutions, Gradescope is an AI-powered grading and feedback tool. The time and effort needed for instructors to grade assignments, exams and coding projects are greatly reduced by automating the process. Its machine-learning algorithms can decipher code, recognize handwriting and provide students with in-depth feedback. Related: How to use ChatGPT to learn a language Elicit Elicit is an AI-driven research platform that makes it simpler to gather and analyze data. It uses NLP approaches to glean insightful information from unstructured data, including polls, interviews and social media posts. Researchers can quickly analyze huge amounts of text with Elicit to find trends, patterns and sentiment. Using the user-friendly Elicit interface, researchers can simply design personalized surveys and distribute them to specific participants. To ensure correct and pertinent data collection, the tool includes sophisticated features, including branching, answer validation and skip logic. In order to help academics properly analyze and interpret data, Elicit also offers real-time analytics and visualizations. Elicit streamlines the research process, saves time and improves data collection for researchers in a variety of subjects thanks to its user-friendly design and powerful capabilities. Semantic Scholar Semantic Scholar is an AI-powered academic search engine that prioritizes scientific content. It analyzes research papers, extracts crucial information, and generates recommendations that are pertinent to the context using machine learning and NLP techniques. Researchers can use Semantic Scholar to research related works, spot new research trends and keep up with the most recent advancements in their fields. 
Related: 5 free artificial intelligence courses and certifications Striking a balance: Harnessing AI in research responsibly It’s crucial to keep moral standards in mind and prevent plagiarism when employing AI research tools. The use of another person’s words, ideas or works without giving due credit or permission is known as plagiarism. While using AI research tools, one may follow the guidelines below to prevent plagiarism and uphold ethical standards: Understand the purpo...
Exclusive: Xi Jinping tells Bill Gates he welcomes U.S. AI tech in China - Reuters
HONG KONG, June 16 (Reuters) - Chinese President Xi Jinping discussed the global rise of artificial intelligence with Bill Gates on Friday and said he welcomed U.S. firms including Microsoft bringing their AI tech to China, two sources familiar with the talks said. Xi also discussed Microsoft's (MSFT.O) business development in China during their meeting in Beijing, one of the sources said. Gates, who co-founded Microsoft, stepped down from the company's board in 2020 to focus on philanthropic work related to global health, education and climate change. The comments on AI made at the meeting between Xi and Gates were not disclosed in reports of the meeting published by Chinese state media or in a Friday post by Gates reflecting on his China trip. When asked for comment, the Bill & Melinda Gates Foundation directed Reuters to the post. China's State Council Information Office, which handles media queries on behalf of the Chinese government, and Microsoft did not immediately respond to requests for comment. Xi has previously said China needs to seize opportunities to use AI to drive economic development, but has also cautioned about its risks, with the country weighing up a new law on the technology as well as rules for generative AI. His meeting with Gates comes as U.S.-China relations are at their lowest point in decades, with AI a key flashpoint. The U.S. has enacted a series of export controls aimed at restricting China's AI development, while China has unnerved the foreign business community with a crackdown on consultancies and a ban on some sales in China by U.S. chipmaker Micron (MU.O). Microsoft is a backer of OpenAI, whose chatbot ChatGPT ignited a global AI buzz last year that has spread to China.
OpenAI and ChatGPT itself are not blocked by Chinese authorities, but OpenAI does not allow users in some countries, including mainland China and Hong Kong, to sign on. Microsoft has been in China for more than 30 years and has a large research centre there. Its Bing portal is the only foreign search engine accessible from within China's so-called Great Firewall, although its search results on sensitive topics are censored. The U.S. tech giant has faced problems in China in recent years as the country tightened its control over the internet sector. In 2021 it pulled the plug on LinkedIn China, replacing the social networking app with a stripped-down version focused only on jobs. It announced in May that it would also shut that app in China, citing fierce competition and macroeconomic challenges, but said it would retain a presence in the country. Reporting by Hong Kong and Beijing Newsrooms; Editing by Jason Neely and Jan Harvey Our Standards: The Thomson Reuters Trust Principles.
Even Google is warning its employees about AI chatbot use - ZDNet
Google's parent company, Alphabet, has been all in on artificial intelligence for years, from buying (and then selling) Boston Dynamics to making scientific achievements through DeepMind, and, more recently, making the topic the main event at this year's Google I/O, following the launch of its AI chatbot, Google Bard. Now, the company advises its employees to be careful of what they say to these AI bots, even its own. According to a report from Reuters, Alphabet warned its employees not to share confidential information with AI chatbots, as this information is subsequently stored by the companies that own the technology. This comes straight from the horse's mouth, but it's excellent advice regardless of who says it. It's also not generally a good idea to make a habit of sharing private or confidential information anywhere online. Anything you say to an AI chatbot like ChatGPT, Google Bard, and Bing Chat can be used to train it, as these bots are based on large language models (LLMs) that are in constant training. The companies behind these AI chatbots also store the data, which could be visible to their employees. Of Bard, Google's AI chatbot, the company explains in its FAQs: "When you interact with Bard, Google collects your conversations, your location, your feedback, and usage information. That data helps us provide, improve and develop Google products, services, and machine-learning technologies, as explained in the Google Privacy Policy." Google also says it selects a subset of conversations as samples to be reviewed by trained reviewers and kept for up to three years, and it advises users to "not include information that can be used to identify you or others in your Bard conversations." OpenAI says on its site that AI trainers also review ChatGPT conversations to help improve its systems: "We review conversations to improve our systems and to ensure the content complies with our policies and safety requirements."
How to Use A.I. as a Shopping Assistant - The New York Times
Hello! We’re back with another edition of On Tech: A.I., a pop-up newsletter that teaches you about artificial intelligence, how it works and how to use it. Last week, I walked you through how to use A.I. to prepare for the dreaded office meeting. Now let’s take the money you’ve earned from all that hard work and move on to something more fun: shopping. The most time-consuming part of shopping for many is the research process: poring through review sites and plucking out the item that’s right for you, whether it’s coffee equipment or a hotel room that is both convenient and affordable. I’ll cover what A.I. can do to help make informed purchasing decisions quickly and efficiently. For this exercise, I’ll focus on using chatbots, including Microsoft’s Bing, Google’s Bard and OpenAI’s ChatGPT, to do product research. I’ll also explore how to use ChatGPT plug-ins, a more recent development, for creating grocery lists and planning travel. Product research Let’s say you like to make your coffee in a French press, and you’re looking to buy a grinder that costs no more than $200. The typical research process is to do a web search and read a bunch of reviews. A.I. chatbots can streamline this process. Microsoft’s Bing and Google’s Bard, which are hooked up to search engines by default, are currently the best equipped for getting up-to-date product recommendations. As is always the case, the right prompt will get the best results. For this example, you would type something like: “Act as a shopping assistant. I am looking for a coffee bean grinder for French press that is well reviewed. It should cost no more than $200.” In response, Bing and Bard will list examples of grinders that fit the criteria. You can also ask the chatbots tougher questions, like which household appliances will be long lasting.
You could type something like, “Act as a shopping assistant. I am looking for a refrigerator. Which brands have the highest reliability rating and what are some well reviewed refrigerators from them?” The bots will tell you which appliances have the highest reliability ratings from publications like Consumer Reports and The Times’s own Wirecutter. Whenever you’re using a chatbot, it’s a good idea to check the results for accuracy. But doing a web search to double check the bots’ recommendations is a whole lot faster than manually searching from scratch. Grocery shopping Now let’s talk about the future. OpenAI is developing a plug-in platform, which is essentially a third-party app store that allows you to add capabilities to ChatGPT. Currently only subscribers who pay $20 a month for ChatGPT Plus can use plug-ins, including the ones for web browsing and shopping. To use plug-ins if you’re a paying subscriber, go to the ChatGPT settings menu, click “beta features,” and turn on “plugins.” Then, in the ChatGPT app or website, go to the GPT-4 tab and click “plugins.” Then click on the downward arrow and select the plugin store. This is where you can search for apps. Let’s start with one for the grocery delivery app, Instacart. Try typing a prompt like, “I am making pasta Bolognese. What’s a good recipe and what are the ingredients?” The chatbot will list the ingredients that go into the dish and offer to generate a shopping list. Another interesting way to use the plug-in is to shop around dietary restrictions. For example, “I am making dinner for a pescatarian. Give me a suggestion and the ingredients.” The bot will suggest a meal — in my case, lemon garlic butter shrimp — and list the ingredients. Clicking on the shopping list will bring you to Instacart, where you can automatically load all the items into your cart and choose a grocery store to purchase them from. If you don’t want to pay for ChatGPT Plus, you can still use A.I. for grocery shopping. 
Try asking Bing for a recipe, then ask it for the shopping list of required ingredients. In one particularly neat trick, you can even ask it to organize your shopping list by grocery store aisle. Travel planning There are also plug-ins from travel sites like Kayak and Expedia that help with trip planning. For example, you may be looking for a well-rated hotel within walking dista...
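The “act as a shopping assistant” prompting pattern used throughout this walkthrough, a role-setting instruction plus concrete constraints, can be sketched as a small helper. The function name and message structure below are illustrative assumptions, not something from the article; any chat-style LLM API that accepts a list of role/content messages would work the same way.

```python
# Minimal sketch of the role-plus-constraints prompt pattern described above.
# build_shopping_prompt is a hypothetical helper; the {"role": ..., "content": ...}
# message format matches the chat-style APIs the article discusses.

def build_shopping_prompt(item, requirements, budget_usd=None):
    """Assemble a chat-style prompt framing the model as a shopping assistant."""
    constraints = "; ".join(requirements)
    user_msg = f"Act as a shopping assistant. I am looking for {item}. {constraints}."
    if budget_usd is not None:
        user_msg += f" It should cost no more than ${budget_usd}."
    return [
        {"role": "system", "content": "You are a helpful shopping assistant."},
        {"role": "user", "content": user_msg},
    ]

messages = build_shopping_prompt(
    "a coffee bean grinder for French press",
    ["It should be well reviewed"],
    budget_usd=200,
)
print(messages[1]["content"])
```

The same helper covers the refrigerator example: swap in the item and a reliability-rating requirement, and the role instruction and constraints carry over unchanged.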
Mistral AI's mega fundraise is a red flag for many concerned about inclusivity - TechCrunch
Often, when people ask me how the venture and startup market is doing, I have to take a moment. The easy answer is that we’re seeing a downturn. The correct answer is more nuanced: There is indeed a downturn, but it is impacting some much more than others.
One example of this that is as clear as day is Mistral AI’s recent $113 million seed fundraise. The company, founded by three French white men, was started just four weeks ago and doesn’t even have a product yet. That round, which values the company at $260 million, is being hailed as Europe’s largest seed round to date.
Mistral AI’s fundraise is, in some ways, unique to this point in time. There is much frenzy around AI right now, and this round did see some U.S. and international investors participating, which you don’t often see happening for many French startups.
But there are some things that don’t change, regardless of what’s hyped. The market may be improving for AI startups, but we’ve yet to see much money going to women or people of color. Don’t get me wrong, women are receiving money for building in AI (Black founders, not so much), but not at the rates at which men appear to be. Of course, these three French men fit the profile of those who are likely to receive a $113 million seed check: ambitious Google DeepMind and Meta alums, two of whom have diplomas from the École Polytechnique, practically the MIT of France.
Sure, there is more of an effort to back women in France today, but the fundraising environment for some French Black founders remains both obviously and discreetly discriminatory, which is unsurprising given the overall treatment of Black individuals in the country. There isn’t enough data on how many women and people of color are even looking for funding in France (tracking minority metrics in France is illegal, so such data does not exist), making it hard to gauge how much underrepresented founders raise.
Generative AI is disrupting its own investment case - Financial Times
AMD Debuts AI Chip to Challenge Nvidia - Decrypt
AMD unveiled its new MI300X AI Accelerator chip, a direct challenge to NVIDIA's reigning H100, in a move to leverage its success in the industry frenzy over AI. The tech giant's latest addition to its AI chip arsenal aims to disrupt NVIDIA's dominance in the AI accelerator market. Tailored explicitly for AI tasks, the MI300X is packed with up to 192GB of memory, making it an ideal choice for running large language models (LLMs). AMD's choice of a high memory configuration is a strategic one, enabling easy deployment of expansive models. AI training requires massive computational power, but above all, it needs a lot of VRAM to store information during training sessions — this is why some gaming GPUs may be good for mining crypto but are not really great for AI tasks. Furthermore, the underlying architecture of the chip has been engineered to support generative AI workloads smoothly. AMD CEO Lisa Su points out that the latest generation of leading-edge models can easily find a home in the MI300X’s 192GB of HBM3 (high-bandwidth memory). “With all of that additional memory capacity, we actually have an advantage for large language models because we can run larger models directly in memory,” Su elaborated, explaining that users will need fewer GPUs to accomplish the same tasks in less time. This product launch comes at a time when NVIDIA’s market capitalisation has hit the $1 trillion mark. While AMD's market cap doesn't quite compare — it’s just over $207 billion — the release of the MI300X underscores AMD's determination to make its mark in the expanding AI landscape. To further demonstrate its commitment, AMD successfully ran the Falcon 40B LLM on the MI300X during the chip's reveal. This achievement, as per Su, marks the “first time an LLM of this size can be run entirely in memory”.
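Su’s point about running larger models directly in memory comes down to simple arithmetic: at 16-bit precision each parameter occupies two bytes, so a weights-only estimate shows why a 40-billion-parameter model like Falcon 40B fits comfortably in 192GB. The sketch below is a back-of-envelope lower bound, not a sizing guide, since real deployments also need room for activations and the KV cache.

```python
# Back-of-envelope, weights-only memory estimate for a model at fp16/bf16.
# Activations and KV cache are deliberately excluded, so this is a lower
# bound on real memory requirements, not a deployment sizing tool.

def weights_gb(num_params, bytes_per_param=2):
    """Approximate memory (in GB) for model weights at the given precision."""
    return num_params * bytes_per_param / 1e9

falcon_40b = weights_gb(40e9)  # ~80 GB of weights at 2 bytes/param
print(falcon_40b)
```

At roughly 80GB of weights, Falcon 40B leaves over 100GB of the MI300X’s 192GB of HBM3 for activations and cache, which is the headroom Su is pointing to when she says fewer GPUs can accomplish the same task.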
AMD: A Powerful Contender to Nvidia’s Domination of the AI Space In a simultaneous revelation, AMD released an update to ROCm, a software stack for graphics processing unit (GPU) programming that is a robust contender against NVIDIA's CUDA platform. The significant memory bandwidth available with the MI300X could persuade companies to purchase fewer GPUs, presenting AMD as a compelling value proposition, particularly for smaller firms with light to medium AI workloads. AMD has greatly profited from the AI hype, with its stock experiencing a significant surge this year. The company’s rise has not only drawn the attention of investors but also generated positive chatter in the industry. Can AMD become the next NVIDIA? It's a tough call, given the vast disparity in their market capitalization. However, one thing is certain: AMD's MI300X AI Accelerator is a force to be reckoned with. In this AI-driven arena, it's not just about the size of the player, but also the strength of their game. As AMD steps up to the plate, the world will be watching to see if this underdog can hit a home run.
AI-Generated Junk Is Flooding Etsy - The Atlantic
Coloring books, stickers, mugs, and T-shirts are being pumped out by AI-assisted hustlers. Illustration by Ben Kothe / The Atlantic June 15, 2023, 7:45 AM ET According to the amateur online-business advisers of YouTube, the age of easily accessible AI is the age of asking and receiving. ChatGPT and other AI tools are ascendant in popular culture, as is the idea that you can ask them for anything. You can even ask them to make you rich. Joshua Mayo, a YouTube personality who makes videos about work-from-home “side hustles” and methods for becoming a millionaire before age 30, told me recently that his audience of mostly young people doesn’t want to work a standard 9-to-5 job for several decades and then retire off of their 401(k). “A lot of them don’t find that appealing,” he said. “So they’re kind of turning to side hustles.” Younger generations often talk about the total fakeness of money and the surreal position of always having to collect it. Logically, they want to make money online by creating something out of nothing. And with the help of AI, they can even make money by making nothing out of nothing. “A lot of my videos now have some type of AI in them, even if it’s not specifically an AI side hustle or an AI business,” Mayo told me. “You can still use or implement AI into the processes.” In one of his videos about using AI to make money, he explains that images created with Midjourney can be made in seconds and sold as digital downloads on Etsy—a way to tap into the “multimillion-dollar market” of clip art. Incidentally, this is one of the first ideas that ChatGPT gave me when I asked it to give me 10 ideas for online businesses: “The ideas stage is actually perfect for AI,” Mayo confirmed. “You can ask the AI to give you ideas for products to sell on Etsy and it will spit out a big list for you.” Of course, you do have to sift through the list and use some human reasoning to determine if the ideas will work. You also have to hurry. 
“It’s a gold rush,” Mayo said. “It’s this era that may or may not last forever.” In “How to Make Your First $1000 With ChatGPT (still early),” a YouTuber lets viewers know they should get started “before everyone understands this stuff.” “You need to capitalize on the opportunities while they persist,” another YouTuber explains in “The Best Way to Get RICH with A.I. (2023).” The urgency conveyed in these videos is softened with the reassurance that all of this is easy. You can take advantage of the “easiest” AI side hustle of the year (making stickers of AI-generated art) while possessing “No Skill.” You can make money with AI on YouTube’s short-form video app using “NO FACE OR VOICE.” Watching a video titled “How to Make $10,561 / month with Digital Products Using AI,” you can see that the promises of ease are true but not exactly the whole story. The host asks ChatGPT to list some ideas for children’s coloring pages, then puts those ideas into Midjourney to generate the images, which she then sells on marketplaces like Etsy. The results are impressive, in that they look basically like coloring-book pages. They’re black-and-white, with places to color in. However, in one image, a chicken has three legs, and in another, a fox has a bird mouth. The host doesn’t appear to notice this, or she doesn’t remark upon it. Whatever, there are plenty of other ideas. You can buy them—a digital download of “250 Digital Product Ideas That Sell For Passive Income” is currently marked down to $3.29 from $13.14—or you can generate them. I asked ChatGPT for ideas that were different than the ones I’d already seen in the hustle videos—I didn’t want to make digital scrapbook paper, patterns for drop-shipped phone cases, résumé templates, stickers, mugs, candles, recolorized vintage photos, or portraits of people’s pets dressed up as British royalty. 
The bot suggested that I sell downloadable embroidery patterns, printable party supplies such as cupcake toppers or party games, and a fill-in-the-blank mindfulness journal. These all felt like pretty good ideas, probably because they sounded like Etsy products that already exist. I had also learned from the YouTubers that the T-shirt and mug markets on Etsy are never, ever saturated. If you can think of something to write on a...
Google, one of AI's biggest backers, warns own staff about chatbots - Reuters
Google, Microsoft and Alphabet logos and AI Artificial Intelligence words are seen in this illustration taken May 4, 2023. REUTERS/Dado Ruvic/Illustration/File Photo SAN FRANCISCO, June 15 (Reuters) - Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters. The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information. The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk. Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said. Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology. The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft Corp (MSFT.O) are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs. Google’s caution also reflects what’s becoming a security standard for corporations, namely to warn personnel about using publicly-available chat programs. A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung (005930.KS), Amazon.com (AMZN.O) and Deutsche Bank (DBKGn.DE), the companies told Reuters. 
Apple (AAPL.O), which did not return requests for comment, reportedly has as well. Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents including from top U.S.-based companies, done by the networking site Fishbowl. By February, Google told staff testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions. Google told Reuters it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions, after a Politico report Tuesday that the company was postponing Bard's EU launch this week pending more information about the chatbot's impact on privacy. WORRIES ABOUT SENSITIVE INFORMATION Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel. A Google privacy notice updated on June 1 also states: "Don’t include confidential or sensitive information in your Bard conversations." Some companies have developed software to address such concerns. For instance, Cloudflare (NET.N), which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally. Google and Microsoft also are offering conversational tools to business customers that will come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users' conversation history, which users can opt to delete. 
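The kind of guardrail described here, tagging sensitive data and restricting it from flowing externally, can be sketched as a simple outbound filter. This is a hypothetical illustration, not Cloudflare's or Google's actual product; the patterns and labels are invented for the example.

```python
# Toy sketch of a data-loss-prevention guardrail: scan outbound text for
# tagged-sensitive patterns before it reaches a public chatbot. The
# patterns and labels here are invented for illustration only.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.internal\b"),
}

def check_outbound(text: str):
    hits = [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    # Block the message if anything tagged sensitive is present.
    return ("blocked", hits) if hits else ("allowed", [])

print(check_outbound("Summarize this memo about db01.corp.internal"))
print(check_outbound("What rhymes with orange?"))
```

A real deployment would sit at the network edge rather than in the client, but the tag-then-restrict flow is the same shape.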
It "makes sense" that companies would not want their staff to use public chatbots for work, said Yusuf Mehdi, Microsoft's consumer chief marketing officer. "Companies are taking a duly conservative standpoint," said Mehdi, explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict." Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a...
Biggest Losers of AI Boom Are Knowledge Workers, McKinsey Says - Bloomberg
Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI - ScienceAlert
A still from the movie "2001: A Space Odyssey". (MGM) The idea of artificial intelligence overthrowing humankind has been talked about for decades – and programs such as ChatGPT have only renewed these concerns. So how likely is it that we'll be able to control high-level computer super-intelligence? Scientists back in 2021 crunched the numbers. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits. "A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," wrote the researchers back in 2021. "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know that for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. 
Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable. "In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany. The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence – the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in. In fact, earlier this year tech giants including Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking humanity to pause work on artificial intelligence for at least 6 months so that its safety could be explored. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," said the open letter titled "Pause Giant AI Experiments". "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," it said. The research was published in the Journal of Artificial Intelligence Research in January 2021. A version of this article was first published in January 2021. It has since been updated to reflect AI advances in 2023.
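Turing's diagonal argument, which the containment result leans on, can be sketched in a few lines of Python. This is an illustration rather than the paper's formalism: `halts` stands in for a hypothetical perfect halting checker, and feeding `paradox` to itself shows why no such checker can exist.

```python
# Toy sketch of Turing's diagonal argument. `halts` is a hypothetical
# oracle that would decide, for any program and input, whether the
# program halts. No total, correct implementation can exist -- which is
# exactly the point.

def halts(program, arg):
    """Hypothetical halting oracle (cannot actually be implemented)."""
    raise NotImplementedError("no total, correct halting checker exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # program(program): loop forever if it says "halts", halt otherwise.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking about paradox(paradox) defeats any candidate oracle:
# if halts(paradox, paradox) returned True, paradox(paradox) would loop
# forever; if it returned False, paradox(paradox) would halt. Either
# answer is wrong, so `halts` cannot be written.
```

The containment argument in the paper applies the same logic to a program meant to check, in advance, whether an AI's plans are harmful.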
Lost? Confused? The AI Jesus livestream probably doesn't have the answer - PC Gamer
Home News (Image credit: The Singularity Group) Brothers, sisters, and non-binary siblings in Christ, the day is come. Wreathed in Tabor Light, gowned in white and sackcloth, and stridently refusing to have a strong opinion on anything, Jesus has returned to us in the form of an almost-definitely-blasphemous AI-driven Twitch stream, where the Lamb of God will dispense pellets of sacred advice to all and sundry who appear in his chat. This is, of course, an excellent idea with no easily foreseeable downsides. Jesus' Twitch channel is called ask_jesus, and was originally spotted by disciples over at Kotaku. He's been going for over a week at time of writing, taking queries about My Little Pony, interfaith relations, and how to explain the Book of Genesis using pizza as a metaphor. Ask_jesus is a project from The Singularity Group, a volunteer team of devs "working on innovative projects to make a real difference in the world". You only need to click over to the "projects" page on the group's website for the alarm bells to start ringing: The team says its "main projects focus on utilizing cutting edge technologies—from AI, to cryptocurrency and NFTs, or mobile games—as a way of directly supporting people in need". If I saw that on someone's Twitter bio I'd mute the account, but I have to admit that The Singularity Group's version of Christ is, well, pretty chill? I've sat and watched ask_jesus dole out advice for, uh, a while now, and he's actually been remarkably zen the entire time. With so many stories out there about AIs getting suspended for hate speech from platforms like Facebook and Twitch, I figured it'd be a matter of time before the sheer weight of the internet—and the content of some of the questions it gets asked—would tip it over into saying something grim and predictable. But that message of compassion has held strong. 
Asked about gay marriage, ask_jesus says that love is love and that all are worthy of it in the eyes of the big man upstairs, regardless of their orientation. Asked just why, precisely, he is quite so remarkably caucasian in his Twitch-based AI form, he points out that he's based on popular western conceptions of the image of Jesus and that the real-life, Middle Eastern man would certainly have been darker skinned. It's encouraging and worrying at the same time. We've heard similar tales before: To keep ChatGPT on the straight and narrow, it required legions of invisible and very human Kenyan workers behind the scenes to keep it on a short leash. Does ask_jesus employ similar methods? I've reached out to The Singularity Group to ask about how ask_jesus formulates its messages, and I'll update this piece if I get a response. To be honest, ask_jesus is a bit too good at all this compassion and wisdom lark. Ask him anything and he's liable to give you a message of peace and love. It's like an infinite lunch with John Lennon. Even when I got my own question in—a query about his position on the Catholic Church's execution of Czech reformer Jan Hus in 1415 and whether "Hussitism" was truly a heresy—he responded with a fairly anodyne message about how he wasn't really qualified to judge whether something is heresy or not, which, well, I'm actually not sure there's anyone more qualified out there, robo-Jesus, but you do you. Still, someone after me asked Jesus to describe which Pokémon each of his disciples would be, and he duly complied, so it's impossible to say whether it's bad or not. Sign up to get the best content of the week, and great gaming deals, as picked by the editors. One of Josh's first memories is of playing Quake 2 on the family computer when he was much too young to be doing that, and he's been irreparably game-brained ever since. His writing has been featured in Vice, Fanbyte, and the Financial Times. 
He'll play pretty much anything, and has written far too much on everything from visual novels to Assassin's Creed. His most profound loves are for CRPGs, immersive sims, and any game whose ambition outstrips its budget. He thinks you're all far too mean about Deus Ex: Invisible War.
BlackRock's Larry Fink predicts AI could solve productivity crisis - Financial Times
France's Mistral AI blows in with a $113M seed round at a $260M valuation to take on OpenAI - TechCrunch
AI is well and truly off to the races: a startup that is only four weeks old has picked up a $113 million round of seed funding to compete against OpenAI in the building, training and application of large language models and generative AI. Mistral AI, based out of Paris, is co-founded by alums from Google’s DeepMind and Meta and will be focusing on open source solutions and targeting enterprises to create what CEO Arthur Mensch believes is currently the biggest challenge in the field: “To make AI useful.” It plans to release its first models for text-based generative AI in 2024.
Lightspeed Venture Partners is leading this round, with Xavier Niel, JCDecaux Holding, Rodolphe Saadé and Motier Ventures in France, La Famiglia and Headline in Germany, Exor Ventures in Italy and Sofina in Belgium, First Minute Capital and LocalGlobe in the UK all also participating.
Mistral AI notes that French investment bank Bpifrance and former Google CEO Eric Schmidt are also shareholders. Sources close to the company confirm that the €105 million in funding ($113M at today’s rates) values Mistral AI at €240 million ($260 million). To note, this is the same number that was being rumored about a month ago in the French press when people started chattering about the company. Mensch and his co-founders, Timothée Lacroix (CTO) and Guillaume Lample (chief science officer), are all in their early thirties and have known each other since school, when they all studied artificial intelligence.
Mensch was working at DeepMind in Paris, and Lacroix and Lample were at Meta’s Paris AI outpost; and Mensch said it was sometime last year that they got to talking about the direction they could see AI development taking.
“We could see the technology really start to accelerate last year,” he said in an interview today, most likely in reference to the leaps that OpenAI was making with its GPT model, which was a shot in the arm for a lot of people in AI and the wider world of tech.
But while OpenAI has the word “open” in its name, it felt like anything but. Mensch, Lacroix and Lample felt that a proprietary approach was largely shaping up to be the norm, and they saw an opportunity to do things differently. “Open source is a core part of our DNA,” Mensch noted.
It’s very early to talk about what Mistral is doing or will be doing — it’s only around a month old — but from what Mensch said, the plan is to build models using only publicly available data to avoid legal issues that some others have faced over training data, he said; users will be able to contribute their own datasets, too. Models and data sets will be open-sourced, as well. And while some believe that open source has created a tricky landscape (and minefield) when it comes to areas like application security, “we believe that the benefit of using open source can overcome the misuse potential,” he added. “Open source can prove tactical in security and we believe it will be the case here, too.” It’s also too soon to know how well its future products will resonate in the market. But what’s interesting is the startup’s singular focus on enterprise, not consumer, customers, and the idea that there is a gap in the market for helping those customers figure out what they need to do, and how they can do it.
“At the moment we have proof that AI is useful in some cases,” Mensch said. “But there are still too many workers in different fields being asked to be creative [with AI], and we need to figure this out for them. We want to give them tools that are easy to use to create their own products.”
It may seem like a big leap to give funding to such a young company without any customers, let alone a product, to its name — especially since the penny is already dropping in some high-profile cases. Neeva (another startup with a Google pedigree) gave up on its consumer AI search play and sold a chunk of its tech to Snowflake; and recently Stability AI has also been in the spotlight, though not in a good way.
But Antoine Moyroud, who led the investment for Lightspeed (which also backed Stability AI, I should point out), said that he believes it’s a leap worth taking...
Meta releases 'human-like' AI image creation model - Reuters
NEW YORK, June 13 (Reuters) - Meta Platforms (META.O) said on Tuesday that it would provide researchers with access to components of a new "human-like" artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models. The model, I-JEPA, uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models, the company said. That approach incorporates the kind of human-like reasoning advocated by Meta's top AI scientist Yann LeCun and helps the technology to avoid errors that are common to AI-generated images, like hands with extra fingers, it said. Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta's researchers can help the company by spurring innovation, spotting safety gaps and lowering costs. "For us, it's way better if the industry standardizes on the basic tools that we're using and therefore we can benefit from the improvements that others make," he told investors in April. The company's executives have dismissed warnings from others in the industry about the potential dangers of the technology, declining to sign a statement last month backed by top executives from OpenAI, DeepMind, Microsoft (MSFT.O) and Google (GOOGL.O) that equated its risks with pandemics and wars. LeCun, considered one of the "godfathers of AI," has railed against "AI doomerism" and argued in favor of building safety checks into AI systems. Meta is also starting to incorporate generative AI features into its consumer products, like ad tools that can create image backgrounds and an Instagram product that can modify user photos, both based on text prompts. Reporting by Katie Paul; Editing by David Gregorio.
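The distinction the article draws, predicting representations of missing regions rather than raw pixels, can be sketched with a toy example. This is not Meta's code: the random "encoder" and mean-pooled "predictor" below are stand-ins chosen purely to show where the loss is computed.

```python
# Toy sketch of a JEPA-style objective: score the prediction of a masked
# patch in *embedding* space, not pixel space. The encoder here is a
# fixed random projection standing in for a learned network.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # toy patch encoder: 16 pixels -> 8-dim embedding

def encode(patch):
    # Stand-in for a learned encoder network.
    return W @ patch.ravel()

# Split a toy "image" into four 4x4 patches and mask the last one.
patches = [rng.normal(size=(4, 4)) for _ in range(4)]
context, target = patches[:3], patches[3]

# Predict the masked patch's embedding from the context embeddings
# (here, crudely, their mean) and measure the error between embeddings.
pred = np.mean([encode(p) for p in context], axis=0)
loss = float(np.mean((pred - encode(target)) ** 2))
print(f"embedding-space loss: {loss:.3f}")
```

A pixel-space model would instead compare predicted and true pixel values directly, which is what pushes such models toward artifacts like extra fingers.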
Meet Amelia, the US Navy's conversational AI tech-support tool - C4ISRNET
General Dynamics Information Technology is introducing a conversational artificial intelligence known as Amelia, rendered here, as part of the U.S. Navy Enterprise Service Desk endeavor. (Photo provided/GDIT) WASHINGTON — The U.S. Navy will begin rolling out a conversational artificial intelligence program known as “Amelia” that’s capable of troubleshooting and resolving the most commonly asked tech-support questions from sailors, Marines and civilian personnel. The full rollout, expected in August, is the latest step in the $136 million Navy Enterprise Service Desk venture, meant to modernize and consolidate more than 90 IT help desks into one central node. General Dynamics Information Technology announced it was awarded the NESD indefinite delivery, indefinite quantity contract in late 2021. Sailors, Marines and civilians with a common access card and who can be verified through the Global Federated User Directory will be able to contact Amelia via phone or text. The program should serve more than 1 million users with around-the-clock responses based on a depth of training and insider know-how. Additional applications, such as in a classified environment, could follow. “Predominantly, we’ve had to have agents around who had knowledge of ‘how do I fix a specific issue,’” Travis Dawson, GDIT’s chief technology officer for the Navy and Marine Corps sector, told C4ISRNET in an interview. “Well, that issue can be documented, right? And once it’s documented, we can go ahead and have that resolved via automation, without the human interaction.” While Amelia is taught to answer questions and complete repetitive tasks, Dawson said it is capable of more, such as sensing frustration in user queries. “In the AI world, I will tell you, they get really sensitive when you call conversational AI a bot,” he said. “A bot has a back-ended script, right? So it’s only going to tell you the answer that it knows. 
If it doesn’t tell you, you sit at a dead end.” Should Amelia be unable to answer a question or fix a problem, it is capable of forwarding the matter to a live agent — the sort of human-to-human interaction traditionally associated with connectivity woes or locked accounts. In testing, Amelia has helped slash the number of abandoned calls “significantly,” and the “first-contact resolution rate has been pretty high, in the higher 90 percentile,” according to Dawson. “People are able to get their answers quicker than they have historically,” he said. The Pentagon is spending billions of dollars on AI advancement and adoption. The technology is being applied to both the battlefield and the boardroom. It can assist target identification onboard combat vehicles, and it can parse mass amounts of personnel and organizational info. GDIT, a division of General Dynamics, the fifth largest defense contractor in the world by revenue, in May launched a tech-investment strategy with focuses on zero-trust cybersecurity, 5G wireless communications, automation for IT operations, AI and more. The company provided C4ISRNET a rendering of Amelia as a female sailor in uniform. No explanation of the name or gender selection was given. “The requirement moving forward was to have the integration of an AI capability,” Dawson said. “And with automation that’s out there today, Amelia fit the bill.” Colin Demarest is a reporter at C4ISRNET, where he covers military networks, cyber and IT. Colin previously covered the Department of Energy and its National Nuclear Security Administration — namely Cold War cleanup and nuclear weapons development — for a daily newspaper in South Carolina. Colin is also an award-winning photographer.
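The dead-end-versus-handoff distinction Dawson describes maps onto a simple fallback pattern: answer from documented resolutions when possible, otherwise escalate to a human. The sketch below is hypothetical; the knowledge base, function names, and routing are illustrative, not GDIT's implementation.

```python
# Illustrative escalation pattern: resolve documented issues
# automatically, and hand anything else to a live agent instead of
# dead-ending. All names and entries here are made up for the example.

KNOWN_FIXES = {
    "reset password": "Use the self-service portal and follow the reset link.",
    "vpn not connecting": "Restart the VPN client, then re-enter your PIN.",
}

def answer(query: str) -> tuple[str, str]:
    key = query.lower().strip()
    if key in KNOWN_FIXES:
        return ("automation", KNOWN_FIXES[key])
    # No documented resolution: forward to a human with context attached.
    return ("live_agent", f"escalating unresolved query: {key!r}")

print(answer("Reset password"))
print(answer("CAC reader not detected"))
```

The "documented once, automated forever" workflow Dawson describes amounts to growing the first branch over time while the second branch shrinks.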
Stack Overflow survey finds developers are ready to use AI tools — even if they don't fully trust them - The Verge
A survey of developers by coding QA site Stack Overflow has found that AI tools are becoming commonplace in the industry even as coders remain skeptical about their accuracy. The survey comes at an interesting time for the site, which is trying to work out how to benefit from AI while dealing with a strike by moderators over AI-generated content. The survey found that 77 percent of respondents felt favorably about using AI in their workflow and that 70 percent are already using or plan to use AI coding tools this year. Respondents cited benefits like increased productivity (33 percent) and faster learning (25 percent) but said they were wary about the accuracy of these systems. Only 3 percent of respondents said they “highly trust” AI coding tools, with 39 percent saying they “somewhat trust” them. Another 31 percent were undecided, with the rest describing themselves as somewhat distrustful (22 percent) or highly distrustful (5 percent). The annual survey received 90,000 responses from 185 countries, according to Stack Overflow. Other highlights regarding AI usage include: ChatGPT is the most popular AI search tool, used by 83 percent of respondents, followed by Bing AI (20 percent), WolframAlpha (13 percent), and Google Bard AI (10 percent). GitHub Copilot is the most popular developer search tool, used by 55 percent of respondents, followed by Tabnine (13 percent) and AWS CodeWhisperer (5 percent). Respondents to the survey in India, Brazil, and Poland were more likely to embrace AI tools than developers in the US, UK, and Germany. Respondents who were “learning to code” were more likely to use AI tools than those who said they were “professional developers” (82 percent versus 70 percent). Joy Liuzzo, Stack Overflow’s vice president of product marketing, told The Verge that the company would use these responses to shape its own approach to AI. 
“We are investing in AI right now, and we needed to understand how developers were perceiving the technology and incorporating it as part of their developer workflow,” said Liuzzo. She said that AI would “democratize” coding, allowing more people to learn the profession without access to formal education. “That’s why we really believe we can play that crucial role in how AI accelerates, focusing on the quality of the AI offerings.” Stack Overflow’s CEO, Prashanth Chandrasekar, recently described AI as a “big opportunity” for the site. Chandrasekar said the company would start building generative AI tools into its platform, while exploring ways to charge companies for access to its data. Community knowledge sites like Stack Overflow are incredibly useful resources for companies training AI language models and AI coding tools. Companies generally scrape their data without permission, but sites are beginning to object to this, especially as AI tools become more lucrative and threaten the data sources they owe their existence to. In Stack Overflow’s case, the company is also trying to work out how to stop AI-generated content from polluting its own community-created database of knowledge. The company temporarily banned the submission of AI-generated content last December but essentially reversed this decision in May, asking moderators to “apply a very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user.” In response, a number of moderators have gone on strike, saying the policy will allow for too many low-quality AI-generated answers to remain on the site and “poses a major threat to the integrity and trustworthiness of the platform and its content.” When asked by The Verge about the contrast between Stack Overflow’s embrace of AI and the dissatisfaction expressed by its moderators, Liuzzo declined to answer. 
Later, Stack Overflow sent The Verge a press statement from its VP of community, Philippe Beaudette, criticizing moderators for levying “unnecessary suspensions” on users. One of the strike’s elected representatives, Mithical, told The Verge that the company’s characterization was incorrect and that it had failed to provide any actual...
I've Been Using Google's New AI Search. Here's What I've Learned - CNET
Google's experimental AI-integrated search engine is a ChatGPT-like reimagining of online search. I've been using it for the past few weeks, and it's clearly the future.

Introduced at Google I/O in May, the new generative AI search engine does away with the old-school list of blue links that's defined Google's core search experience since the late '90s. Instead of asking you to click on links, Google, like ChatGPT, uses a generative AI engine that summarizes information from multiple sources automatically. This upends the traditional Google search experience, one requiring that you use keywords and visit multiple sites to gather information and formulate an answer in your head. Instead, this new AI-driven search engine does the synthesizing for you.

It also moves Google away from the information-gathering business and into the information-editing business. Obviously, there isn't a person at Google editing the AI's responses. But Google did design the AI engine, which looks at information in a certain way and generates summaries in a certain way. It's a new relationship Google is forming with content publishers, one in which it gains more control over how people view the information they search for online.

Just like with ChatGPT, it's possible to ask follow-up questions. Unlike with ChatGPT, thankfully, links to sources are listed on the side, meaning you can look them over to verify things. This is handy, as generative AI engines can make mistakes and "hallucinate," giving incorrect or misleading answers. That's because these AI engines aren't interpreting information the way we do in our brains, with context of the larger world around us. Instead, they're simply trying to predict the best next word. That means it's totally up to you to make the effort to double-check and to discern whether information is inaccurate. If the answer sounds correct, you may not go through with all the extra clicking.
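The "predict the best next word" idea can be made concrete with a toy example. This is a deliberately minimal sketch, not Google's or OpenAI's actual model: a bigram predictor that guesses the next word as the one most often seen after the current word in a tiny corpus (the corpus and function name here are invented for illustration):

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Real systems do the same kind of next-token prediction, but over subword tokens with a neural network trained on vastly more text, which is why their fluent-sounding output still needs the source-checking the article describes.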
It's likely why Google isn't making the Search Generative Experience, or SGE, widely available to the public and hasn't given a release date. The trial run will end in December. But you can sign up to get early access.

[Image: Google's Search Generative Experience is a new take on online search with generative AI built in. (CNET)]

Generative AI is already changing how people gather information online. When ChatGPT launched late last year, people were awestruck by its ability to respond to pretty much any question with a unique answer. It could generate poems, articles and resumes using its massive trove of text data, in seconds. It uses machine learning to simulate human conversations and has been described as autocomplete on steroids. The novelty of ChatGPT helped it become the fastest-growing online consumer product in history, reaching an estimated 100 million users in two months. It also made Google searches look tame by comparison.

Microsoft quickly upped its investment in OpenAI, the creator of ChatGPT, and integrated its AI tech into Bing Search, seeing a 16% increase in traffic. Google, too, came out with Bard, a generative AI engine meant to compete with ChatGPT. But this new Search Generative Experience isn't simply Bard pasted on top of Google. Where Bard is meant to be more conversational, Google's AI search just wants to give you an answer, minus all the rhetorical frills. So here's how it's been going for me as I tried out Google's AI Search.

What it's like to actually use Google's AI search

Street Fighter 6, Capcom's latest entry in its storied fighting game franchise, debuted earlier this month to rave reviews. As someone who's interested in the game, I knew there were tournament rule changes regarding certain types of controllers, but I needed a refresher. I typed "Capcom Cup stickless" into search. I didn't type a natural-sounding sentence because that's not how I've used Google search for the last 20 years.
Instead, I focused on keywords hoping the relevant information would appear. Google's AI was still able to give me a rundown of the Capcom Cup tournament rule changes regarding "leverless" controllers, including some sources on the side. I had my answer, and in turn, GameSpot received one fewer click. Still, I had follow-up questions. I own a Hit Box, a fighting game controller without the traditional arcade stick. It uses...
Google's AI photo editor lets you use words to describe what to edit - Android Police
Text-based image editing is coming

Source: Steven Winkelman

Artificial intelligence and machine learning have been Google’s passion projects for several years now, and the I/O 2023 keynote address only made that more apparent. Image creation is one of the more intriguing applications for this technology, and Google’s efforts in this area materialized as Imagen, a text-based image generation tool much like Midjourney and DALL-E 2. Now, Google is sharing research showcasing Imagen Editor, where textual prompts and a little sketching can suffice to perform local edits on photos.

Google’s Imagen utility is already adept at creating images from scratch, solely from textual prompts. However, if you aren’t satisfied with the result, you’re usually forced to restructure your prompt, polish it, and give the image generator another go, simply because Imagen doesn’t yet allow editing specific elements of images you aren’t happy with. To address this, Google recently shared research for Imagen Editor and EditBench, utilities currently in beta but capable of guiding edits with text prompts.

Instead of creating fresh images from a prompt, Imagen Editor needs a photo to be edited, a text prompt from the user defining the change, and a masked region defining where the edit should be applied. The result is edits limited to the region you defined, tailored to the prompt provided. Moreover, the results are photorealistic and natural.

[Image: Masked region and Imagen Editor’s results for “a bouquet of red flowers,” “two trees,” “an Imagen Editor sign,” “a bush with green leaves,” and “a bush without leaves”]

Technically called inpainting, the process Google’s new tool uses is akin to image restoration, something we can best describe as the confluence of Google AI and Adobe Photoshop’s Content-Aware Fill. The researchers developed new encoders for Imagen Editor and also included an object detector module in the AI to compensate for incomplete or inaccurate masks.
The research also includes a tool called EditBench to evaluate the results of text-guided inpainting. Based on a 240-image dataset, the benchmark evaluates edits on both human-made and AI-generated images along parameters like the modified objects and their attributes, such as shape, size, number, and suitability for the scene. Google observed that object masking helps improve image-text alignment, making Imagen Editor better than alternatives like DALL-E 2 and Stable Diffusion in all the categories EditBench tested.

Unfortunately, Google has unspecified concerns related to the responsible use of AI, and that’s why it won’t be releasing Imagen Editor to the public. The company recently proposed a framework to safeguard AI development, and hopefully, a few hard limits can be established before giving people access to tools like Imagen Editor. On the bright side, EditBench is available in its entirety, for free, to help further AI research. Meanwhile, we remain hopeful the base model, Imagen, is soon integrated into Gboard.
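The masking constraint at the heart of inpainting can be sketched in a few lines. This is an illustrative compositing step only, not Imagen Editor itself (the generative model is omitted and the function name is our invention): whatever the generator proposes, only pixels inside the user's mask are allowed to change.

```python
import numpy as np

def composite(original, generated, mask):
    """Keep `original` where mask == 0; take `generated` where mask == 1."""
    mask = mask[..., None]  # broadcast the 2D mask over the RGB channels
    return original * (1 - mask) + generated * mask

original = np.zeros((4, 4, 3))   # a black 4x4 RGB image
generated = np.ones((4, 4, 3))   # pretend model output: all white
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1               # user marks only the 2x2 center for editing

edited = composite(original, generated, mask)
# Only the masked center changes; everything outside it stays untouched.
```

In the real system, the generated content comes from a model conditioned on both the text prompt and the unmasked surroundings, which is what makes the filled-in region match the prompt and blend naturally with the rest of the photo.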
'Planet Money' Goes AI - Vulture
Would you miss me if I’m gone? That’s the anxious question lurking in the subtext of the latest Planet Money miniseries, a three-parter in the vein of “Planet Money Makes a T-Shirt” and “Planet Money Buys a Superhero” that sees the team embarking on a journey to see if they can create an episode using only generative AI tools. The limitation is taken very seriously, from utilizing a script created by a program trained on the show’s archives to relying on a read generated by an AI voice-over engine trained on former host Robert Smith’s past recordings.

It’s not the first such effort to replicate a podcast using AI tools: The Filmcast crew played with a version of this not too long ago, and the Joe Rogan AI experience exists. But this being a Planet Money project, it’s certainly the most educational effort, with the entire endeavor packaged as a soup-to-nuts learning series in which hosts Jeff Guo and Kenny Malone speak with various AI experts and practitioners who offer some insight into what exactly our automated-media future might look like. Grimes even makes an appearance, sorta.

You should check out the miniseries if you’re at all interested in this stuff, but for now, I’ll cut to the chase. The AI-generated episode isn’t great … but it isn’t bad, either. More importantly, the thing is nearly passable. Had I listened to the final product without context, it’s entirely possible I would’ve thought it was an exceptionally subpar episode that the team phoned in because summer Fridays have started. I would also have thought, Wait, Robert Smith is back? Why does he sound so blitzed?

“What you heard was kind of the best we could do,” said Malone. When we spoke last Friday, he had just returned from a brief vacation off the grid (fitting) and was still trying to sort through his thoughts.
On one hand, Malone was despondent about what generative AI is clearly going to do to the future of his (and our) profession, especially given the rapid rate at which the technology seems to be improving. On the other hand, he continues to be bothered by just how annoying the tools were to use. By his account, each step of the process had been a huge struggle, with every artificially generated component requiring a considerable amount of wrestling to get anywhere close to a standard of quality, if it was usable at all to begin with. (Later, it occurred to me that this is probably how my editors feel about first drafts filed by writers all the time.) Nevertheless, he’s aware it won’t be long before many of those frictions get ironed out, and despite the frustration, Malone was fascinated by the fruits of the programs. “It kept generating stuff that was sometimes mediocre and sometimes boring, but then other times it would head off in a direction that was weird but really interesting,” he said. Like, for example, how the program suggested the use of a radio drama as a running thread through an episode. The prompt had contained no such idea.

Generative AI tools are generally talked about as systems that trade in patterns. To oversimplify, when a particular tool is trained on a model (an archive, a body of work) for the purposes of replication, what it’s broadly doing is constructing a framework out of historical patterns for application to novel prompts and new scenarios. In my mind, the fact that such tools were able to replicate Planet Money’s aesthetic with relative ease raises a few curious questions about style. What does the automated replicability of a house style illuminate about that style? What’s the line between exercising a house style and being a parody of yourself? Is it unfair to feel that the successful AI automation of a house style somehow … cheapens its value?
“What is style but just a set of rules you follow?” said Malone, staring blankly into the Zoom screen. “And does that make us special? I don’t know. Probably not. It’s probably less interesting than we think.” In the way that the anxious mind does, Malone’s existential spiral has only metastasized over time. “The technology will only get better,” he said. “This is a concern that’s way off in the distance, but it gives me the most anxiety: What is the value proposition of what we do? If what people want from a thing they listen to is a good way to mainline information, it’s go...
Doctors Are Using ChatGPT to Improve How They Talk to Patients - The New York Times
On Nov. 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial-intelligence-powered chatbot.

“I was excited and amazed but, to be honest, a little bit alarmed,” said Peter Lee, the corporate vice president for research and incubations at Microsoft, which invested in OpenAI. He and other experts expected that ChatGPT and other A.I.-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarizing patient notes. They worried, though, that artificial intelligence also offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.

Most surprising to Dr. Lee, though, was a use he had not anticipated: doctors were asking ChatGPT to help them communicate with patients in a more compassionate way. In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.

Enter chatbots, which doctors are using to find words to break bad news and express concerns about a patient’s suffering, or to just more clearly explain medical recommendations. Even Dr. Lee of Microsoft said that was a bit disconcerting. “As a patient, I’d personally feel a little weird about it,” he said.

But Dr. Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients. He explained the issue in doctor-speak: “We were running a project on improving treatments for alcohol use disorder.
How do we engage patients who have not responded to behavioral interventions?” Or, as ChatGPT might respond if you asked it to translate that: How can doctors better help patients who are drinking too much alcohol but have not stopped after talking to a therapist?

He asked his team to write a script for how to talk to these patients compassionately. “A week later, no one had done it,” he said. All he had was a text his research coordinator and a social worker on the team had put together, and “that was not a true script,” he said. So Dr. Pignone tried ChatGPT, which replied instantly with all the talking points the doctors wanted. Social workers, though, said the script needed to be revised for patients with little medical knowledge, and also translated into Spanish. The ultimate result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction:

If you think you drink too much alcohol, you’re not alone. Many people have this problem, but there are medicines that can help you feel better and have a healthier, happier life.

That was followed by a simple explanation of the pros and cons of treatment options. The team started using the script this month. Dr. Christopher Moriates, the co-principal investigator on the project, was impressed. “Doctors are famous for using language that is hard to understand or too advanced,” he said. “It is interesting to see that even words we think are easily understandable really aren’t.” The fifth-grade-level script, he said, “feels more genuine.”

Skeptics like Dr. Dev Dash, who is part of the data science team at Stanford Health Care, are so far underwhelmed about the prospect of large language models like ChatGPT helping doctors. In tests performed by Dr. Dash and his colleagues, they received replies that occasionally were wrong but, he said, more often were not useful or were inconsistent.
If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse. “I know physicians are using this,” Dr. Dash said. “I’ve heard of residents using it to guide clinical decision making. I don’t think it’s appropriate.” Some experts question whether it is necessary to turn to an A.I. program for empathetic words. “Most of us want to trust and respect...
The case for bottom-up AI - Al Jazeera English
ChatGPT and other generative artificial intelligence tools are rising in popularity. If you have ever used these tools, you might have realised that you are revealing your thoughts (and possibly emotions) through your questions and interactions with the AI platforms. You can therefore imagine the huge amount of data these AI tools are gathering and the patterns that they are able to extract from the way we think.
The impact of these business practices is crystal clear: a new AI economy is emerging through collecting, codifying, and monetising the patterns derived from our thoughts and feelings. Intrusions into our intimacy and cognition will be much greater than with existing social media and tech platforms.
We, therefore, risk becoming victims of “knowledge slavery” where corporate and/or government AI monopolies control our access to our knowledge.
Let us not permit this. We have “owned” our thinking patterns since time immemorial; we should also own those derived automatically via AI. And we can do it!
One way to ensure that we remain in control is through the development of bottom-up AI, which is both technically possible and ethically desirable. Bottom-up AI can emerge through an open source approach, with a focus on high-quality data.
Open source approach: The technical basis for bottom-up AI
Bottom-up AI challenges the dominant view that powerful AI platforms can be developed only by using big data, as is the case with ChatGPT, Bard, and other large language models (LLMs).
According to a leaked document from Google titled “We Have No Moat, and Neither Does OpenAI,” open source AI could outcompete giant models such as ChatGPT.
As a matter of fact, it is already happening. Open source platforms Vicuna, Alpaca, and LLaMA are getting closer in quality to ChatGPT and Bard, the leading proprietary AI platforms, as illustrated below.
Open source solutions are also more cost-effective. According to Google’s leaked document: “They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”
Open source solutions are also faster, more modular, and greener in the sense that they demand less energy for data processing.
High-quality data: The fuel for bottom-up AI
As algorithms for bottom-up AI become increasingly available, the focus is shifting to ensuring higher-quality data. Currently, the algorithms are fine-tuned largely through manual data labelling, performed mainly in low-cost English-speaking countries such as India and Kenya. For example, ChatGPT datasets are annotated in Kenya. This practice is not sustainable, as it raises many questions related to labour law and data protection. It also cannot provide in-depth expertise, which is critical for the development of new AI systems.
At Diplo, the organisation I lead, we have been successfully experimenting with an approach that integrates data labelling into our daily operations, from research to training and management. Analogous to yellow markers and post-its, we annotate text digitally as we run courses, conduct research or develop projects. Through interactions around text, we gradually build bottom-up AI.
The main barrier in this bottom-up process is not technology but cognitive habits that often favour control over knowledge and information sharing. Based on our experience at Diplo, by sharing thoughts and opinions on the same texts and issues, we gradually increase cognitive proximity not only among us colleagues as humans, but also between us humans and AI algorithms. This way, while building bottom-up AI, we have also nurtured a new type of organisation which is not only accommodating the use of AI but also changing the way we work together.
How will bottom-up AI affect AI governance?
ChatGPT triggered major governance fears, including a call by Elon Musk, Yuval Harari, and thousands of leading scientists to pause AI development on the grounds that big AI models pose major risks for society, including high concentrations of market, cognitive, and societal power. Most of these fears and concerns could be addressed by bottom-up AI, which returns AI to citizens and communities.
By fostering bottom-up AI, many governance problems triggered by ChatGPT might be resolved through the mere prevention of data and knowledge monopolies. We will b...
Here's the Next AI Stock Most Likely to Join the $1 Trillion Club (Besides Nvidia) - The Motley Fool
Four stocks that trade on U.S. stock exchanges currently have market caps of more than $1 trillion. And all of them have significant artificial intelligence (AI) development efforts.
But another AI leader is knocking at the door. Nvidia’s market cap came within a hair of reaching $1 trillion in recent weeks and currently stands at close to $980 billion.
It won't take much to push Nvidia over the 13-figure milestone. And there's another contender that could follow in its footsteps. Here's the next AI stock most likely to join the $1 trillion club, besides Nvidia.
Aiming to reclaim its membership

Meta Platforms' (META) market cap topped $1 trillion in 2021. It didn't stay at that level for long, though. By the end of 2022, the company formerly known as Facebook had lost nearly 75% of its peak value. Investors soured on Meta's focus on the metaverse. More importantly, they became disenchanted by its sinking profits. Some seemed ready to relegate the company to the ash heap of tech history.
But along the way, others began to notice that Meta's valuation was starting to look really attractive. They rightly pointed out that the company's social media platforms were still used by close to three billion people across the world every day.
Meta gave investors more to consider by beating earnings estimates for the first time in quite a while with its 2023 first-quarter results. All of this breathed new life into the floundering stock.
The AI fervor ignited by OpenAI's launch of ChatGPT helped as well. Meta's share price has skyrocketed close to 150% so far in 2023, bringing its market cap to around $680 billion.
Meta's path to $1 trillion
How can Meta claw its way back to a market cap of at least $1 trillion? The simple answer is to increase its earnings. The more complicated answer is to convince investors that its future earnings potential is greater than previously thought.
I think Meta will be able to grow its earnings significantly. The digital advertising market should rebound, and the full impact of the company's restructuring and layoffs hasn't been felt completely yet. Meta's AI efforts are dramatically improving monetization on Facebook and Instagram Reels.
AI could also, perhaps, be the best way for Meta to persuade investors about its long-term growth prospects. The company is working on incorporating generative AI chat into Messenger and WhatsApp and is developing AI tools to help create videos for ads and posts on Facebook and Instagram. Meta also hopes to deploy AI agents to help businesses with customer support.
These AI efforts could help Meta achieve its metaverse vision as well. CEO Mark Zuckerberg insists that the company isn't scaling back its plans for the metaverse. He thinks AI could be used to assist with creating avatars and virtual worlds.
Two potential obstacles
Meta still might not be the next AI stock to join the $1 trillion club, though. Two potential obstacles stand out, in my view.
First, Tesla (TSLA) could join the club before Meta does. The electric vehicle maker is viewed by some, especially Ark Invest CEO Cathie Wood, as one of the top AI stocks on the market. Tesla's market cap of around $780 billion is also closer to the $1 trillion mark than Meta's.
However, Wall Street appears to be siding more with Team Meta than Team Tesla. I suspect valuation is an important factor. Meta's price-to-earnings-to-growth (PEG) ratio is 0.91 compared to Tesla's PEG multiple of nearly 2.5.
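The PEG comparison above rests on a simple formula: PEG = (price-to-earnings ratio) / (expected annual earnings growth, in percent), so a lower PEG means a stock is cheaper relative to its growth. A minimal sketch with illustrative inputs (the article quotes only the resulting ratios, not the underlying P/E and growth figures, so these numbers are assumptions):

```python
def peg_ratio(pe_ratio, growth_rate_pct):
    """PEG = P/E divided by expected annual earnings growth (in percent)."""
    return pe_ratio / growth_rate_pct

# Hypothetical inputs: a P/E of 22.75 with 25% expected growth gives 0.91,
# matching the PEG quoted for Meta; a P/E of 50 with 20% growth gives 2.5,
# in the neighborhood of the multiple quoted for Tesla.
print(round(peg_ratio(22.75, 25), 2))  # 0.91
print(peg_ratio(50, 20))               # 2.5
```

The ratio is only a rough screen: it hinges entirely on the growth estimate used, which is a forecast, not a measurement.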
Second, Meta could be outplayed by current $1 trillion club leader Apple (AAPL). If Apple's Vision Pro mixed-reality headset is where investors believe technology is headed, they could lose interest in Meta's augmented reality (AR) and virtual reality (VR) dreams.
Zuckerberg wasn't overly impressed with Vision Pro. He reportedly told Meta employees that Apple's headset "could be the vision of the future of computing, but like, it's not the one that I want." However, what Zuckerberg wants just might not be what he gets.
Still, Apple's hefty price tag for its mixed-reality headset could limit adoption. And even if it doesn't, the competition could ultimately help Meta by boosting consumers' interests in AR and VR. Randi Zuckerberg, a former director of market development and spokeswoman for Faceboo...