How new AI tools like ChatGPT can transform human productivity in the enterprise - VentureBeat
July 2, 2023 10:10 AM

Artificial intelligence (AI) has emerged as a revolutionary force, reshaping industries and unlocking unprecedented opportunities for business growth. In today’s fiercely competitive landscape, enterprise decision-makers must recognize and harness the power of AI to enhance human productivity and achieve sustainable success. By using AI technologies effectively, businesses can streamline operations, optimize workflows and empower their workforce with actionable insights. This article looks at how business leaders can use the transformative potential of AI to revolutionize human productivity, with examples and statistics that demonstrate the technology’s impact.

Leveraging generative AI and ChatGPT

AI tools like generative AI models and conversational agents such as ChatGPT have expanded the benefits of AI in transforming human productivity. For example, a case study showed that implementing generative AI for content creation resulted in a 40% reduction in time spent on writing product descriptions, allowing employees to focus on strategic tasks. Additionally, a recent survey found that businesses using conversational agents like ChatGPT experienced a 30% decrease in customer support response times, leading to improved customer satisfaction. These evolving AI tools enable businesses to optimize workflows, enhance collaboration and deliver unique customer experiences, unlocking untapped growth potential in the digital landscape.

Automating repetitive tasks

One of the most profound advantages of AI lies in its ability to automate mundane and time-consuming tasks. By delegating repetitive activities to AI-powered systems, employees can redirect their focus towards high-value, strategic work.
For instance, employing AI-based chatbots for customer support significantly reduces response times, enhances customer satisfaction and frees human agents to handle more complex queries. According to a study by Gartner, businesses can achieve a 25% increase in overall business process efficiency by embracing AI-driven automation. Moreover, the implementation of AI-driven automation can lead to an estimated 70% reduction in costs associated with manual data entry and data processing tasks.

Intelligent data analysis

Data serves as the lifeblood of modern enterprises, yet extracting meaningful insights from vast amounts of data can be a daunting task. Here, AI technologies such as machine learning and natural language processing come into play, enabling the analysis of data at scale, uncovering valuable patterns and providing actionable insights. For example, AI-powered analytics platforms can process customer data to identify trends, preferences and purchasing patterns, allowing businesses to deliver personalized experiences. McKinsey reports that AI-driven data analysis can improve productivity by up to 40% in certain industries. Furthermore, a study conducted by Forrester Consulting found that organizations leveraging AI for data analysis experienced a 15% reduction in decision-making time, enabling them to respond faster to market changes and gain a competitive advantage.

Augmenting decision-making

AI has the potential to augment human decision-making by offering real-time, data-driven recommendations. Business leaders can use AI-powered predictive analytics models to forecast market trends, optimize inventory management and enhance supply chain efficiency.
By incorporating AI into their decision-making processes, organizations can mitigate risks, make well-informed choices and drive better business outcomes. A survey conducted by Deloitte revealed that 82% of early AI adopters experienced a positive impact on their decision-making processes. Moreover, a report by Accenture states that AI can improve decision-making accuracy by 75%, resulting in better resource allocation and higher profita...
The Best AI Stock to Own Could Be Sitting in Your Pocket - The Motley Fool
If you're hunting for artificial intelligence (AI) stocks to buy, you're not alone.
Excitement over AI has surged since OpenAI launched ChatGPT late last year, and it's not just investors who see an opportunity in the new technology. Companies are talking up their AI initiatives more than ever before, a sign that the boom is more than just hype.
Meanwhile, shares of Nvidia (NVDA 3.62%), the semiconductor champ, skyrocketed after management gave much better guidance than expected for the second quarter. This indicates that demand for its AI chips is surging as businesses large and small are looking to leverage new generative AI technologies.
However, many of the well-known AI stocks have already seen their valuations spike as investors have piled into them, even as most have barely shown positive results from the AI boom.
Nvidia, for example, now trades at a price-to-earnings ratio of around 200 and has a market cap of $1 trillion. C3.ai, another AI stock that soared this year, has a price-to-sales ratio of 15, even though revenue growth was flat.
If you're looking for a reasonably priced AI stock that could be a big winner, the answer could be more obvious than you think.

The consumer tech king
Unlike its big tech peers -- including Microsoft, Alphabet, Amazon, and Meta Platforms -- Apple (AAPL 2.31%) has spent little time talking up its ambitions in artificial intelligence, and its management has generally avoided using the buzzy phrase on earnings calls and other presentations.
However, Apple could be one of the biggest winners from the AI boom since it has a suite of devices ready to serve as vehicles for the new technology. In other words, unlike many companies trying to leverage the power of AI, Apple has a business model already built in to capitalize on it: selling devices and the services that go with them.
And right now, there's no device more capable of capitalizing on the AI boom than the Vision Pro, the mixed reality headset that Apple unveiled at its Worldwide Developers Conference in early June and that retails for $3,500.
The Vision Pro uses machine learning to do things like render a full image of your face so you can use FaceTime even though you wear the device over your eyes and it has no full frontal cameras. To do so, the Vision Pro uses its front sensors and a neural network to create what Apple calls "your digital persona." AI is also what allows the Vision Pro to function without using the kind of handheld haptics that the Meta Quest requires, one example of how the Vision Pro is pushing the limits of technology, including AI.
That also forms the backbone of other tools like predictive text, Siri, and new applications, including Journal, which can personalize suggestions taken from your iPhone to help you write.

Why Apple is the easiest AI stock to own
It's unclear if the Vision Pro will be a success, and the new device doesn't go on sale until early next year. But its introduction puts Apple in pole position to control the next computing platform, meaning it's also the company most likely to own the device that serves as the vehicle for AI.
If the technology is as powerful as AI bulls believe it will be, Apple's consumer-tech ecosystem will grow even stronger.
Apple has tons of brand equity in consumer hardware, competitive advantages through its installed base of 2 billion complementary devices, and now the device that could be the next generation of tech hardware.
If you're putting together an AI stock portfolio, Apple is a no-brainer. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Jeremy Bowman has positions in Amazon.com and Meta Platforms. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Apple, Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.
Here's Why Google DeepMind's Gemini Algorithm Could Be Next-Level AI - Singularity Hub
Recent progress in AI has been startling. Barely a week’s gone by without a new algorithm, application, or implication making headlines. But OpenAI, the source of much of the hype, only recently completed their flagship algorithm, GPT-4, and according to OpenAI CEO Sam Altman, its successor, GPT-5, hasn’t begun training yet.
It’s possible the tempo will slow down in coming months, but don’t bet on it. A new AI model as capable as GPT-4, or more so, may drop sooner than later.
This week, in an interview with Will Knight, Google DeepMind CEO Demis Hassabis said their next big model, Gemini, is currently in development, “a process that will take a number of months.” Hassabis said Gemini will be a mashup drawing on AI’s greatest hits, most notably DeepMind’s AlphaGo, which employed reinforcement learning to topple a champion at Go in 2016, years before experts expected the feat.
“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis told Wired. “We also have some new innovations that are going to be pretty interesting.” All told, the new algorithm should be better at planning and problem-solving, he said.

The Era of AI Fusion

Many recent gains in AI have been thanks to ever-bigger algorithms consuming more and more data. As engineers increased the number of internal connections—or parameters—and began to train them on internet-scale data sets, model quality and capability increased like clockwork. As long as a team had the cash to buy chips and access to data, progress was nearly automatic because the structure of the algorithms, called transformers, didn’t have to change much.
Then in April, Altman said the age of big AI models was over. Training costs and computing power had skyrocketed, while gains from scaling had leveled off. “We’ll make them better in other ways,” he said, but didn’t elaborate on what those other ways would be.
GPT-4, and now Gemini, offer clues.
Last month, at Google’s I/O developer conference, CEO Sundar Pichai announced that work on Gemini was underway. He said the company was building it “from the ground up” to be multimodal—that is, trained on and able to fuse multiple types of data, like images and text—and designed for API integrations (think plugins). Now add in reinforcement learning and perhaps, as Knight speculates, other DeepMind specialties in robotics and neuroscience, and the next step in AI is beginning to look a bit like a high-tech quilt.
But Gemini won’t be the first multimodal algorithm. Nor will it be the first to use reinforcement learning or support plugins. OpenAI has integrated all of these into GPT-4 with impressive effect.
If Gemini goes that far, and no further, it may match GPT-4. What’s interesting is who’s working on the algorithm. Earlier this year, DeepMind joined forces with Google Brain. The latter invented the first transformers in 2017; the former designed AlphaGo and its successors. Mixing DeepMind’s reinforcement learning expertise into large language models may yield new abilities.
In addition, Gemini may set a high-water mark in AI without a leap in size.
GPT-4 is believed to have around a trillion parameters, and according to recent rumors, it might be a “mixture-of-experts” model made up of eight smaller models, each a fine-tuned specialist roughly the size of GPT-3. Neither the size nor the architecture has been confirmed by OpenAI, which, for the first time, did not release specs on its latest model.
Similarly, DeepMind has shown interest in making smaller models that punch above their weight class (Chinchilla), and Google has experimented with mixture-of-experts (GLaM).
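The mixture-of-experts idea can be sketched in a few lines of Python: a gating function scores every expert for a given input, only the top-scoring experts actually run, and their outputs are blended using the gate's weights. This is a toy illustration of the routing concept, not GPT-4's or GLaM's actual implementation; the eight scalar "experts" and the hand-made gate below are invented for the example.

```python
import math

def softmax(scores):
    # Normalize gate scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate, top_k=2):
    """Route input x to the top_k experts chosen by the gate,
    then combine their outputs weighted by the gate probabilities."""
    probs = softmax(gate(x))
    # Pick the indices of the top_k highest-probability experts.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Only the selected experts run; the rest are skipped entirely.
    return sum(probs[i] * experts[i](x) for i in ranked)

# Eight tiny "experts" (here just scalar functions) and a toy gate
# that prefers the expert whose index is nearest to the input value.
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
gate = lambda x: [-abs(x - k) for k in range(8)]

print(moe_forward(3.0, experts, gate))
```

Because only `top_k` experts execute per input, total parameter count can grow with the number of experts while per-input compute stays roughly constant, which is the appeal of the approach.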
Gemini may be a bit bigger or smaller than GPT-4, but likely not by much.
Still, we may never learn exactly what makes Gemini tick, as increasingly competitive companies keep the details of their models under wraps. To that end, testing advanced models for ability and controllability as they’re built will become more important, work that Hassabis suggested is also critical for safety. He also said Google might open models like Gemini to outside researchers for evaluation.
“I would love to see academia have early access to these frontier models,” he said.
Whether Gemini matches or exceeds...
How businesses can break through the ChatGPT hype with 'workable AI' - VentureBeat
July 1, 2023 9:10 AM

New products like ChatGPT have captivated the public, but what will the actual money-making applications be? Will they offer sporadic business success stories lost in a sea of noise, or are we at the start of a true paradigm shift? What will it take to develop AI systems that are actually workable? To chart AI’s future, we can draw valuable lessons from the preceding step-change advance in technology: the big data era.

2003–2020: The Big Data Era

The rapid adoption and commercialization of the internet in the late 1990s and early 2000s built and lost fortunes, laid the foundations of corporate empires and fueled exponential growth in web traffic. This traffic generated logs, which turned out to be an immensely useful record of online actions. We quickly learned that logs help us understand why software breaks and which combination of behaviors leads to desirable actions, like purchasing a product. As log files grew exponentially with the rise of the internet, most of us sensed we were onto something enormously valuable, and the hype machine turned up to 11. But it remained to be seen whether we could actually analyze that data and turn it into sustainable value, especially when the data was spread across many different ecosystems. Google’s big data success story is worth revisiting as a symbol of how data turned it into a trillion-dollar company that transformed the market forever.
Google’s search results were consistently excellent and built trust, but the company couldn’t have kept providing search at scale — or all the additional products we rely on Google for today — until AdWords enabled monetization. Now, we all expect to find exactly what we need in seconds, as well as perfect turn-by-turn directions, collaborative documents and cloud-based storage. Countless fortunes have been built on Google’s ability to turn data into compelling products, and many other titans, from a rebooted IBM to the new goliath of Snowflake, have built successful empires by helping organizations capture, manage and optimize data. What was just confusing babble at first ultimately delivered tremendous financial returns. It’s this very path that AI must follow.

2017–2034: The AI Era

Internet users have produced massive volumes of text written in natural language, like English or Chinese, available as websites, PDFs, blogs and more. Thanks to big data, storing and analyzing this text is easy — enabling researchers to develop software that can read all that text and teach itself to write. Fast-forward to ChatGPT arriving in late 2022 and parents calling their kids asking if the machines had finally come alive. It is a watershed moment in the field of AI, in the history of technology, and maybe in the history of humanity. Today’s AI hype levels are right where we were with big data. The key question the industry must answer is: How can AI deliver the sustainable business outcomes essential to bring this step-change forward for good?

Workable AI: Let’s put AI to work

To find viable, valuable long-term applications, AI platforms must embrace three essential elements.
1. The generative AI models themselves
2. The interfaces and business applications that will allow users to interact with the models, which could be a standalone product or a generative AI-augmented back-office process
3. A system to ensure trust in the models, including the ability to continually and cost-effectively monitor a model’s performance and to teach the model so that it may improve its responses

Just as Google united these elements to create workable big data, the AI success stories must do the same to create what I call Workable AI. Let’s look at each of these elements and where we are today:

Generative AI models

Generative AI is unique in its wildness, bringing challenges of unexpected behavior and requiring continual teaching to improve. We can’t fix bugs as we would with tr...
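The trust element, continual monitoring, can be illustrated with a minimal sketch: record a quality score for every model response and flag the model for re-teaching when a rolling average drops below a threshold. The window size, threshold, and 0-to-1 scoring scale here are invented for illustration; a real system would draw its scores from human feedback or automated evaluations.

```python
from collections import deque

class ModelMonitor:
    """Track a rolling window of response-quality scores and flag drift."""
    def __init__(self, window=5, threshold=0.7):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, score):
        # Score in [0, 1], e.g. from human feedback or an automated check.
        self.scores.append(score)

    def needs_reteaching(self):
        # Flag only once the window is full and average quality is low.
        if len(self.scores) < self.scores.maxlen:
            return False
        return sum(self.scores) / len(self.scores) < self.threshold

monitor = ModelMonitor(window=3, threshold=0.8)
for s in [0.9, 0.6, 0.7]:
    monitor.record(s)
print(monitor.needs_reteaching())
```

The rolling window keeps the check cheap: cost is constant per response, which matters when the goal is monitoring that is both continual and cost-effective.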
Will AI Take My Job? It's a Hot Topic for Investors This Summer - Bloomberg
Elon Musk puts a reading paywall on Twitter - The Verge
Elon Musk continues to blame Twitter’s new limitations on AI companies scraping “vast amounts of data” as he announced new “temporary” limits on how many posts people can read. Now unverified accounts will only be able to see 600 posts per day, and for “new” unverified accounts, just 300 in a day. The limits for verified accounts (presumably whether they’re bought as a part of the Twitter Blue subscription, granted through an organization, or verification Elon forced on people like Stephen King, LeBron James, and anyone else with more than a million followers) still allow reading only a maximum of 6,000 posts per day. Shortly after that, Musk tweeted that the rate limits would “soon” increase to 8,000 tweets for verified users, 800 for unverified, and 400 for new unverified accounts. The limitations arrived one day after Twitter suddenly started blocking access for anyone who isn’t logged in, which Musk claimed was necessary because “Several hundred organizations (maybe more) were scraping Twitter data extremely aggressively, to the point where it was affecting the real user experience.” The change is just one of several ways Musk has tried to monetize Twitter in the last several months. The company announced a three-tier API change in March that would begin charging for the use of its API, just three months after finally rolling out the revamped $8 per month Twitter Blue pay-for-verification scheme. Musk has also replaced himself with a new CEO, Linda Yaccarino. The former ad exec from NBC Universal has been hired to restore relationships with advertisers that had slashed their spending on Twitter. As a private company, we know less about Twitter’s financial situation than we did before Musk’s purchase, but the hiring of Yaccarino reflected how important advertising revenue is to the business. 
Limiting access to the site cuts directly against the goal of creating opportunities to see the ad spots companies are paying for, but Musk’s monopoly brain view of Twitter may be obscuring that. Musk is blaming companies trying to ingest data to train large language models (LLMs) like the ones behind ChatGPT, Microsoft Bing, and Google Bard. But he didn’t mention his decision to lay off more than half of Twitter’s staff since taking over the company last fall, including people critical to maintaining its infrastructure. The haphazard layoffs meant the company even had to rehire some engineers who had been let go, and people have repeatedly warned that firing so many people would affect Twitter’s stability. A significant outage in March was the result of a change by a single engineer. Platformer reported Twitter’s Google Cloud bill went unpaid for months until very recently, reflecting a “Deep Cuts Plan” Reuters had previously reported that sought to cut millions of dollars per day in spending on infrastructure costs. Last November, an unnamed Twitter engineer interviewed by MIT Technology Review said that after the staff reductions, “Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways... They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.” In the same article, site reliability engineer Ben Kreuger said, “I would expect to start seeing significant public-facing problems with the technology within six months.” It has been seven. Correction July 1st, 2023 4:55PM ET: A previous version of this story mentioned a response to Mr. Beast as being from Elon Musk himself. In fact, it was from an Elon Musk parody account. It has been removed. We regret the error.
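The tiered caps described above amount to a per-account daily read budget. A minimal sketch, assuming the figures from the announcement (6,000 reads for verified, 600 for unverified, 300 for new unverified accounts); the in-memory counter and reset mechanics are invented for illustration and say nothing about how Twitter actually implements them.

```python
# Daily read caps per account tier, as reported at the time of the change.
DAILY_READ_LIMITS = {"verified": 6000, "unverified": 600, "new_unverified": 300}

class ReadLimiter:
    """Count post reads per account and cut off access at the daily cap."""
    def __init__(self):
        self.reads = {}  # account_id -> reads so far today

    def allow_read(self, account_id, tier):
        limit = DAILY_READ_LIMITS[tier]
        used = self.reads.get(account_id, 0)
        if used >= limit:
            return False  # rate-limited until the daily reset
        self.reads[account_id] = used + 1
        return True

    def reset_day(self):
        # Called once per day to clear all counters.
        self.reads.clear()

limiter = ReadLimiter()
print(all(limiter.allow_read("u1", "new_unverified") for _ in range(300)))
print(limiter.allow_read("u1", "new_unverified"))
```

The first 300 reads for the new unverified account succeed and the 301st is refused, which is the behavior users ran into when the limits went live.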
Charley Walters: Artificial intelligence coming soon to a baseball team near you - St. Paul Pioneer Press
Slide over analytics. Make room for artificial intelligence (AI). It’s for real. It’s coming.
How could it work for the Minnesota Twins?
“It’s funny you asked that — it’s a great question and it’s something I think the whole world is unpacking at the same time,” Twins baseball chief Derek Falvey said the other day. “The speeding up of the development of AI in the last six months of our lives is something we’re all trying to unpack,” he continued. “I don’t have an answer today for it. We’re thinking about it — how does it help us, where can we look at it.
“There’s areas where it naturally fits, when you’re building systems or tools to help you assess player talent or look at stats. We’re already building models that project, ‘What’s this player going to do in the future?’ We’ve been building models forever, where do players play on the field, positioning.”
That’s analytics. “You look at where guys have weaknesses in their zones, where the pitches go, where they swing and miss,” Falvey said. “So it helps you with team planning and those things. That’s true in every sport.”
But, Falvey wonders, “How does AI amplify that? None of us know yet. Are there ways that AI can help you look at that data in a way that we haven’t before? I don’t have an answer to that because we don’t have the tools to do it.” For now.
“But there’s probably something coming based on the rapid escalation of the use of AI that will show up in sports at some point,” Falvey said. “Everyone’s thinking about it. We’re studying with human beings right now and trying to understand what the data tells us. Maybe there’s ways to actually study it with us even knowing what it’s studying. It’s fascinating.”
How might AI transfer to the field?
“Think about it in this context — a guy hits a thousand baseballs over the course of a few seasons,” Falvey said. “All that data exists as to where he hit them, how hard he hit them and where they go directionally. Right now, we plot that on a sheet that shows you a map. Same in basketball — it’s a shot chart where guys make shots from, where they don’t.
“So we have to study that data now and decide where’s our right fielder’s going to play; here’s where our left fielder’s going to play. That’s what we do. There could be a more machine-oriented version of studying that to say, ‘Actually, for this pitcher, for this hitter, the right balance is actually here because of the way he throws his fastball, or because of the way he throws his breaking ball.
“There’s ways that we can’t even comprehend that’s so multi-dimensional. That could be how (AI) works in the future.”
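The positioning exercise Falvey describes, plotting where a hitter's batted balls land and shifting fielders accordingly, can be sketched with made-up data: average the landing coordinates of a hitter's batted balls to suggest where a fielder should shade. The coordinate convention and sample data below are invented for illustration, not drawn from any team's actual models.

```python
def suggest_position(batted_balls):
    """Average landing coordinates of a hitter's batted balls
    (x: negative toward left field, positive toward right field;
    y: distance from home plate) to suggest a positioning shift."""
    n = len(batted_balls)
    avg_x = sum(x for x, _ in batted_balls) / n
    avg_y = sum(y for _, y in batted_balls) / n
    return avg_x, avg_y

# Hypothetical landing spots in feet for one hitter.
balls = [(-120, 250), (-90, 280), (-150, 230), (-60, 300)]
avg_x, avg_y = suggest_position(balls)
print(avg_x, avg_y)  # negative avg_x: a pull-heavy hitter, shade toward left
```

A machine-learning version of this would condition the suggestion on pitcher, pitch type, and count rather than averaging everything together, which is roughly the "for this pitcher, for this hitter" refinement Falvey speculates about.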
— The 3M Open at TPC Twin Cities in Blaine is contracted through 2026. Hollis Cavner, who runs the PGA Tour tournament, said 3M’s recent $10.3 billion lawsuit settlements over water systems contamination won’t affect the tournament’s future.
“Doesn’t affect us at all — that’s a totally separate issue with corporate,” he said.
— Mike Antonovich, the former Greenway High, Gopher, Minnesota Fighting Saint and North Star and mayor of hometown Coleraine, Minn., is recovering after a heart attack last week.
“I’m doing good — I’m very fortunate,” Antonovich said. “I always told my buddies I was invincible. Now I’m not.”
Antonovich, 71, who is among Minnesota’s best-ever high school players, started playing senior hockey a couple times a week in Coleraine for exercise and fun during the COVID outbreak a few years ago.
“I didn’t know what the symptoms were,” he said. “When I played, I had a cough in my chest — I thought it was just because of the cold. Then when I worked out, I could feel something in my chest. It wasn’t sharp or anything. It would go away.
“But when it (heart attack) happened, I was working out in the basement. Nobody was home. When I came up the stairs, there was a little more pain in there. I tried to walk it off, and that didn’t work. Then my wife showed up and she knew what was going on. So she got some aspirin in me.”
Antonovich, who is an amateur scout for Minnesota for the Columbus Blue Jackets, was flown by helicopter from Grand Rapids to St. Luke’s Hospital in Duluth, where he received a stent.
“Got there in a hurry — they basically saved my life,” he said.
Antonovich turns 72 in October.
“If somebody told me I’d still be touc...
Opinion The True Threat of Artificial Intelligence - The New York Times
In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned. This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems. Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, Senator Chuck Schumer called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders. The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts. A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing. Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things. Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run. Discussions of A.G.I. 
are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.” This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral. They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence. But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence. Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market. Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism. Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation. Some of these transformations worked, but they came at an immense cost.
Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else. It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future. Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism,...
Inside the race to build an 'operating system' for generative AI - VentureBeat
June 30, 2023 4:00 AM Credit: VentureBeat made with Midjourney

Generative AI, the technology that can auto-generate anything from text, to images, to full application code, is reshaping the business world. It promises to unlock new sources of value and innovation, potentially adding $4.4 trillion to the global economy, according to a recent report by McKinsey. But for many enterprises, the journey to harness generative AI is just beginning. They face daunting challenges in transforming their processes, systems and cultures to embrace this new paradigm. And they need to act fast, before their competitors gain an edge.

One of the biggest hurdles is how to orchestrate the complex interactions between generative AI applications and other enterprise assets. These applications, powered by large language models (LLMs), are capable not only of generating content and responses, but of making autonomous decisions that affect the entire organization. They need a new kind of infrastructure that can support their intelligence and autonomy.

Ashok Srivastava, chief data officer of Intuit, a company that has been using LLMs for years in the accounting and tax industries, told VentureBeat in an extensive interview that this infrastructure could be likened to an operating system for generative AI: “Think of a real operating system, like MacOS or Windows,” he said, referring to assistant, management and monitoring capabilities. Similarly, LLMs need a way to coordinate their actions and access the resources they need. “I think this is a revolutionary idea,” Srivastava said.
The operating-system analogy helps to illustrate the magnitude of the change that generative AI is bringing to enterprises. It is not just about adding a new layer of software tools and frameworks on top of existing systems. It is also about giving the system the authority and agency to run its own process, for example deciding which LLM to use in real time to answer a user’s question, and when to hand off the conversation to a human expert. In other words, an AI managing an AI, according to Intuit’s Srivastava. Finally, it’s about allowing developers to leverage LLMs to rapidly build generative AI applications. This is similar to the way operating systems revolutionized computing by abstracting away the low-level details and enabling users to perform complex tasks with ease. Enterprises need to do the same for generative AI app development. Microsoft CEO Satya Nadella recently compared this transition to the shift from steam engines to electric power. “You couldn’t just put the electric motor where the steam engine was and leave everything else the same, you had to rewire the entire factory,” he told Wired.

What does it take to build an operating system for generative AI? According to Intuit’s Srivastava, there are four main layers that enterprises need to consider. First, there is the data layer, which ensures that the company has a unified and accessible data system. This includes having a knowledge base that contains all the relevant information about the company’s domain, such as — for Intuit — tax code and accounting rules. It also includes having a data governance process that protects customer privacy and complies with regulations. Second, there is the development layer, which provides a consistent and standardized way for employees to create and deploy generative AI applications. Intuit calls this GenStudio, a platform that offers templates, frameworks, models and libraries for LLM app development.
It also includes tools for prompt design and testing of LLMs, as well as safeguards and governance rules to mitigate potential risks. The goal is to streamline and standardize the development process, and to enable faster and easier scaling. Third, there is the runtime layer, which enables LLMs to learn and improve autonomously, to optimize their performance and cost, and to leverage enterprise data. This is the most exciting and innovati...
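The runtime idea Srivastava describes, an "AI managing an AI" that decides in real time which model should answer a query and when to hand the conversation to a human expert, can be sketched in a few lines. This is a hypothetical illustration, not Intuit's implementation: the handler names, thresholds, and toy confidence function are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    handler: str  # "small-llm", "large-llm", or "human-expert"
    reason: str

def route_query(query: str, confidence: Callable[[str], float]) -> Route:
    """Pick which model (or person) should answer a query."""
    score = confidence(query)
    if score < 0.4:
        # The system is unsure: hand the conversation to a human expert.
        return Route("human-expert", f"low confidence ({score:.2f})")
    if len(query.split()) > 50 or score < 0.7:
        # Long or tricky queries go to the larger, costlier model.
        return Route("large-llm", "complex or uncertain query")
    return Route("small-llm", "simple query; cheaper model suffices")

# Toy confidence estimate: pretend shorter queries are easier to answer.
def toy_confidence(query: str) -> float:
    return max(0.0, 1.0 - 0.02 * len(query.split()))

print(route_query("What is my tax filing deadline?", toy_confidence))
```

In a production system the confidence signal would come from a classifier or from the model itself, and the routing table would live in the runtime layer alongside cost and latency budgets.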
Humane's first gadget is named the “Humane Ai Pin,” and it's coming this year - The Verge
Humane, the buzzy company started by former Apple employees that has been making big promises about an AI-first and post-smartphone future, announced today that its first gadget will be called the Humane Ai Pin. It’ll be powered by “an advanced Snapdragon platform” in partnership with Qualcomm, and it’s coming later this year. That’s really all we know so far. Humane continues to be mysterious about how the Ai Pin works, what exactly it will do, and even what it looks like. (Most mysterious of all: why in the world is “AI” not capitalized? What is “Ai?” Am I supposed to pronounce it like “eye?” I am confident this will infuriate The Verge’s copy desk and me in equal measure for years to come.) The last we saw of this gadget was at the TED conference in April, where co-founder Imran Chaudhri demoed a device — presumably the Pin — onstage. He used it as a voice assistant; made phone calls; received an automated summary of his day; took a picture to get nutrition info on a chocolate bar; and projected a small green screen into his hand. We came out of that demo with far more questions than answers because something about the demo just seemed off. How did the device know to translate Chaudhri’s words from English to French, for instance, when he never asked for a translation? The name announcement does potentially answer one open question about how exactly you’re supposed to wear this thing. In Chaudhri’s TED demo, he appeared to have the device sticking out of a breast pocket — it looked more like a deck of cards than a pin, but presumably, it was a prototype. Calling it a “pin” implies that you might, you know, pin it to yourself in some way rather than needing pockets all the time. Humane also calls it a “clothing-based wearable device” in its press release announcing the name, which suggests something similar. Most of our questions remain unanswered, though.
Other than the name, the only revealing thing about Humane’s release today is that it uses “AI” 22 times and that the Pin “uses a range of sensors that enable contextual and ambient compute interactions.” Which, sure. Humane’s slow drip of information will likely continue for the next few months, so hopefully we’ll start to learn how this works, how you’re supposed to use it to do things, how it connects to the cloud, why a projector is better than a phone screen, and what it’s all going to cost. Still, I’m unabashedly intrigued by the Ai Pin. It’s a huge swing at a new form factor and potentially a whole new idea about how we’re supposed to interact with technology. In a world increasingly full of screens —in our hands, on our bodies, even on our faces —Humane’s going the other way. And it’s going to be fascinating to watch.
AI's Teachable Moment: How ChatGPT Is Transforming the Classroom - CNET
My 12-year-old nephew's bedroom is a shrine to monsters. Intricate Lego dragons loom ominously atop bookshelves jam-packed with reference works for the handmade creatures he painstakingly crafts out of clay. Then there are the paintings. Dozens of them. Plastered over the walls. Giant squid, kaiju, dinosaurs, hulking tentacled beasts of his own invention. His parents have gone to great lengths to nurture this burgeoning creative spirit. They make stop-motion movies as a family. His dad is teaching him 3D art on the computer. Together they're learning to use Unity, the design tool behind video games like Hollow Knight, Cuphead and Pokemon Go. But lately his dad's been second-guessing those decisions. The reason? AI. Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He's unsure. But instead of burying his head in the sand, he's doing what any tech-savvy parent would do: He's teaching his son how to use AI. In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images. The whole family is wrestling with the impacts of AI. His mother, my sister-in-law, is a high school science teacher. She's tackling even bigger issues. She's in the process of teaching an entire generation of children to interact with technology that could transform the workplace over the coming years. The questions are many. How do we deal with the immediate issues of cheating and plagiarism? How do educators prepare children for a future working alongside AI?
And how do teachers, institutions and governments find room to plan for the future?

Reading, writing and AI

ChatGPT, an artificial intelligence chatbot developed by OpenAI, has been immediately transformative. And terrifying. Trained on almost incalculable swaths of existing text, ChatGPT takes prompts from users and generates surprisingly sophisticated answers. If, for instance, you ask for a chocolate cake recipe, it provides all the steps. Using ChatGPT can feel like conversing online with a human being who has access to endless repositories of knowledge. But ChatGPT is far from infallible. The AI tool frequently "hallucinates" wrong answers in response to prompts, and – more troubling – it's been known to generate misinformation. Regardless, the raw numbers speak volumes: It took ChatGPT, which launched in late November, five days to hit 1 million users. It took Facebook 10 months to hit the same number. Twitter needed two years. According to the data, the service regularly sees 1 billion monthly visitors. Reactions to this technology have been broad and far-reaching. Some people see ChatGPT in apocalyptic terms, as a harbinger of humanity's inevitable doom. Others see AI as a utopian technology with the potential to dramatically enhance productivity and transform work as we know it. The US Department of Education has taken notice. In May, it issued a report on AI and the future of teaching, noting that, among other things, AI can support educators, enable new forms of interaction and help address variability in student learning. It also acknowledged worries about student surveillance and the potential for human teachers to be replaced. Globally, there's been a huge response. Stanford's 2023 AI Index noted that as of 2021, 11 countries, including Belgium, China and South Korea, had officially endorsed and implemented an AI curriculum.
An Education Department blog post, published in April, said that within five years, AI will "change the capabilities of teaching and learning tools." For many teachers, AI is already a source of anxiety. "Most teachers, if they're aware of ChatGPT, are a bit freaked out by it," says Dave Hughes, a high school physics teacher in Sydney, Australia. Hughes, who keeps up with most cutting-edge technology, was among the first of his peers to start experimenting with AI and language learning models. He's b...
AMD's AI chips could match Nvidia's offerings, software firm says - Reuters
People stand at the AMD booth during the Mobile World Congress in Shanghai, China June 28, 2023. REUTERS/Nicoco Chan/File Photo June 30 (Reuters) - Artificial intelligence chips from Advanced Micro Devices (AMD.O) are about 80% as fast as those from Nvidia Corp (NVDA.O), with a future path to matching their performance, according to a Friday report by an AI software firm. Nvidia dominates the market for the powerful chips that are used to create ChatGPT and other AI services that have swept through the technology industry in recent months. The popularity of those services has pushed Nvidia's value past $1 trillion and led to a shortage of its chips that Nvidia says it is working to resolve. But in the meantime, tech companies are looking for alternatives, with hopes that AMD will be a strong challenger. That prompted MosaicML, an AI startup acquired for $1.3 billion earlier this week, to conduct a test comparing AI chips from AMD and Nvidia. MosaicML evaluated the AMD MI250 and the Nvidia A100, both of which are one generation behind each company's flagship chips but are still in high demand. MosaicML found AMD's chip could get 80% of the performance of Nvidia's chip, thanks largely to a new version of AMD software released late last year and a new version of open-source software backed by Meta Platforms (META.O) called PyTorch that was released in March. Hanlin Tang, chief technology officer of MosaicML, said that the company believes that further software updates from AMD that are in the works should help its MI250 chip match the performance of Nvidia's A100. "For most (machine learning) chip companies out there, the software is the Achilles heel of it," Tang said, adding that AMD had not paid MosaicML to conduct its research. "Where AMD has done really well is on the software side." Tang said that MosaicML used its own tools, PyTorch and AMD's software to train a large language model without having to make any changes to its code base.
If developers can find AMD's chips at the right price, "you can already switch to these today, they're essentially interchangeable" with Nvidia chips, Tang said. MosaicML sells software that makes it easier for companies to create AI systems inside their own data centers rather than paying for access to those systems from providers like ChatGPT creator OpenAI. The company said it conducted the research to illustrate that its customers have chip options beyond Nvidia. "Mosaic’s results reinforce our strategy of supporting an open and easy to implement software ecosystem for AI training and inference on AMD hardware," AMD said in a statement, adding that it would continue to work with the company to tune its software. Nvidia declined to comment. Reporting by Stephen Nellis in San Francisco; Editing by Aurora Ellis
#AINews, #ArtificialIntelligence, #FutureOfTech, #AIAdvancements, #TechNews, #AIRevolution, #AIInnovation, #AIInsights, #AITrends, #AIUpdates
Inflection AI, Year-Old Startup Behind Chatbot Pi, Raises $1.3 Billion - Forbes
Backed by Microsoft, Nvidia and billionaires Reid Hoffman, Bill Gates and Eric Schmidt, the startup led by ex-DeepMind leader Mustafa Suleyman is valued at $4 billion — and claims to have the world’s best AI hardware setup. Less than two months after the launch of their first chatbot Pi, artificial intelligence startup Inflection AI and CEO Mustafa Suleyman have raised $1.3 billion in new funding. Microsoft, Nvidia and three of tech’s most influential billionaires led the investment in the Palo Alto-based startup launched in early 2022. LinkedIn cofounder Reid Hoffman, Microsoft cofounder Bill Gates and former Google CEO Eric Schmidt all personally invested, with Nvidia the sole new investor among the group. The new funding values Inflection at $4 billion, according to a source with knowledge of the transaction. Inflection said the company and Suleyman remained majority shareholders and declined further comment. In an interview, Suleyman said that the group of mostly insiders proposed the additional investment after Inflection was “overwhelmed with offers” following the launch of Pi, its conversational chatbot launched in May. “I think people can see that it’s just the tip of the iceberg,” Suleyman told Forbes. “There’s so much further to go after [Pi] validates the core thesis, which is that conversation is the new interface.” Some details of Inflection’s new deal with Microsoft and Nvidia are, like Suleyman’s iceberg, still largely out of view. He declined to provide a breakdown of how much of the $1.3 billion raised included cash equivalents (such as computing credits) but said that a “very, very large chunk” was in dollars. “We have all the cash we need to run and operate,” he added.
Inflection also declined to comment on how much equity Microsoft and Nvidia now held in the business. But Suleyman said neither company commands ownership-like control over it, or other preferential rights. “In practice, it was a very traditional round,” he said. “There’s no IP movement, and we still are entirely independent and at liberty to do whatever we want on the commercial front, and partner with whomever we want. So there are no real restrictions,” he said. What is clear: the round significantly deepens Inflection’s ties with Microsoft and Nvidia, two key partners in the AI race. Microsoft, also a major investor in OpenAI, is Inflection’s cloud computing partner; Nvidia, meanwhile, has been working closely with Inflection on the deployment of its flagship H100 graphics processing unit (GPU), the current gold standard for AI training and powering large language models like OpenAI’s GPT-3. Nvidia worked closely with Inflection and service provider CoreWeave to co-develop Inflection’s current H100 cluster; Inflection paused its own work for Nvidia to run a recent test that Nvidia announced this week had set records on eight tests of current AI model training benchmarks, completing a benchmark based on GPT-3 in less than 11 minutes. That test, which matched the computational power of training a model that took an estimated three to six months to develop, ran on Inflection’s 3,584 H100 GPUs already in service, Suleyman noted. But in the wake of this funding round and partnership, Inflection’s growing horsepower is about to get a turbo-charge. Nvidia and CoreWeave (which helps physically deploy the GPUs) are now in the process of helping Inflection install many thousands more. Once fully operational, Inflection’s new cluster will run 22,000 H100s.
Inflection believes that to be the largest GPU cluster for AI applications in the world, ahead of Meta’s 16,000 GPU cluster announced in May. (Just how many OpenAI is using is currently unknown; Nvidia announced last November it planned to incorporate “tens of thousands” of GPUs into Microsoft’s Azure cloud service.) Against the world’s largest clusters overall, Inflection said it estimated that it would trail only Frontier, the supercomputer maintained by the Oak Ridge National Laboratory in Tennessee.
“Microsoft has been amazing, they’re turbo-charging us, they are our anchor,” Suleyman said. “And as a result of our collaboration with Nvidia, we’ve been able to tune our cluster to get it to be the absolute best in the world. We can objectively say now that we have the...
Video: Melinda Gates is raising alarm bells about bias in AI - CNN
Melinda Gates on why she is 'very nervous' about AI: Melinda French Gates is raising the alarm that more women must be involved in developing AI tools. She explains why on "CNN This Morning." 01:50 - Source: CNN
Decoding Repetitive Negative Thoughts: Machine Learning Predicts Rumination - Neuroscience News
Summary: A team of researchers developed a predictive model to recognize patterns of persistent negative thinking, or rumination, using machine learning. Researchers hypothesized that the variance of dynamic connectivity between certain brain regions, such as the dorsal medial prefrontal cortex (dmPFC), could be associated with rumination. Brain activity was measured in participants using functional Magnetic Resonance Imaging (fMRI). This innovative model may provide a valuable biomarker for depression, aiding in early detection and monitoring treatment progress.

Key Facts: The research team successfully trained machine learning models to approximate rumination scores based on participants’ fMRI data. Of all the Default Mode Network regions, only the model based on the dorsal medial prefrontal cortex (dmPFC) was successful at predicting rumination scores. The model was also successful at predicting depression scores in actual patients with Major Depressive Disorder (MDD), highlighting its potential as a valuable biomarker for depression.

Source: Institute for Basic Science

Our minds often get trapped in repetitive thoughts, such as past mistakes, regrets, insecurities, or unresolved conflicts. This pattern of persistent negative thinking, called rumination, can have detrimental effects on mental health, leading to conditions like depression and anxiety. Recognizing rumination as a major risk factor for depression, researchers have been working to identify its neural signature and develop early detection methods.
A team of scientists led by KIM Jungwoo from the Center for Neuroscience Imaging Research (CNIR) within the Institute for Basic Science (IBS), in collaboration with researchers from the University of Arizona and Dartmouth College, conducted a study to develop a predictive model of rumination by using the power of machine learning. Previous research has linked a network of brain regions called the ‘default mode network’ (DMN) to rumination. However, the specific region responsible for individual differences in rumination remained unclear. The team hypothesized that the variance of dynamic connectivity, which measures the stability of interactions between brain regions over time, could be associated with rumination due to its temporal persistency. To test this, they utilized functional Magnetic Resonance Imaging (fMRI) to measure brain activity in healthy participants at rest. Using the variance of dynamic connectivity between each DMN region and brain regions across the entire brain as inputs and self-report measures of rumination scores as outputs, the researchers trained machine learning models to approximate rumination scores based on participants’ fMRI data. Of all the DMN regions, only the model based on the dorsal medial prefrontal cortex (dmPFC) was successful at predicting rumination scores in healthy participants. Additionally, the dynamic connectivity between the dmPFC and the inferior frontal gyrus, as well as the cerebellum, was found to be particularly important in predicting rumination. These findings highlight the significance of the dmPFC in rumination and depression, which is in line with previous research linking that region with high-level, reflective processes in individuals. Notably, the model was also successful at predicting depression scores in actual patients with Major Depressive Disorder (MDD).
Hence the model shows promise as a valuable biomarker for depression, aiding in the identification of individuals at risk and monitoring treatment progress. By shedding light on the neural basis of rumination and its relevance to depression, this study contributes to the advancement of mental health research and may lead to more effective interventions and improved outcomes for individuals with depression. Professor WOO Choong-Wan, the lead author, stated, “The dynamic patterns of natural thought streams greatly influence our mood and emotional states. “Rumination is one of the most important thought patterns, and this study shows that the tendency to ruminate could be decoded from brain connectivity mea...
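In rough code, the pipeline described above (sliding-window "dynamic connectivity" variance as features, self-reported rumination scores as targets) might look like the following sketch. This is an illustrative reconstruction run on synthetic data, not the study's actual code; the window size, 50% overlap, and plain linear model are assumptions made for the example.

```python
import numpy as np

def dynamic_connectivity_variance(seed, others, window=20):
    """seed: (T,) timeseries of one region (e.g. dmPFC); others: (T, R).
    Returns (R,) variance over sliding windows of the seed-to-region
    correlations, i.e. how unstable each connection is over time."""
    T, _ = others.shape
    corrs = []
    for start in range(0, T - window + 1, window // 2):  # 50% overlap
        s = seed[start:start + window]
        o = others[start:start + window]
        s_z = (s - s.mean()) / s.std()
        o_z = (o - o.mean(axis=0)) / o.std(axis=0)
        corrs.append(s_z @ o_z / window)  # Pearson r per region
    return np.var(np.array(corrs), axis=0)

# Synthetic stand-ins for fMRI timeseries and rumination scores.
rng = np.random.default_rng(0)
n_subj, T, R = 30, 200, 10
X = np.array([dynamic_connectivity_variance(rng.standard_normal(T),
                                            rng.standard_normal((T, R)))
              for _ in range(n_subj)])
y = rng.standard_normal(n_subj)  # self-report rumination scores
# Fit a linear model mapping connectivity variance to scores.
design = np.c_[np.ones(n_subj), X]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
```

In the actual study the fitted model would then be tested on held-out participants, which is how the dmPFC-based model's predictive power was established.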
Microsoft Adds AI Shopping Tools to Bing and Edge - CNET
Microsoft is adding more artificial intelligence components into its products. This week, the company announced it's folding new AI-powered shopping tools into its Bing search engine and Edge web browser. The news follows Microsoft in February unveiling Bing search powered by the large language technology behind ChatGPT, calling its search engine an "AI-powered co-pilot for the web." Search results began incorporating info from OpenAI, and Bing added a chat window to help you with things like making shopping lists, summarizing PDFs, generating LinkedIn posts and getting advice on your queries. In early May Microsoft opened the Bing AI chatbot to people with a Microsoft account, and later that month the company made it available to everyone else. Here's a look at the three main areas where AI will now be included.

Buying guides

Bing will use AI to generate buying guides tailored to your shopping-related searches. Those guides will take a term such as "college supplies" and offer categories, product suggestions, and tables comparing items. The buying guides can be accessed in Bing Chat or in the Edge sidebar. The Bing guides are available now in the US and coming soon to other markets. The Edge buying guides are starting to roll out worldwide.

Review summaries

Some online shoppers love to delve into their research, carefully reading multiple reviews and weeding out brands that don't pass muster. If you need an answer quickly, though, or just hate all that detailed online analysis, Microsoft now promises to let you skip it. The AI-aided Bing Chat will suggest what aspects of a purchase to consider, and then users can ask the program to briefly summarize what online users are saying. Review summaries are beginning to roll out worldwide this week.

Price match

Once you've picked out your product, the Microsoft shopping tools home in on price.
The company has partnered with various US retailers who have existing price-match policies. Even after a purchase, the feature will monitor the item's price and help you request a match if the price on your item drops. Price comparison and history features are built into the Edge browser, and online coupons and cash-back offers can be applied automatically. The Edge sidebar will also remember your shipping confirmation and tracking number info, so you don't have to hunt it up in your inbox. These features are already available in select markets. Editors' note: CNET is using an AI engine to help create some stories.
How to Tackle AI—and Cheating—in the Classroom - WIRED
This past spring, as I closed out my 18th year of teaching, I felt anxiety that I’d never before felt at the end of a school year. By the time grades are submitted and signs of summer arrive, teachers are typically able to breathe for the first time in nine months. Instead of the relaxation, joy, and accomplishment that typically await the end of an academic year, I was consumed with worry that this might be the last time in a nearly two-decade career that I taught a class without having to worry about AI. I get it: AI has technically been around forever, and natural language processing tools such as OpenAI’s ChatGPT are built on decades of research. Anyone who has used spellcheck or language translation apps or heard a spoken text message has used language processing tools driven by AI technology. But many of the teachers with whom I’m acquainted haven’t been too concerned about the extent to which AI might infiltrate our classrooms until now. Most teachers keep up with technology to a reasonable extent and do our best to teach our students how to use it responsibly. Many view technology as a teaching asset, and I’ve long believed that students are more engaged when their lessons make ample use of it. However, as the old Latin saying goes, all things change, and we change with them. No one knows this reality better than teachers. When ChatGPT exploded onto the mainstream last November, we could not have anticipated how our work might be impacted. As it turned out, ChatGPT was the fastest-growing consumer application in history, reaching 100 million active users a mere two months after launch, according to a report by Reuters. For context, it took TikTok nine months and Instagram two years to achieve the same milestone, according to data from Sensor Tower, a digital data analysis firm. Suddenly, doing my best didn’t seem good enough.
By the time the next academic year kicks into high gear, I will need knowledge about AI that didn't seem at all urgent or even necessary one year ago. I'll spend a good part of this summer learning as much as I can about how AI affects education, students, and classroom spaces. Perhaps most important, I'll need to get smarter about how to ethically incorporate AI into my teaching. With these goals in mind, I began a quest for resources in the spirit of getting familiar with AI. After all, the best defense is a good offense. Here are some of the things I learned.
Ethics and AI in Education
Concerns about whether computers and robots will replace human beings in any profession are as old as the day is long, and there is real apprehension that AI will increase income disparity across many jobs and professions—especially teaching. These issues are legitimate (and frightening) and need to be addressed. But depending on who you ask, AI either is or isn't likely to replace teachers in the near future. Bill Gates famously remarked that AI is on the brink of being just as good as teachers at the work of teaching (implying, for some, that we're soon to be replaced), but he would say that. Gates has invested billions into his own ideas about how education should work and likely wants to see a return on his investment–an issue that raises questions of ethics in its own right.
Chipmaker Micron beats on demand from booming AI, easing glut - Reuters
[1/2]The company logo is seen on the Micron Technology Inc. offices in Shanghai, China May 25, 2023. REUTERS/Aly Song/File Photo June 28 (Reuters) - Micron Technology (MU.O) beat estimates for third-quarter results on Wednesday, powered by demand for its memory chips from the rapidly growing artificial intelligence sector and an easing supply glut in its traditional PC and smartphone markets. Shares of the company rose more than 2% in trading after the bell. They have gained 34% this year on bets that use of the company's memory chips in servers for generative AI services will skyrocket following the popularity of ChatGPT, OpenAI's chatbot. "The recent acceleration in the adoption of generative AI is driving higher-than-expected industry demand for memory and storage for AI servers, while traditional server demand for mainstream data center applications continues to be lackluster," CEO Sanjay Mehrotra said. Customers continue to reduce excess inventory, leading to improved pricing trends and increased confidence that the industry has passed the bottom for growth and revenue, he added. After a surge in demand during the pandemic, consumer spending on smartphones and personal computers hit a trough, driving down prices and causing a buildup of inventories. "We believe the current memory industry inventory correction is now behind us," said Summit Insights Group managing director Kinngai Chan. Industry checks indicate some green shoots in the form of demand stabilization, Chan added, even as demand for PCs, smartphones, and servers is expected to remain mixed in the second half of the year. Micron's third-quarter revenue of $3.75 billion beat estimates of $3.65 billion, while its fourth-quarter revenue forecast of about $3.9 billion, plus or minus $200 million, was largely in line with expectations, according to Refinitiv data. The company's adjusted net loss of $1.43 per share was narrower than estimates for a $1.58 per share loss.
CHINA LOOMS
Chipmakers are also caught up in the U.S.-China technology spat, with the Biden administration reportedly considering updated restrictions designed to slow the flow of artificial intelligence chips to China. Last month, China's cyberspace regulator failed Micron's products in a security review and barred purchases by operators of key infrastructure. Micron, the biggest U.S. memory chipmaker, reiterated on Wednesday that several of its customers have been contacted by Chinese government representatives about the future use of the company's products. The company has said up to half of its China market share is at risk due to the CAC decision. "Our goal is to gain share in other parts of the market and retain our global share, and this is not an instantaneous process, it takes time to play out," Chief Business Officer Sumit Sadana told Reuters. Reporting by Akash Sriram in Bengaluru; Editing by Sriraj Kalluvila. Our Standards: The Thomson Reuters Trust Principles. Akash Sriram, Thomson Reuters: Akash reports on technology companies in the United States, electric vehicle companies, and the space industry. His reporting usually appears in the Autos, Transportation and Technology sections. He has a postgraduate degree in Conflict, Development, and Security from the University of Leeds. Akash's interests include music, football (soccer), and Formula 1.
Illegal trade in AI child sex abuse images exposed - BBC
By Angus Crawford and Tony Smith, BBC News. Paedophiles are using artificial intelligence (AI) technology to create and sell life-like child sexual abuse material, the BBC has found. Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon. Patreon said it had a "zero tolerance" policy about such imagery on its site. The National Police Chiefs' Council said it was "outrageous" that some platforms were making "huge profits" but not taking "moral responsibility". And GCHQ, the government's intelligence, security and cyber agency, has responded to the report, saying: "Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse material lies in AI-generated content." The makers of the abuse images are using AI software called Stable Diffusion, which was intended to generate images for use in art or graphic design. AI enables computers to perform tasks that typically require human intelligence. The Stable Diffusion software allows users to describe, using word prompts, any image they want - and the program then creates the image. But the BBC has found it is being used to create life-like images of child sexual abuse, including of the rape of babies and toddlers. UK police online child abuse investigation teams say they are already encountering such content. Image caption: Journalist Octavia Sheepshanks says there has been a "huge flood" of AI-generated images. Freelance researcher and journalist Octavia Sheepshanks has been investigating this issue for several months. She contacted the BBC via children's charity the NSPCC in order to highlight her findings. "Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they're [paedophiles] talking about toddlers," she said.
A "pseudo image" generated by a computer which depicts child sexual abuse is treated the same as a real image and is illegal to possess, publish or transfer in the UK. The National Police Chiefs' Council (NPCC) lead on child safeguarding, Ian Critchley, said it would be wrong to argue that because no real children were depicted in such "synthetic" images - that no-one was harmed. He warned that a paedophile could, "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child". Abuse images are being shared via a three-stage process: Paedophiles make images using AI software They promote pictures on platforms such as Japanese picture sharing website called Pixiv These accounts have links to direct customers to their more explicit images, which people can pay to view on accounts on sites such as Patreon Some of the image creators are posting on a popular Japanese social media platform called Pixiv, which is mainly used by artists sharing manga and anime. But because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is not illegal, the creators use it to promote their work in groups and via hashtags - which indexes topics using key words. A spokesman for Pixiv said it placed immense emphasis on addressing this issue. It said on 31 May it had banned all photo-realistic depictions of sexual content involving minors. The company said it had proactively strengthened its monitoring systems and was allocating substantial resources to counteract problems related to developments in AI. Ms Sheepshanks told the BBC her research suggested users appeared to be making child abuse images on an industrial scale. "The volume is just huge, so people [creators] will say 'we aim to do at least 1,000 images a month,'" she said. 
Comments by users on individual images in Pixiv make it clear they have a sexual interest in children, with some users even offering to provide images and videos of abuse that were not AI-generated. Ms Sheepshanks has been monitoring some of the groups on the platform. "Within those groups, which will have 100 members, people will be sharing, 'Oh, here's a link to real stuff,'" she says. "The most awful stuff, I didn't even know words [the descriptions] like that existed."
Different pricing levels
Many of the accounts on Pixiv include links in their biographies directing people to what they call their "uncensored content" on the US-based content sharing site Patreon. P...
How Easy Is It to Fool A.I.-Detection Tools? - The New York Times
The pope did not wear Balenciaga. And filmmakers did not fake the moon landing. In recent months, however, startlingly lifelike images of these scenes created by artificial intelligence have spread virally online, threatening society’s ability to separate fact from fiction.
To sort through the confusion, a fast-burgeoning crop of companies now offer services to detect what is real and what isn’t.
Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.
To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.
Consider this example:
[Image omitted; labeled "Generated by A.I."]
This image appears to show the billionaire entrepreneur Elon Musk embracing a lifelike robot. The image was created using Midjourney, the A.I. image generator, by Guerrero Art, an artist who works with A.I. technology.
Despite the implausibility of the image, it managed to fool several A.I.-image detectors.
[Chart omitted: test results from the image of Mr. Musk.]
The detectors, including versions that charge for access, such as Sensity, and free ones, such as Umm-maybe’s A.I. Art Detector, are designed to detect difficult-to-spot markers embedded in A.I.-generated images. They look for unusual patterns in how the pixels are arranged, including in their sharpness and contrast. Those signals tend to be generated when A.I. programs create images.
But the detectors ignore all context clues, so they don’t process the existence of a lifelike automaton in a photo with Mr. Musk as unlikely. That is one shortcoming of relying on the technology to detect fakes.
Several companies, including Sensity, Hive and Inholo, the company behind Illuminarty, did not dispute the results and said their systems were always improving to keep up with the latest advancements in A.I.-image generation. Hive added that its misclassifications may result when it analyzes lower-quality images. Umm-maybe and Optic, the company behind A.I. or Not, did not respond to requests for comment.
To conduct the tests, The Times gathered A.I. images from artists and researchers familiar with variations of generative tools such as Midjourney, Stable Diffusion and DALL-E, which can create realistic portraits of people and animals and lifelike portrayals of nature, real estate, food and more. The real images used came from The Times’s photo archive.
[Seven example images omitted; images were cropped from their original size.]
Detection technology has been heralded as one way to mitigate the harm from A.I. images.
A.I. experts like Chenhao Tan, an assistant professor of computer science at the University of Chicago and the director of its Chicago Human+AI research lab, are less convinced.
“In general I don’t think they’re great, and I’m not optimistic that they will be,” he said. “In the short term, it is possible that they will be able to perform with some accuracy, but in the long run, anything special a human does with images, A.I. will be able to re-create as well, and it will be very difficult to distinguish the difference.”
Most of the concern has been on lifelike portraits. Gov. Ron DeSantis of Florida, who is also a Republican candidate for president, was criticized after his campaign used A.I.-generated images in a post. Synthetically generated artwork that focuses on scenery has also caused confusion in political races.
Many of the companies behind A.I. detectors acknowledged that their tools were imperfect and warned of a technological arms race: The detectors must often play catch-up to A.I. systems that seem to be improving by the minute.
“Every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator,” said Cynthia Rudin, a computer science and engineering professor at Duke University, where she is also the principal investigator at the Interpretable Machine Learning Lab. “The generators are designed to be abl...
IDF will run entirely generative AI very soon - Israeli cyber chief - The Jerusalem Post
"I estimate that within a few years, every area of warfare will be based on generative AI information," Maj.-Gen. Eran Niv said.
Published: JUNE 28, 2023 13:54
The IDF’s Digital Transformation Division (photo credit: IDF SPOKESPERSON UNIT)
The entire Israeli military will run on generative artificial intelligence (AI) within a few years, IDF Information Technology and Cyber Commander Maj.-Gen. Eran Niv said on Wednesday. Speaking at the Tel Aviv University Cyber Week Conference, Niv said, "Artificial intelligence is a phenomenon which is trending and expanding, with a focus on generative AI. This is a revolution which is increasing our capabilities, but in parallel increasing our reliance on digital infrastructure in every area." "I estimate that within a few years, every area of warfare will be based on generative AI information. Without a strong and effective digital basis, no one will be able to prosecute a war in any area," said the IDF cyber chief. The major general stated, "Without a strong digital basis, we will not be able to manage large operations."
Image caption: Eran Niv speaks at an evening in commemoration of Lieutenant Colonel Moshe Mualem (credit: GERSHON ELINSON/FLASH90)
IDF cyber chief talks vision of 'digital front for the battlefield'
Next, he said, "In the modern battlefield, all of the tools, from drones to tanks to sea vessels, and others, can transfer information to all of the other platforms, and all of them will be interconnected. This is the vision of establishing a digital front for the battlefield." Continuing, he stated, "The digital arena will transform all of the other areas of war into being stronger - in the air, in the sea, and on the land."
Snowflake, Nvidia partner to enable generative AI app development in the Snowflake Data Cloud - VentureBeat
June 26, 2023 5:00 PM Snowflake and Nvidia have partnered to provide businesses a platform to create customized generative artificial intelligence (AI) applications within the Snowflake Data Cloud using a business's proprietary data. The announcement came today at the Snowflake Summit 2023. Integrating Nvidia's NeMo platform for large language models (LLMs) and its GPU-accelerated computing with Snowflake's capabilities will enable enterprises to harness their data in Snowflake accounts to develop LLMs for advanced generative AI services such as chatbots, search and summarization. Manuvir Das, Nvidia's head of enterprise computing, told VentureBeat that this partnership distinguishes itself from others by enabling customers to customize their generative AI models over the cloud to meet their specific enterprise needs. They can "work with their proprietary data to build … leading-edge generative AI applications without moving them out of the secure Data Cloud environment. This will reduce costs and latency while maintaining data security." Jensen Huang, founder and CEO of Nvidia, emphasized the importance of data in developing generative AI applications that understand each company's unique operations and voice. "Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses," Huang said in a written statement.
According to Nvidia, the collaboration will provide enterprises with new opportunities to utilize their proprietary data, which can range from hundreds of terabytes to petabytes of raw and curated business information. They can use this data to create and refine custom LLMs, enabling business-specific applications and service development.
Streamlining generative AI development through the cloud
Nvidia's Das asserts that enterprises using customized generative AI models trained on their proprietary data will maintain a competitive advantage over those relying on vendor-specific models. He said that employing fine-tuning or other techniques to customize LLMs produces a personalized AI model that enables applications to leverage institutional knowledge — the accumulated information pertaining to a company's brand, voice, policies, and operational interactions with customers. "One way to think about customizing a model is to compare a foundational model's output to a new employee that just graduated from college, compared to an employee who has been at the company for 20+ years," Das told VentureBeat. "The long-time employee has acquired the institutional knowledge needed to solve problems quickly and with accurate insights." Creating an LLM involves training a predictive model using a vast corpus of data. Das said that to achieve optimal results, it is essential to have abundant data, a robust model and accelerated computing capabilities. The new collaboration encompasses all three factors. "More than 8,000 Snowflake customers store exabytes of data in Snowflake Data Cloud. As enterprises look to add generative AI capabilities to their applications and services, this data is fuel for creating custom generative AI models," said Das.
“Nvidia NeMo running on our accelerated computing platform and pre-trained foundation models will provide the software resources and compute inside Snowflake Data Cloud to make generative AI accessible to enterprises.” Nvidia’s NeMo is a cloud-native enterprise platform that empowers users to build, customize and deploy generative AI models with billions of parameters. Snowflake intends to host and run NeMo within the Snowflake Data Cloud, allowing customers to develop and deploy custom LLMs for generative AI applications. “Data is the fuel of AI,” said Das. “By creating custom models using their dat...
Nvidia brings its AI computing platform to cloud data firm Snowflake - Reuters
[1/4]Snowflake Chairman and CEO Frank Slootman presents a snowboard as a gift to NVIDIA CEO Jensen Huang at Snowflake Summit 2023, in Las Vegas, Nevada, U.S. June 26, 2023. Courtesy of Snowflake/Handout via REUTERS OAKLAND, California, June 26 (Reuters) - Snowflake (SNOW.N), a cloud data analytics company, is partnering with computing company Nvidia (NVDA.O) to allow customers ranging from financial institutions to healthcare and retail firms to build AI models using their own data. The two companies announced the partnership at Snowflake Summit 2023 on Monday. "In the old days, in small data computing, you moved data to the computer," Nvidia Chief Executive Jensen Huang told Reuters. "But when you have giant amounts of data like Snowflake does, and the pile of proprietary data ... data that's so valuable to a company, then you move the compute to the data." In this case, Nvidia is taking a "fairly engineering intensive" move of embedding its NeMo platform for training and running generative AI models into the Snowflake Data Cloud, said Huang. The partnership comes as chatbot ChatGPT has pushed many companies to find their AI strategies and has propelled Nvidia, which provides the main hardware for AI, to becoming a trillion-dollar company. "This is significant. This is the last mile that we've been waiting for 40 years," said Frank Slootman, Chairman and CEO of Snowflake. "Every industry is on this. They used to say software is eating the world. Well, now data is eating software," he said about the importance of data today. Slootman said companies that use Snowflake to manage their data will now be able to use their own data to train new AI models to gain an advantage in business without risking losing control of it. No financial details of the partnership were disclosed, but Huang said Nvidia would benefit as more customers use computing for AI work. "We sell more chips, and we have an operating system for AI called Nvidia AI Enterprise. 
And that operating system makes it possible for our chips to process AI," said Huang. Nvidia charges customers for the use of its Nvidia AI Enterprise software. Reporting By Jane Lanhee Lee
Editing by Nick Zieminski. Jane Lee, Thomson Reuters: Reports on global trends in computing, from covering semiconductors and the tools to manufacture them to quantum computing. Has 27 years of experience reporting from South Korea, China, and the U.S., and previously worked at the Asian Wall Street Journal, Dow Jones Newswires and Reuters TV. In her free time, she studies math and physics with the goal of grasping quantum physics.
AI cuts treatment time for cancer radiotherapy - BBC
Image caption: Head and neck cancers require bespoke masks to help target treatment. By Kate Lamble, BBC Newsnight. A new type of artificial-intelligence technology that cuts the time cancer patients must wait before starting radiotherapy is to be offered at cost price to all NHS trusts in England. It helps doctors calculate where to direct the therapeutic radiation beams, to kill cancerous cells while sparing as many healthy ones as possible. Researchers at Addenbrooke's Hospital trained the AI program with Microsoft. It has been a decade in the making, they say. For each patient, doctors typically spend between 25 minutes and two hours working through about 100 scan cross-sections, carefully "contouring" or outlining bones and organs. But the AI program works two and a half times quicker, the researchers say. When treating the prostate gland, for example, medics want to avoid damage to the nearby bladder or rectum, which could leave patients with lifelong continence issues. "That can get so bad that a patient's life becomes dominated by that," Dr Raj Jena of Addenbrooke's Hospital, Cambridge, who has been leading the work on treating patients with head, neck and prostate cancers, told BBC Two's Newsnight programme. "I know patients where they've got a map of the cities that they're going to, so they know where all the loos are." Dr Jena worked with Microsoft to train a program called InnerEye on data from previous patients. The NHS Artificial Intelligence Laboratory then gave Addenbrooke's £500,000 to fund the necessary safety checks and evaluations. And the program is now being given to a manufacturer that has agreed to allow other NHS trusts to access the cloud-based technology at cost price.
'90% accurate'
The government has been investing in AI projects across the NHS - but this is the first NHS-developed AI program released as a medical-imaging device. Doctors still check each of the contours drawn by the AI program.
But the researchers say it is about 90% accurate, with clinicians approving its work without any corrections about two-thirds of the time. "Our consultant colleagues preferred to start with the work of the AI than even the work of their consulting colleagues," Dr Jena said. 'Formidable force' Royal College of Radiologists president Dr Katharine Halliday said: "We are very excited about the potential of AI in replacing some processes and procedures, including within diagnostics and cancer therapy. "AI has the capability of speeding up the diagnostic process, helping doctors catch disease earlier and giving patients the best possible chance of recovery. "Clinical radiologists interpret complex scans and guide treatment or surgery - there is no question that real-life clinical radiologists are essential and irreplaceable. "However, a clinical radiologist with the data, insight and accuracy of AI is, and will increasingly be, a formidable force in patient care. "While AI shows great promise and will certainly help free up time for a workforce under strain, it cannot replace highly trained and skilled professionals." Related Internet Links The BBC is not responsible for the content of external sites.
Tempo's new take on AI personal training adds 3D body scans and dynamic reps - The Verge
Fitness tech may have figured out cardio, but strength training is still an area where at-home fitness struggles. The Peloton Guide hasn't taken off, Lululemon is floundering a bit with the Mirror, Tonal had a bunch of layoffs, and it's generally easier for city dwellers to go to the gym than store a whole dumbbell rack in their living room. But Tempo is back with a big update to its at-home strength training system, adding full body scans and AI-powered classes that adapt to your performance in real time while providing form feedback. "Finally, for the first time, we're able to connect your body and workout to an AI and have them actually talk to each other to optimize your workout in real time," co-founder and CEO Moawia Eldeeb told The Verge while demoing Tempo's new features over Zoom. The gist, according to Eldeeb, is Tempo will pull all your fitness data — including biometrics collected by the Apple Watch or other wearables with HealthKit integrations — to get a holistic picture of your readiness. That includes your heart rate variability (HRV), sleep quality, and what other workouts you've recorded on other platforms like Strava or Garmin that week. Depending on what the data says, the Tempo app will now recommend the difficulty and intensity level for your workout and what weights you should be using, and adjust your workout targets based on your real-time performance. When reviewing the Tempo Move, I found good ideas were marred by bad connectivity. It'll be interesting to see whether these new updates and added AirPlay compatibility address those issues.
Photo by Victoria Song / The Verge
For example, Eldeeb says Tempo will now ask users how many additional reps they could've done at the end of a set. That will help Tempo's AI determine whether you were working out at the right difficulty. Depending on your answer, you'll get recommended either more reps at the same weight, fewer reps, or perhaps a recommendation to lift heavier.
Likewise, if your heart rate zone is too high after a set, the app will automatically pause workouts and lengthen your recovery time until your heart rate settles. This might sound like common sense if you’re a strength training pro, but it’s an area newbies often struggle to master without help. Tempo is also leveraging the iPhone’s cameras to generate optional full-body composition scans. It works similarly to the controversial feature that Amazon debuted with its now-defunct Halo trackers. In a live video demo, I watched as a Tempo trainer spun around in a circle, and then poof, a 3D image appeared in the app itself. According to Eldeeb, the app takes 150 photos to reconstruct your body, and using your weight, height, and gender, it can estimate how much lean muscle mass and body fat you have. Plus, it provides body measurements (i.e., biceps, calves, quads, etc.) The idea is so you can measure your progress — whether it be losing fat, gaining muscle, or both — without resorting to a number on the scale. However, the image part of it is optional. “You can just say, ‘Hey, I want to keep my avatar. I want progress over time that I can see.’ Or you can switch and say you’re done, delete [the image], and keep the numbers,” says Eldeeb, referring to the 3D body composition avatar. Unlike Amazon’s version, there’s also no problematic slider that manipulates your avatar’s weight based on differing body fat percentages. “It’s totally optional. You can still do baseline classes and still be able to calibrate everything and have the whole experience.” Tempo’s main schtick is it wants to be an at-home connected fitness platform that’s also affordable and accessible while leveraging AI. Image: Tempo The idea of an at-home system that’s portable and adaptive is a tantalizing one. It’s one of the reasons why the $495 Tempo Move was an interesting proposition. 
Not only did it have a sleek design, but it was ambitious and innovative in using smart weights and the iPhone’s TrueDepth cameras to deliver real-time feedback. The only issue, in my testing, was that connectivity between the device and my TV was wonky. Plus, despite the Move’s space-saving design, it still took up too much space in my cramped apartment. It’s intriguing to see Tempo go in on the at-home fitness space at a less-than-auspicious time...