US securities regulator 'disappointed' with defeat over Ripple's XRP - Reuters
WASHINGTON, July 17 (Reuters) - The head of the U.S. securities regulator said Monday the agency was "disappointed" with a judge's recent ruling that Ripple Labs Inc did not violate federal securities laws, a major blow to the agency's efforts to rein in the cryptocurrency sector.

U.S. Securities and Exchange Commission Chair Gary Gensler said his agency was still assessing the court's decision but was pleased with the portion of the ruling in which the judge held that Ripple's direct sales of XRP tokens to sophisticated investors violated securities laws.

Gensler also said agency staff were still working on a much-anticipated climate disclosures rule and were now developing recommendations on regulating artificial intelligence, which he said posed risks to investors and financial stability.

In a landmark victory for the cryptocurrency industry, a U.S. judge ruled July 13 that Ripple did not break federal securities laws by selling XRP on public exchanges, a decision that sent the value of the token soaring. While the decision is specific to this case, it will likely provide ammunition for other crypto firms battling the SEC over whether their products fall under the regulator's jurisdiction. The SEC has sued a number of crypto firms in recent months, arguing that most crypto tokens are securities that should be registered with the agency.

Gensler said the SEC would need "new thinking" to confront challenges to financial stability presented by the use of technologies such as predictive analytics and machine learning. His remarks are part of a broader U.S. government effort to promote what officials call "responsible" innovation while also managing what they say are threats the emerging technology poses to public safety.

If a trading platform's AI system considers the interests of both the platform and its customers, "this can lead to conflicts of interest," Gensler said, according to a copy of prepared remarks, adding that he had tasked SEC staff with recommending new regulatory proposals to address this. AI could also amplify the world financial system's interconnectedness, something for which current risk management models may not be prepared, Gensler said. "Many of the challenges to financial stability that AI may pose in the future ... will require new thinking on system-wide or macro-prudential policy interventions."

Gensler's remarks echoed statements he has made in recent months on managing risks created by the use of AI in finance. According to the SEC's most recent agenda for developing new regulations, officials are considering possible rule proposals, which could be unveiled later this year, to govern the potential for conflicts of interest in the use of AI and machine learning by investment advisers and broker-dealers.

The agenda also updated the possible timeline for finalizing a rule governing corporate disclosures to investors of greenhouse gas emissions and climate risks, saying the rule could be finalized in October. However, Gensler said this was not hard and fast. "We've got some work still to do," Gensler said. "I don't have a time. It's really when the staff is ready and when the Commission is ready."

Reporting by Douglas Gillison, Andrea Shallal and Hannah Lang in Washington
Editing by Matthew Lewis, David Evans and Nick Zieminski
Cathie Wood's ARK AI Bets Move Past Nvidia (NVDA) as 'Obvious' Buy - Bloomberg
The secret to enterprise AI success: Make it understandable and trustworthy - VentureBeat
July 16, 2023 8:20 AM

The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other forms of machine learning systems to complement their workflows and save time for more pressing or high-value tasks. But it's all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not prone to hallucinations?

In the field of healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could produce results that end up harming a person or worse. Nobody would want that.

This is where the concept of AI interpretability comes in: the process of understanding the reasoning behind decisions or predictions made by machine learning systems and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes. When done right, it can help teams detect unexpected behaviors and get rid of the issues before they cause real damage. But that's far from being a piece of cake.

First, let's understand why AI interpretability is a must

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important to ensure transparency and accountability in the systems being used. Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines. Accountability ensures that the gaps identified are addressed on time. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnoses and autonomous driving, where an AI's decision can have far-reaching consequences.

Beyond this, AI interpretability also helps establish trust in and acceptance of AI systems. When individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust their predictions and answers, resulting in widespread acceptance and adoption. More importantly, when explanations are available, it is easier to address questions of ethical and legal compliance, be it over discrimination or data usage.

AI interpretability is no easy task

While there are obvious benefits of AI interpretability, the complexity and opacity of modern machine learning models make it one hell of a challenge. Most high-end AI applications today use deep neural networks (DNNs) that employ multiple hidden layers to enable reusable modular functions and deliver better efficiency in utilizing parameters and learning the relationship between input and output. DNNs easily produce better results than shallow neural networks — often used for tasks such as linear regressions or feature extraction — with the same amount of parameters and data. However, this architecture of multiple layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a model's decision.
In contrast, shallow networks, with their simple architecture, are highly interpretable.

[Figure: The structure of a deep neural network (DNN). Image by author]

To sum up, there's often a trade-off between interpretability and predictive performance. If you go for high-performing models like DNNs, the system may not deliver transparency, while if you go for something simpler and interpretable, like a shallow network, the accuracy of results may not be up to the mark. Striking a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

What can be done?

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear mode...
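The trade-off described above is easy to see in practice. Below is a minimal, hypothetical sketch using scikit-learn (a library the article does not name, so treat it as an assumption): a depth-limited decision tree whose entire decision process can be printed as plain if/else rules, the kind of audit a deep network does not readily permit.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A small tabular dataset standing in for a real clinical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Cap the depth so the entire decision process stays small enough to audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as if/else rules a reviewer can read.
print(export_text(tree, feature_names=list(X.columns)))
```

A deeper tree or a neural network would likely score higher on held-out data, but its reasoning could no longer be printed and reviewed this way, which is exactly the trade-off the article describes.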
Your employer is (probably) unprepared for artificial intelligence - The Economist
Jul 16th 2023 | Boston and Tokyo

To understand the impact that artificial intelligence may have on the economy, consider the tractor. Historians disagree about who invented the humble machine. Some say it was Richard Trevithick, a British engineer, in 1812. Others argue that John Froelich, working in South Dakota in the early 1890s, has a better claim. Still others point out that few people used the word "tractor" until the start of the 20th century. All agree, though, that the tractor took a long time to make a mark. In 1920 just 4% of American farms had one. Even by the 1950s fewer than half had tractors.

Speculation about the consequences of AI — for jobs, productivity and quality of life — is at fever pitch. The tech is awe-inspiring. And yet AI's economic impact will be muted unless millions of firms beyond Silicon Valley adopt it. That would mean far more than using the odd chatbot. Instead, it would involve the full-scale reorganisation of businesses and their in-house data. "The diffusion of technological improvements", argues Nancy Stokey of the University of Chicago, "is arguably as critical as innovation for long-run growth."

The importance of diffusion is illustrated by Japan and France. Japan is unusually innovative, producing on a per-person basis more patents a year than any country bar South Korea. Japanese researchers can take credit for the invention of the QR code, the lithium-ion battery and 3D printing. But the country does a poor job of spreading new tech across its economy. Tokyo is far more productive than the rest of the country. Cash still dominates. In the late 2010s only 47% of large firms used computers to manage supply chains, compared with 95% in New Zealand. According to our analysis, Japan is roughly 40% poorer than would be expected based on its innovation.

France is the opposite. Although its record on innovation is average, it is excellent at spreading knowledge across the economy. In the 18th century French spies stole engineering secrets from Britain's navy. In the early 20th century Louis Renault visited Henry Ford in America, learning the secrets of the car industry. More recently, former AI experts at Meta and Google founded Mistral AI in Paris. France also tends to do a good job of spreading new tech from the capital to its periphery. Today the productivity gap in France between a top and a middling firm is less than half as big as in Britain.

During the 19th and 20th centuries businesses around the world became more "French", with new technologies diffusing ever faster. Diego Comin and Martí Mestieri, two economists, find evidence that "cross-country differences in adoption lags have narrowed over the last 200 years." Electricity swept across the economy faster than tractors. It took just a couple of decades for personal computing in the office to cross the 50% adoption threshold. The internet spread even faster. Overall, the diffusion of technology helped propel productivity growth during the 20th century.

Since the mid-2000s, however, the world has been turning Japanese. True, consumers adopt technology faster than ever. According to one estimate TikTok, a social-media app, went from zero to 100m users in a year. ChatGPT itself was the fastest-growing web app in history until Threads, a rival to Twitter, launched this month. But businesses are increasingly cautious. In the past two decades all sorts of mind-blowing innovations have come to market.
Even so, according to the latest official estimates, in 2020 just 1.6% of American firms employed machine learning. In America's manufacturing sector just 6.7% of companies make use of 3D printing. Only 25% of business workflows are on the cloud, a number that has not budged in half a decade.

Horror stories abound. In 2017 a third of Japanese regional banks still used COBOL, a programming language invented a decade before man landed on the moon. Last year Britain imported more than £20m ($24m) worth of floppy disks, MiniDiscs and cassettes. A fifth of rich-world firms do not even have a website. Governments are often the worst offenders — insisting, for instance, on paper forms. We estimate that bureaucracies across the world spend $6bn a year on paper and printing, about as much in real terms as in the mid-1990s.

Best and the rest

The result is...
AI Quiz: Can you tell which person is real? - bbc.com
How much do you know about Artificial Intelligence? As the technology rapidly advances, test your knowledge of how AI affects life now and its possible impacts in the near future. Compiled by Jamie Moreland.
South Korea's Naver bets on generative AI as Google encroaches - Nikkei Asia
Partnership with Samsung to develop chips for data processing servers

Naver is leveraging a massive database of Korean-language content to develop its generative AI technology. (Photo by Kotaro Hosokawa)

KOTARO HOSOKAWA, Nikkei staff writer
July 17, 2023 09:55 JST

SEOUL -- South Korean tech group Naver will develop generative AI technology in a bid to retool as Google cuts into its core domestic search engine business. The artificial intelligence model, HyperCLOVA X, will be released this summer as part of its existing internet services. The company will also release a version for businesses to support operational efficiency.
WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks - The Hacker News
Jul 15, 2023 | Artificial Intelligence / Cyber Crime

With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors to their own advantage, enabling avenues for accelerated cybercrime.
According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks.
"This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities," security researcher Daniel Kelley said. "Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack."
The author of the software has described it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff." In the hands of a bad actor, tools like WormGPT could be a powerful weapon, especially as OpenAI's ChatGPT and Google's Bard increasingly take steps to combat the abuse of large language models (LLMs) to fabricate convincing phishing emails and generate malicious code.
"Bard's anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT," Check Point said in a report this week. "Consequently, it is much easier to generate malicious content using Bard's capabilities." Earlier this February, the Israeli cybersecurity firm disclosed how cybercriminals are working around ChatGPT's restrictions by taking advantage of its API, not to mention trade stolen premium accounts and selling brute-force software to hack into ChatGPT accounts by using huge lists of email addresses and passwords.
The fact that WormGPT operates without any ethical boundaries underscores the threat posed by generative AI, permitting even novice cybercriminals to launch attacks swiftly and at scale without the technical wherewithal to do so.

Making matters worse, threat actors are promoting "jailbreaks" for ChatGPT: specialized prompts and inputs designed to manipulate the tool into generating output that could involve disclosing sensitive information, producing inappropriate content and executing harmful code.
"Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious," Kelley said.
"The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals."
The disclosure comes as researchers from Mithril Security "surgically" modified an existing open-source AI model known as GPT-J-6B to make it spread disinformation, then uploaded it to a public repository like Hugging Face, from which it could then be integrated into other applications, leading to what's called LLM supply chain poisoning.
The success of the technique, dubbed PoisonGPT, banks on the prerequisite that the lobotomized model is uploaded under a name that impersonates a known organization, in this case a typosquatted version of EleutherAI, the group behind GPT-J.
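Given that PoisonGPT relied on a lookalike publisher name, one practical mitigation is to verify a model's provenance before loading it. The sketch below is illustrative only, using the huggingface_hub client; the model id, trusted author, and revision values are assumptions for demonstration, not a prescription from the article.

```python
from huggingface_hub import model_info

MODEL_ID = "EleutherAI/gpt-j-6b"   # the model family PoisonGPT impersonated
TRUSTED_AUTHOR = "EleutherAI"      # the publisher we expect to see
PINNED_REVISION = "main"           # in practice, pin an exact commit hash

info = model_info(MODEL_ID, revision=PINNED_REVISION)

# Refuse to proceed if the publisher is not the organization we expect;
# a typosquatted lookalike org would fail this check.
if info.author != TRUSTED_AUTHOR:
    raise RuntimeError(f"Unexpected publisher {info.author!r} for {MODEL_ID}")

print(f"OK: {MODEL_ID} @ {info.sha} published by {info.author}")
```

Pinning a specific commit hash rather than a branch name also guards against a trusted repository being silently updated with a poisoned checkpoint.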
AI and You: Sarah Silverman Calls Out AI Funny Business, Ikea Rethinks the Couch - CNET
It's only funny until someone loses an eye. Or in the case of conversational AI companies, until copyright holders say they're not OK with having their work used without permission to train the large language models powering today's generative AI giants.

This week, comedian Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, filed a lawsuit against OpenAI, creator of ChatGPT, and Meta, which developed the AI model called LLaMA. The suit alleges that the AI systems were trained on the authors' copyrighted works, likely taken from pirated digital-book collections known as "shadow libraries," the Associated Press reports.

"The OpenAI suit notes that a request to ChatGPT to summarize Silverman's book 'The Bedwetter' returns a detailed summary of the book, and asserts that it wouldn't be possible to provide a summary of that variety without having the full text in the training model," according to Barron's. "Most large language model creators provide little data on the underlying data powering their models." Meta and OpenAI declined to comment to the AP and Barron's.

This isn't the first time authors have called out AI companies for potentially stealing their work without compensation. Last month, best-selling authors including Margaret Atwood and Nora Roberts signed an open letter from the Authors Guild to the CEOs of Google, IBM, OpenAI, Meta and Microsoft calling out the "inherent injustice in exploiting our works as part of your AI systems without our consent, credit or compensation."

"Millions of copyrighted books, articles, essays and poetry provide the 'food' for AI systems, endless meals for which there has been no bill. You're spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited," the open letter says.

Courts will have to decide whether AI systems ingesting copyrighted materials qualifies as "fair use." But in the meantime, expect other copyright holders to bring similar challenges. Here are the other doings in AI worth your attention.

FTC investigates ChatGPT over consumer data

In a scoop this week, The Washington Post reported that the US Federal Trade Commission has opened an "expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul of consumer protection laws by putting personal reputations and data at risk." The investigation involves personal privacy information, data security practices, and how OpenAI handles complaints that its chatbot makes "false, misleading or disparaging" statements about real individuals, according to a 20-page demand for records by the FTC that was shared by the Post.

The FTC declined to comment to the Post, but OpenAI CEO Sam Altman tweeted this week that he was disappointed that the FTC's request for information about its business practices started with a "leak" to the newspaper. "That said, it's super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC," Altman tweeted.
"We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it," Altman said in another tweet. "We protect user privacy and design our systems to learn about the world, not private individuals." AI detectors are biased, easy to fool One of the more popular guessing games online these days is whether something was written by a human or by AI. A group of researchers from Stanford University set out to test generative AI "detectors" to see if they could tell the difference. "The research team was surprised to find that some of the most popular GPT detectors, which are built to spot text generated by apps like ChatGPT, routinely misclassified writing by non-native English speakers as AI generated, highlighting limitations and biases users need to be aware of," CNE...
AI signals vs. human intuition: Decision-making in crypto trading - Cointelegraph
Traditionally, traders have relied on human-based pattern recognition and technical analysis, looking at a company's financial health, competitors and other methods for determining what trades to make on an asset. However, with the growth of artificial intelligence (AI), there are additional ways that traders can analyze the markets, using data gathered via machine learning. Both methods have their place in the industry, but it is best to understand how they work and their benefits and drawbacks.

AI plays a crucial role in cryptocurrency trading by providing insights and predictions based on vast amounts of data. Cryptocurrency markets are highly volatile and operate 24/7, making it challenging for traders to keep up with the constant fluctuations. AI algorithms can analyze and interpret complex market data in real time, enabling traders to make informed decisions and maximize their chances of profitable trades. AI utilizes advanced data analysis techniques and pattern recognition to understand and predict market trends. By employing AI-based trading algorithms and platforms, traders can gain insights, automate trading strategies and potentially improve their overall trading performance in the cryptocurrency markets.

The role of human intuition in decision-making

Human intuition involves making decisions based on instinct, gut feelings and personal judgement. It plays a significant role in decision-making processes across various domains, including trading. Intuition involves tapping into unconscious knowledge, experience and emotions to make judgements.

Traditional human-based trading methods include technical and fundamental analysis. Technical analysis involves studying historical price and volume data to identify patterns, trends and indicators to guide trading decisions. Traders using technical analysis rely on charts, graphs and mathematical tools to predict future price movements and make buy or sell decisions. Fundamental analysis focuses on evaluating the intrinsic value of an asset by analyzing relevant financial, economic and qualitative factors. This approach involves studying financial statements, company news, industry trends and macroeconomic indicators to assess an asset's value and potential growth.

Anthony Cerullo, chief communications strategist at Walbi — an AI-powered decentralized finance platform — told Cointelegraph, "We can all agree that AI lacks human intuition. It lacks that 'gut feeling' that says when something is right or wrong. In terms of quantitative analysis in trading, that gut feeling is useful."

Cerullo continued, "Human intuition helps to provide a subjective understanding of market dynamics, investor sentiment and potential opportunities that are not captured solely through numerical data."

However, the benefits of human intuition don't make AI obsolete, according to Cerullo. Instead, a relationship combining the two may be beneficial: "This is not to say human intuition is better than AI — just that it can do things AI cannot do. Furthermore, AI can do things humans are not capable of either. That's why a relationship between the two — and not a competition — is the best possible outcome."

Comparing AI and human intuition

AI signals offer distinct advantages in trading, including speed, scalability and the ability to reduce emotional bias. AI algorithms excel at processing and analyzing large volumes of data in real time.
This enables traders to swiftly respond to market changes and execute trades at optimal times. In highly volatile markets, where prices can fluctuate rapidly, the speed advantage of AI signals can be particularly valuable. Traders can capitalize on timely opportunities and make informed decisions without being hindered by delays in data analysis.

Scalability is another notable advantage of AI signals. These algorithms can be scaled to analyze multiple cryptocurrencies or markets simultaneously. This scalability empowers traders to monitor and trade across various markets, expanding their trading opportunities and potential profits.

[Image: As AI gains popularity, a variety of supposedly AI-driven trading bots have appeared. Source: Twitter]

AI signals also offer the benefit of reducing emotional bias i...
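For readers unfamiliar with the rule-based side of this comparison, here is a minimal sketch of the sort of technical-analysis signal described above: a moving-average crossover. Pandas and NumPy are assumed tooling (the article names none), and the synthetic prices are placeholders; an AI system would instead learn its signal from the same kind of price history.

```python
import numpy as np
import pandas as pd

def crossover_signal(prices: pd.Series, fast: int = 10, slow: int = 50) -> pd.Series:
    """Return +1 (long) when the fast moving average is above the slow one, else -1."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    # During the warm-up window the averages are NaN, so the signal defaults to -1.
    return (fast_ma > slow_ma).astype(int) * 2 - 1

# Illustrative usage on a synthetic random-walk price series;
# a real system would feed in exchange data instead.
prices = pd.Series(100 + np.cumsum(np.random.default_rng(0).normal(size=300)))
print(crossover_signal(prices).tail())
```

The fixed rule makes every decision auditable, which is part of its appeal; the trade-off, as the article notes, is that it cannot adapt the way a learned model or a human's intuition can.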
3 ways AI is already transcending hype and delivering tangible results - VentureBeat
July 15, 2023 8:20 AM

In January 2023, ChatGPT, the now ubiquitous chatbot from OpenAI, reached 100 million active monthly users, outpacing TikTok by seven months as the fastest platform to reach this milestone. The chatbot's ubiquitous presence and popularity have renewed a decades-long debate about the potential impact of artificial intelligence (AI). Google search trends for AI have soared since the service launched, and companies are rushing to lap up domains from Anguilla, population 15,000, looking to benefit from its .ai domain registration. At the same time, investors are pouring money into generative AI startups, hoping to catch lightning in a bottle and capitalize on this technology to find the next big tech breakthrough. As one AI investor recently told the New York Times, "We're in that phase of the market where it's, like, let 1,000 flowers bloom."

Today, the hype cycle is so hot that even companies without legitimate AI credentials are trying to align themselves with the technology, prompting the Federal Trade Commission to issue a terse warning to companies: "If you think you can get away with baseless claims that your product is AI-enabled, think again." The hype cycle can be so ludicrous that Axios reporter Felix Salmon recently explained, "When a company starts talking loudly about its AI abilities, the first question should always be: 'Why is this company talking loudly about its AI abilities?'"

To be sure, this isn't the first rodeo for AI speculation. The technology is more than half a century old, and it's been through many boom and bust cycles that yielded significant technological advances but have continually failed to fully live up to the hype. In other words, developing AI products and services that are repeatable, scalable and sellable has historically been difficult and often prohibitively expensive. However, by looking at the ways AI is already making the most significant impact, we can paint a clearer and possibly more accurate picture of what it will look like moving forward. Here are three ways AI is impacting our world today, which can provide a useful roadmap for how it might actually change the world tomorrow.

1. Helping people make better decisions

In our data-saturated, digital-first world, AI is helping people make better decisions. It can sift through billions of data points, synthesizing key insights and equipping people to make pivotal decisions. In this way, AI is a tool to empower people, sharpening their intuition and enabling them to make more informed decisions. It's not a substitute for human discernment but a powerful aid that enhances our ability to make better real-world choices.

Take the security industry as an example. It's difficult to find a physical space that isn't monitored by at least one camera. The Bureau of Labor Statistics reports that installed surveillance cameras grew by 50% between 2015 and 2018, exceeding 70 million cameras by the decade's end. Monitoring the footage is an entirely different story, and tasking humans with watching endless uninteresting camera feeds is a recipe for boredom and inattention. That's where AI steps in, acting as a vigilant and tireless sentinel.
By filtering out the mundane, such as rustling trees or passing cars, AI zeroes in on the most pertinent information. It's a whiz at detecting anomalies, like unexpected activity in a deserted parking lot. Even so, there is a meaningful difference between identification and discernment. This is where human judgment swoops in: security personnel or other trained experts can review the camera footage and assess and address any suspicious behavior accordingly. In this tandem of AI and human expertise, we witness the birth of a comprehensive and effective security system, reaping the benefits of both worlds. Simply put, AI is a catalyst, powering more efficient and effective decision-making across myriad domains. From interpreting earnings reports to bolstering security intelligence,...
Build a Winning AI Strategy for Your Business - HBR.org Daily
Artificial intelligence is a kind of catalyst; it's the next wave of truly transformative technology with potential we cannot yet fully envision or appreciate. Companies will start by using this new technology to do "old things" before discovering the new opportunities it creates. So, how should they go about this process? They should start by experimenting, deploy for productivity, transform experiences, and then try to build new things. Throughout this process, they should prioritize security and responsible use.

Recently, like millions of people, I used a ride-sharing app on my smartphone. It was pretty uneventful and not something I gave much thought to. Ride-sharing is simple and convenient, and it's now an $80+ billion industry. But it wasn't that long ago that it didn't even exist. We had cars, we had riders, and we had drivers; but to work, ride-sharing needed smartphones. When they arrived, so did an enormous variety of conveniences and new experiences — some that became entire industries — that we never could have imagined.

Artificial intelligence is a similar kind of catalyst. It is the defining technology of our time, changing the way we live and work. In my entire career in tech, I've never been more excited and optimistic than I am now. I have a colleague at Microsoft who talks about AI like this: You've got to use the "new thing" to do old things better. Then, you use the new thing to … do new things. He's right.

Consider an example from health care. Paige is a software company using AI to change the way doctors identify, diagnose, and treat cancers. With properly trained and tuned models, AI can look at thousands of digital pathology images, pixel by pixel, and detect abnormalities faster and with more accuracy. Imagine what these tools can unlock not only for pathologists and doctors, but for patients, too. It means earlier disease detection, healthier lives, and more time with loved ones.

Right now every company, no matter the size or industry, should be thinking about AI. AI is moving from its auto-pilot phase, which was all about narrow, purpose-built tools that use machine learning models to make predictions and recommendations and to automate tasks, to its copilot phase, where there's tremendous opportunity to revolutionize how just about everything gets done. Leaders who embrace AI now and take action to understand it, experiment with it, and envision how it can solve hard problems are going to run companies that thrive in an AI world.

But where should they start? Nearly every day, I talk with business leaders who ask important questions about AI's potential. No matter where you are in your AI journey, it's incumbent upon every leader to embrace this unique time and take advantage of this powerful technology. If you feel unsure how to start, or how to move forward, you're not alone. Like any business-planning exercise, think about your AI strategy in phases. Embrace agility and change, and keep a continuous-learning mindset, calibrating and adjusting your gameplan as you go.

Start by Experimenting

The best way to learn about AI is to use it. It's rare for new and disruptive technology to be immediately accessible. This is. Most of the leaders I talk with have tried popular AI applications like ChatGPT or the new Bing. There are many other options out there, but the point is to get curious.
Try applying it to whatever task is in front of you and see what it's good at and what it's not. Use it to generate interview questions, write a memo, research and summarize a topic you want to learn more about, or get thought starters for a document. I used Bing and ChatGPT to help me get ideas for a speech. I've used Microsoft 365 Copilot, the AI integration across Microsoft apps, to generate slides, to find and summarize documents that share a topic, and to recap email exchanges with colleagues. By using and experimenting with AI, you'll be in a better position to imagine how it could be used in your organization — and you likely know better than anyone where opportunities and potential exist.

Deploy for Productivity

When it comes to productivity, AI copilots — from Microsoft and from others — can be deployed or embedded in applications t...
The Black Mirror plot about AI that worries actors - BBC
[Image caption: Salma Hayek discovers she signed away the rights to her AI likeness in a recent episode of Black Mirror. Source: Alamy]

By Shiona McCallum, Technology reporter

Hollywood actors are striking for the first time in 43 years, bringing the American movie and television business to a halt, partly over fears about the impact of artificial intelligence (AI). The Screen Actors Guild (SAG-AFTRA) actors' union failed to reach an agreement in the US for better protections against AI for its members, and warned that "artificial intelligence poses an existential threat to creative professions" as it prepared to dig in over the issue.

Duncan Crabtree-Ireland, the chief negotiator for the SAG-AFTRA union, criticised producers for their proposals over AI so far. He said studios had asked for the ability to scan the faces of background artists for the payment of one day's work, and then be able to own and use their likeness "for the rest of eternity, in any project they want, with no consent and no compensation".

If that sounds like the plot of an episode of Charlie Brooker's Black Mirror, that's because it is. US media has been quick to point out that the recent series six episode "Joan Is Awful" sees Hollywood star Salma Hayek grapple with the discovery that her AI likeness can be used by a production company without her knowledge.

[Image caption: Harrison Ford was de-aged using computer technology, including machine learning, in the most recent Indiana Jones film. Source: Getty Images]

And it's not just SAG-AFTRA who are concerned about so-called "performance cloning". Liam Budd, of UK acting union Equity, said: "We're seeing this technology used in a range of things like automated audiobooks, synthesised voiceover work, digital avatars for corporate videos, or also the role of deepfakes that are being used in films." Mr Budd said that there was "fear circulating" amongst Equity members and the union was trying to educate them on understanding their rights in this fast-evolving world.

Film-maker and writer Justine Bateman, speaking to the BBC's Tech Life earlier this year, said that she did not think the entertainment industry needed AI at all. "Tech should solve a problem and there's no problem that using AI solves. We don't have a lack of writers, we don't have a lack of actors, we don't have a lack of film-makers - so we don't need AI," she said. "The problem it solves is for the corporations that feel they don't have wide enough profit margins - because if you can eliminate the overhead of having to pay everyone you can appease Wall Street and have greater earnings reports. If AI use proliferates in the entertainment industry, it will crater the entire structure of this business."

Perhaps it is only a question of time before ChatGPT or Bard can conjure up an innovative movie script or turn an idea into a blockbuster screenplay.

[Media caption: Watch: Brian Cox: 'I am concerned about artificial intelligence']

Some say AI will always lack the humanity that makes a film script great, but there are legitimate concerns that it will put writers out of a job.
The Writers' Guild of Great Britain (WGGB) - a trade union representing writers for TV, film, theatre, books and video games in the UK - has several concerns, including:

- AI developers are using writers' work without their permission and infringing writers' copyright
- AI tools do not properly identify where AI has been used to create content
- Increased AI use will lead to fewer job opportunities for writers
- The use of AI will suppress writers' pay
- AI will dilute the contributions made by the creative industry to the UK economy and national identity

The WGGB has made a number of recommendations to help protect writers, including AI developers only using writers' work if they have been given express permission, and AI developers being transparent about what data is being used to train their tools.

WGGB deputy general secretary Lesley Gannon said: "As with any new technology we need to weigh the risks against the benefits and ensure that the speed of development does not outpace or derail the protections that writers and the wider creative workforce rely upon to make a living. Regulation is clearly needed to safeguard workers' rights, and protect audiences from fraud and misinformation." Medi...
Pinecone leads 'explosion' in vector databases for generative AI - VentureBeat
July 14, 2023 9:12 AM

[Image caption: Bob Wiederhold, Pinecone COO, right, speaks with investor Tim Tully at VB Transform on Wednesday. Credit: Michael O'Donnell]

Vector databases, a relatively new type of database that can store and query unstructured data such as images, text and video, are gaining popularity among developers and enterprises who want to build generative AI applications such as chatbots, recommendation systems and content creation. One of the leading providers of vector database technology is Pinecone, a startup founded in 2019 that has raised $138 million and is valued at $750 million. The company said Thursday it has "way more than 100,000 free users and more than 4,000 paying customers," reflecting an explosion of adoption by developers from small companies as well as enterprises that Pinecone said are experimenting like crazy with new applications. By contrast, the company said that in December its free users numbered only in the low thousands, and it had fewer than 300 paying customers.

Pinecone held a user conference on Thursday in San Francisco, where it showcased some of its success stories and announced a partnership with Microsoft Azure to speed up generative AI applications for Azure customers.

Bob Wiederhold, the president and COO of Pinecone, said in his keynote talk that generative AI is a new platform that has eclipsed the internet platform, and that vector databases are a key part of the solution to enable it. He said the generative AI platform is going to be even bigger than the internet, and "is going to have the same and probably even bigger impacts on the world."

Vector databases: a distinct type of database for the generative AI era

Wiederhold explained that vector databases allow developers to access domain-specific information that is not available on the internet or in traditional databases, and to update it in real time. This way, they can provide better context and accuracy for generative AI models such as ChatGPT or GPT-4, which are often trained on outdated or incomplete data scraped from the web. Vector databases enable semantic search: any kind of data is converted into vectors, which support "nearest neighbor" search. You can use this information to enrich the context window of prompts. This way, "you will have far fewer hallucinations, and you will allow these fantastic chatbot technologies to answer your questions correctly, more often," Wiederhold said.

Wiederhold's remarks came after he spoke Wednesday at VB Transform, where he explained to enterprise executives how generative AI is changing the nature of the database, and why at least 30 vector database competitors have popped up to serve the market.

Wiederhold said that large language models (LLMs) and vector databases are the two key technologies for generative AI. Whenever new data types and access patterns appear, assuming the market is large enough, a new subset of the database market forms, he said.
That happened with relational databases and NoSQL databases, and it is happening now with vector databases, he said. Vectors are a very different way to represent data, and nearest neighbor search is a very different way to access data, he said. He explained that vector databases have a more efficient way of partitioning data based on this new paradigm, and so are filling a void that other databases, such as relational and NoSQL databases, are unable to fill.

He added that Pinecone has built its technology from scratch, without compromising on performance, scalability or cost. He said that only by building from scratch can you have the lowest latency, the highest ingestion speeds and the lowest cost of implementing use cases. He also said that the winning database providers are going to be the ones that have built the best managed services for the cloud...
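The "nearest neighbor" search described above is simple to sketch in miniature. The toy example below uses NumPy cosine similarity over random placeholder embeddings; it is a stand-in for what a vector database such as Pinecone does at scale behind a hosted API, not a depiction of Pinecone's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 documents embedded as 384-dim unit vectors. Random placeholders here;
# a real system would use embeddings produced by a language model.
corpus = rng.normal(size=(1000, 384))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def nearest(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query."""
    query = query / np.linalg.norm(query)
    scores = corpus @ query            # cosine similarity, since vectors are unit length
    return np.argsort(scores)[::-1][:k]

query = rng.normal(size=384)
print(nearest(query))  # ids of documents to place in the prompt's context window
```

A dedicated vector database replaces this brute-force scan with approximate-nearest-neighbor indexes so the same lookup stays fast across billions of vectors, which is the partitioning advantage Wiederhold describes.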
Alphabet shares soar after it expands AI chatbot internationally - Reuters
July 13 (Reuters) - Shares in Google parent Alphabet Inc (GOOGL.O) were up 4.9% on Thursday after it said it was rolling out its artificial-intelligence chatbot Bard in Europe and Brazil, easing worries about overseas regulatory issues.

The stock last traded at $124.73 and was on track for its biggest one-day percentage gain since early February, when the company announced the product. The shares also hit their highest point since mid-June during the session. Alphabet shares were outperforming the broader market, with the S&P 500 (.SPX) up 0.6%, boosted by data showing signs of cooling inflation.

Bard's launch in the European Union had been held up until now by local privacy regulators, but Google said it had met watchdogs to reassure them on issues relating to transparency, choice and control.

Danni Hewson, head of financial analysis at investment firm AJ Bell, attributed Thursday's rally to the launch in Europe and Brazil and Bard's expansion into new languages. "There were some concerns about data, about privacy. Clearly they've been able to reassure European regulators about those issues, which just paves the way for further advantage really," said Hewson.

Art Hogan, chief market strategist at B Riley Wealth, also attributed Thursday's rally to Bard's Europe and Brazil launch, which he said "marks the product's most significant expansion since its February launch and pits it against Microsoft Corp." Microsoft (MSFT.O), the backer of the rival AI chatbot ChatGPT, was up 1.1% on Thursday.

Alphabet shares, which have seen a huge boost from investor excitement around generative artificial intelligence since February, are up around 41% so far this year. Microsoft shares are up 42% so far in 2023. Also on Thursday, TD Cowen raised its price target for Alphabet shares to $140 from $130, citing expectations of better growth in its search business.

Reporting by Bansari Mayur Kamdar in Bengaluru; writing by Sinéad Carew in New York; Editing by Conor Humphries
Stability AI Cofounder Says Emad Mostaque Tricked Him Into Selling Stake For $100 - Forbes
[Image: Stability CEO Emad Mostaque at an event. © 2022 Bloomberg Finance LP]

A second person claiming to be a cofounder of the generative AI startup Stability AI has taken the company and its CEO Emad Mostaque to court in a lawsuit that contains allegations of fraud and embezzlement.

Cyrus Hodes, an artificial intelligence entrepreneur, filed the lawsuit on Thursday in the U.S. District Court for the Northern District of California. It alleges that he was misled by Mostaque into believing his 15% stake in the business was "worthless" just months before Stability AI's August 2022 fundraise of more than $100 million at a $1 billion valuation from venture capital firms Coatue and Lightspeed. The company, which is best known for its association with the viral text-to-image generation system Stable Diffusion, has since emerged as one of the leading public faces of the generative AI boom.

In the suit, Hodes says he sold his stake to Mostaque for $100 across two transactions in October 2021 and May 2022. In doing so, he alleges, he was "fraudulently cheated" out of equity that within months would have been worth $150 million on an undiluted basis, because Mostaque withheld information about active fundraising attempts and a planned business pivot into AI text-to-image generation.

The suit further accuses Mostaque of destroying evidence after being notified by Hodes' counsel of potential legal action in December 2022. Mostaque wrote in a March 2023 Twitter post that his "WhatsApp got deleted," which the complaint alleges to have been intentional. Avi Weitzman, a partner at law firm Paul Hastings who is representing Hodes, described the maneuver as a "shocking swindling" in a statement shared with Forbes. "This was corporate greed at its worst. We look forward to a full airing of Defendants' wrongdoing in Court," he wrote. Stability AI was unable to provide comment at the time of publication.

The lawsuit also claims that Mostaque "embezzled funds" from external investors to pay the rent for his family's London apartment and his children's schooling. It cites an unnamed "former investor in one of Mostaque's businesses" who disclosed to Weitzman that Mostaque "has a pattern of bamboozling investors and misappropriating investor and company assets for personal use." The suit further alleges that Mostaque himself acknowledged to Hodes in 2021 "that he had improperly used company funds" — obtained from the Trinity Challenge, an organization which provides grants to health-related projects — for personal use. Forbes previously reported that several Stability AI employees had voiced concerns about tens of thousands of dollars being transferred from Stability AI's corporate accounts to the personal bank account of Mostaque's wife Zehra Qureshi, the company's former head of PR. Motez Bishara, a spokesperson for the company, said at the time that the couple had made loans to the business and that sums owed to, or by, the couple were settled by the end of 2022.
Hodes was an early collaborator with Mostaque on a business idea of using artificial intelligence to help governmental agencies address the Covid-19 pandemic. He appears alongside Mostaque in a virtual event for the project’s launch on YouTube. But the business failed to take off and was scrapped the following year. “Lots of people promised a lot and they didn’t come through,” Mostaque previously told Forbes. Hodes’ lawsuit alleges that a key reason for the project’s failure was that Mostaque was “secretly diverting” his attention and company resources to Stability’s AI image generation efforts.
The case has similarities to another lawsuit leveled against the startup. In May, Tayab Waseem claimed that Mostaque reneged on an agreement to grant him a 10% cofounder stake in the company. That suit, which was first reported by Vice, was voluntarily dismissed by Waseem on the same day. (Stability AI previously did not comment on Waseem's suit.)
Waseem’s complaint also portrays Hodes as a cofounder of Stability. It includes a slide from a company investment pitch deck which states that “Emad [Most...
Hollywood studios proposed AI contract that would give them likeness rights 'for the rest of eternity' - The Verge
During today's press conference in which Hollywood actors confirmed that they were going on strike, Duncan Crabtree-Ireland, SAG-AFTRA's chief negotiator, revealed a proposal from Hollywood studios that sounds ripped right out of a Black Mirror episode. In a statement about the strike, the Alliance of Motion Picture and Television Producers (AMPTP) said that its proposal included "a groundbreaking AI proposal that protects actors' digital likenesses for SAG-AFTRA members."

When asked about the proposal during the press conference, Crabtree-Ireland said: "This 'groundbreaking' AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day's pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that's a groundbreaking proposal, I suggest you think again."

The use of generative AI has been one of the major sticking points in negotiations between the two sides (it's also a major issue behind the writers strike), and in her opening statement of the press conference, SAG-AFTRA president Fran Drescher said that "if we don't stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines." The SAG-AFTRA strike will officially commence at midnight tonight.

Disclosure: The Verge's editorial staff is also unionized with the Writers Guild of America, East.
Google Calendar Extension 'Reclaim AI' Goes Viral on TikTok - Entrepreneur
If you think life would be easier with a personal assistant, one TikToker may have cracked the code on how to get a virtual one for practically free. TikToker Izzy Mignone is going viral for a clip in which she shows an AI extension for Google Calendar called Reclaim AI that helps schedule (and reschedule) your calendar based on the events you input and their time restrictions, whether recurring weekly appointments or one-time events (like a party). "I think it is like the organization hack that my ADHD neurodivergent brain has been dreaming of," Mignone told viewers. "I can use natural language, and it'll pull out the data from whatever I type in. It's like having your own personal assistant." Once the tool collects the data, it will set off blocks of time and create a mock schedule for you that's directly integrated into your Google Calendar. The clip, which has been viewed more than 339,600 times, stunned viewers who were in disbelief that such a plugin existed. "Thank you!!! I needed something like this," one user exclaimed. "And I didn't even know!!!!!!" Google's official TikTok account even commented on Mignone's post. "Such a smart idea," the company wrote. Mignone clarified in the comment section that the version she uses has a fee. According to Reclaim AI's website, the app ranges from a free version that can plan out three weeks at a time for one user to an $18 version that supports 100-plus users and can plan out 12 weeks at a time.
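Reclaim AI hasn't published how its scheduler works, but the behavior the clip describes (take a task, respect its time restrictions, and carve out the first free block around existing events) can be sketched in a few lines. The following is a minimal, hypothetical Python illustration; the 9-to-5 working window, the sample events, and the find_slot helper are assumptions made for the example, not Reclaim AI's actual code or API. A real integration would read and write events through the Google Calendar API rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Existing commitments as (start, end) pairs. These are stand-ins for
# events a real integration would fetch from the Google Calendar API.
busy = [
    (datetime(2023, 7, 17, 9, 0), datetime(2023, 7, 17, 11, 0)),    # team sync
    (datetime(2023, 7, 17, 13, 0), datetime(2023, 7, 17, 14, 30)),  # 1:1
]

def find_slot(duration, day_start, day_end, busy):
    """Return the first free window of `duration` between day_start and
    day_end that does not overlap any busy block: a toy version of
    'set off blocks of time and create a mock schedule'."""
    cursor = day_start
    for start, end in sorted(busy):
        if cursor + duration <= start:   # the gap before this event fits
            return cursor, cursor + duration
        cursor = max(cursor, end)        # otherwise skip past the busy block
    if cursor + duration <= day_end:
        return cursor, cursor + duration
    return None                          # no room left today

# A "natural language" request reduced to a parsed (title, minutes) pair;
# the NLP step that extracts this is where a tool like Reclaim AI earns its fee.
task_title, task_minutes = "Write project update", 45
slot = find_slot(timedelta(minutes=task_minutes),
                 datetime(2023, 7, 17, 9, 0),
                 datetime(2023, 7, 17, 17, 0),
                 busy)
print(task_title, "->", slot)  # lands in the 11:00-11:45 gap between events
```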
AI trend drives rise in students wanting to study computing - bbc.com
By Shiona McCallum and Chris Vallance, technology reporters. School-leavers are choosing computing courses in record numbers, according to the Universities and Colleges Admissions Service (UCAS). This year's application data showed 18-year-olds were increasingly inspired to study computing "thanks to the rise of digital and AI", UCAS chief executive Clare Marchant said. Applications to study computing were up almost 10% compared with 2022. However, it was only the seventh most popular area of higher education study. While nearly 95,000 students applied for computer- and AI-related courses, almost twice that number applied to study business and management. More than 125,000 applied for design, creative and performing arts courses. Subjects allied to medicine, social sciences, biological and sports sciences, and engineering and technology were all more popular than computing. However, the numbers applying for computer-related courses have risen every year since 2019, UCAS said. This year software engineering saw the steepest rise in applications, up 16% compared with last year. Computer science attracted 11% more applicants. There was a 2% rise in students applying to study computer games and animation, and a 4% rise in artificial intelligence (AI). The increased interest in computing courses may in part be down to a growing public conversation around technology and artificial intelligence, Ms Marchant said. "We know that changes in the world around us translate into increased demand for certain courses, as we saw for economics post-2008, and for medicine and nursing during the Covid-19 pandemic," she said. Chris Derrick, deputy headteacher at Kelvinside Academy in Glasgow, said pupils applying for computing courses now were all "digital natives" who have "honed and developed these skills from a young age using powerful tech every day". "Programming knowledge is also so accessible via YouTube and ChatGPT," he said. "Pupils can explore their passions and learn at pace. If they don't have an answer, Google and YouTube will." While much of the recent public discussion has been about which jobs will be replaced by AI, there is also a growing number of employment opportunities related to AI, data science, software design and computing technologies. There was also an increase in the number of applications by UK 18-year-olds from the most disadvantaged backgrounds, UCAS said. However, computing remains a male-dominated subject, with only 18% of applications for computer-related studies coming from female students, up slightly from 17% in 2022 and 16% in 2021. The total number of UK 18-year-old applicants was over 319,500, the second highest on record and a slight decrease on last year. Rashik Parmar, chief executive of BCS, The Chartered Institute for IT, said: "Teenagers in the UK know that AI will change the world forever; it shouldn't surprise us to see this soaring demand for computing degrees". Vanessa Wilson of the University Alliance, an association of British universities, agreed that greater public interest in AI in recent months might have contributed to more interest from applicants. "The rise in the popularity of computing may well be a response to increasing awareness of the role of technologies such as AI, as well as a strong desire from students to develop what they see as future-proof skills," she said.
Indian developer fired 90 percent of tech support team, outsourced the job to AI - The Register
Here's a story from the Department of Massive and Terrifying Irony: an Indian software startup struggled to afford its customer support team, so it outsourced the job – to an AI chatbot that proved more efficient and cheaper.
The developer is called Dukaan and offers a platform that it promises allows rapid deployment of online stores.
Founder Suumit Shah took to Twitter to reveal that the change to robo-service saw time to first response fall – from a minute and 44 seconds to zero. Resolution time plunged as well – from two hours and 13 minutes when humans were doing it, down to three minutes and 12 seconds with AI on the job. Overall customer support costs dropped by around 85 percent. Shah detailed how Dukaan struggled to hire people with the skills to work as support agents. "It's like – Lionel Messi doing a full time job at Decathlon, though the theory has some merit, but is ultimately flawed," he wrote. The founder explained his startup developed its own AI, and linked to Dukaan's AI lead Ojasvi Yadav, who shared scant details of the build. Yadav wrote: "As an AI practitioner, I consider this a LLM-library equivalent of working with React devs on your company's own fork, when React was new. Or working with PostGres devs on your company's fork when it was in its initial phases."
Shah's tweets have not gone down well. He described laying off 90 percent of his support team as: “Tough? Yes. Necessary? Absolutely.” Weep, dear readers, for the founders who must now build businesses that are sustainable much earlier in their lives than is possible when cheap, optimistic money is prevalent. Those poor startups must now worry about balancing their books – like almost every other business in history. The AI story is also significant because India has for at least two decades been seen as a source of cheap IT talent that enabled tech companies to reduce their costs. Dukaan's story is a reminder that, while India's workers often still take home much less than IT pros elsewhere, their wages have risen – and AI perhaps represents an even cheaper way to undertake some work.
Yet Dukaan's case study doesn't seem terrifying. Chatbots have been a frontline support tool for over a decade – often deployed to steer customers to self-service options so that remaining staffers can handle the gnarliest inquiries that warrant more expensive human intervention. With Dukaan silent on the details of its rig, there's a chance it's not that radical an AI assault.
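None of this is specific to Dukaan; the frontline pattern described above (answer the easy, well-trodden questions automatically and hand the rest to a person) is easy to picture in code. Below is a minimal, hypothetical Python sketch of a bot that self-serves close FAQ matches and escalates everything else; the FAQ entries, the similarity threshold, and the escalation wording are illustrative assumptions, not details of Dukaan's actual system.

```python
from difflib import SequenceMatcher

# Canned answers for common questions -- hypothetical examples, not
# Dukaan's actual knowledge base.
FAQ = {
    "how do i reset my password": "Use 'Forgot password' on the login page.",
    "how do i add a product to my store": "Open Dashboard > Products > Add.",
}

ESCALATION_THRESHOLD = 0.75  # below this similarity, route to a human

def answer(query: str) -> str:
    """Frontline triage: self-serve close FAQ matches, escalate the rest."""
    query = query.lower().strip()
    best_q = max(FAQ, key=lambda q: SequenceMatcher(None, q, query).ratio())
    score = SequenceMatcher(None, best_q, query).ratio()
    if score >= ESCALATION_THRESHOLD:
        return FAQ[best_q]                       # instant, zero-wait response
    return "Connecting you to a support agent."  # the expensive human path

print(answer("How do I reset my password?"))    # close match -> canned answer
print(answer("My payout is missing for June"))  # no match -> escalation
```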
AI Won't Really Kill Us All, Will It? - The Atlantic
For months, more than a thousand researchers and technology experts involved in creating artificial intelligence have been warning us that they’ve created something that may be dangerous, something that might eventually lead humanity to become extinct. In this Radio Atlantic episode, The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel talk about how seriously we should take these warnings, and what else we might consider worrying about. The following transcript has been edited for clarity. Hanna Rosin: I remember when I was a little kid being alone in my room one night watching this movie called The Day After. It was about nuclear war, and for some absurd reason, it was airing on regular network TV. The Day After: Denise: It smells so bad down here. I can’t even breathe! Denise’s mom: Get ahold of yourself, Denise. Rosin: I particularly remember a scene where a character named Denise—my best friend’s name was Denise—runs panicked out of her family’s nuclear-fallout shelter. The Day After: Denise: Let go of me. I can’t see! Mom: You can’t go! Don’t go up there! Brother: Wait a minute! Rosin: It was definitely, you know, “extra.” Also, to teenage me, genuinely terrifying. It was a very particular blend of scary ridiculousness I hadn’t experienced since—until a couple of weeks ago, when someone sent me a link to this YouTube video with Paul Christiano, who is an artificial intelligence researcher. Paul Christiano: The most likely way we die is not that AI comes out of the blue and kills us, but involves that we’ve deployed AI everywhere. And if, God forbid, they were trying to kill us, they would definitely kill us. Rosin: Christiano was talking on this podcast called Bankless. And then I started to notice other major AI researchers saying similar things: Norah O’Donnell on CBS News: More than 1,300 tech scientists, leaders, researchers, and others are now asking for a pause. Bret Baier on Fox News: Top story right out of a science-fiction movie. Rodolfo Ocampo on 7NEWS Australia: Now it’s permeating the cognitive space. Before, it was more the mechanical space. Michael Usher on 7NEWS Australia: There needs to be at least a six-month stop on the training of these systems. Fox News: Contemporary AI systems are now being human-competitive. Yoshua Bengio talking with Tom Bilyeu: We have to get our act together. Eliezer Yudkowsky on the Bankless podcast: We’re hearing the last winds begin to blow, the fabric of reality start to fray. Rosin: And I’m thinking, Is this another campy Denise moment? Am I terrified? Is it funny? I can’t really tell, but I do suspect that the very “doomiest” stuff at least is a distraction. There are likely some actual dangers with AI that are less flashy but maybe equally life-altering. So today we’re talking to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel, who’ve been researching and tracking AI for some time. ___ Rosin: Charlie, Adrienne—when these experts are saying, “Worry about the extinction of humanity,” what are they actually talking about? Adrienne LaFrance: Let’s game out the existential doom, for sure. [Laughter.] Rosin: Thanks! LaFrance: When people warn about the extinction of humanity at the hands of AI, that’s literally what they mean—that all humans will be killed by the machines. It sounds very sci-fi.
But the nature of the threat is that you imagine a world where more and more we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans. Obviously, humans are flawed. The fear assumes a moment at which AI’s cognitive abilities eclipse our species—and so all of a sudden, AI is really in charge of the biggest and most consequential decisions that humans make. You can imagine they’re making decisions in wartime about when to deploy nuclear weapons—and you could very easily imagine how that could go sideways. Rosin: Wait; but I can’t very easily imagine how that would go sideways. First of all, wouldn’t a human put in many checks before you would give access to a machine? LaFrance: Well, one would hope. But one example would be that you give the AI the imperative to “Win this war, no matter what...
Google’s Bard AI chatbot is now available in the EU - The Verge
Google is adding some new features to its Bard AI chatbot, including the ability for Bard to speak its answers to you and for it to respond to prompts that also include images. The chatbot is also now available in much of the world, including the EU. In a blog post, Google is positioning Bard’s spoken responses as a helpful way to “correct pronunciation of a word or listen to a poem or script.” You’ll be able to hear spoken responses by entering a prompt and selecting the sound icon. Spoken responses will be available in more than 40 languages and are live now, according to Google. The feature that lets you add images to prompts is something that Google first showed off at its I/O conference in May. In one example, Google suggested you could use this to ask for help writing a funny caption about a picture of two dogs. Google says the feature is now available in English and is expanding to new languages “soon.” Google is introducing a few other new features, too, including the ability to pin and rename conversations, share responses with your friends, and change the tone and style of the responses you get back from Bard. Google first opened up access to Bard in March, but at the time, it was available only in the US and the UK. The company has been rolling out the chatbot to many more countries since then, and that now includes “all countries in the EEA [European Economic Area] and Brazil,” Google spokesperson Jennifer Rodstrom tells The Verge. That expansion in Europe is a notable milestone; the company’s planned Bard launch in the EU was delayed due to privacy concerns.
Anthropic’s Claude Is Competing With ChatGPT. Even Its Builders Fear AI. - The New York Times
It’s a few weeks before the release of Claude, a new A.I. chatbot from the artificial intelligence start-up Anthropic, and the nervous energy inside the company’s San Francisco headquarters could power a rocket. At long cafeteria tables dotted with Spindrift cans and chessboards, harried-looking engineers are putting the finishing touches on Claude’s new, ChatGPT-style interface, code-named Project Hatch. Nearby, another group is discussing problems that could arise on launch day. (What if a surge of new users overpowers the company’s servers? What if Claude accidentally threatens or harasses people, creating a Bing-style P.R. headache?) Down the hall, in a glass-walled conference room, Anthropic’s chief executive, Dario Amodei, is going over his own mental list of potential disasters. “My worry is always, is the model going to do something terrible that we didn’t pick up on?” he says. Despite its small size — just 160 employees — and its low profile, Anthropic is one of the world’s leading A.I. research labs, and a formidable rival to giants like Google and Meta. It has raised more than $1 billion from investors including Google and Salesforce, and at first glance, its tense vibes might seem no different from those at any other start-up gearing up for a big launch. But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things. Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us. “Some of us think that A.G.I. — in the sense of systems that are genuinely as capable as a college-educated person — are maybe five to 10 years away,” said Jared Kaplan, Anthropic’s chief scientist. Just a few years ago, worrying about an A.I. uprising was considered a fringe idea, and one many experts dismissed as wildly unrealistic, given how far the technology was from human intelligence. (One A.I. researcher memorably compared worrying about killer robots to worrying about “overpopulation on Mars.”) But A.I. panic is having a moment right now. Since ChatGPT’s splashy debut last year, tech leaders and A.I. experts have been warning that large language models — the A.I. systems that power chatbots like ChatGPT, Bard and Claude — are getting too powerful. Regulators are racing to clamp down on the industry, and hundreds of A.I. experts recently signed an open letter comparing A.I. to pandemics and nuclear weapons. At Anthropic, the doom factor is turned up to 11. A few months ago, after I had a scary run-in with an A.I. chatbot, the company invited me to embed inside its headquarters as it geared up to release the new version of Claude, Claude 2. I spent weeks interviewing Anthropic executives, talking to engineers and researchers, and sitting in on meetings with product teams ahead of Claude 2’s launch. And while I initially thought I might be shown a sunny, optimistic vision of A.I.’s potential — a world where polite chatbots tutor students, make office workers more productive and help scientists cure diseases — I soon learned that rose-colored glasses weren’t Anthropic’s thing. 
They were more interested in scaring me. In a series of long, candid conversations, Anthropic employees told me about the harms they worried future A.I. systems could unleash, and some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history. (“The Making of the Atomic Bomb,” a 1986 history of the Manhattan Project, is a popular book among the company’s employees.) Not every conversation I had at Anthropic revolved around existential risk. But dread was a dominant theme. At times, I felt like a food writer who was assigned to cover a trendy new restaurant, only to discover that the kitchen staff wanted to talk about nothing but food poi...
'Mission: Impossible—Dead Reckoning' Is the Perfect AI Panic Movie - WIRED
American action movie villains have always acted as a sort of paranoia litmus test, capturing a snapshot of the particular anxieties plaguing the country and its citizens at any given time. During the Cold War, movies like From Russia with Love, Rocky IV, and Red Dawn nodded at the public’s fear of wily Soviets, ostensibly hell-bent on ruining the capitalist way of life. In the 1990s and ’00s, with the Red Menace long forgotten, movies leaned heavily on the awful “bad Arab” trope, pulling their villains from the Middle East. Other recent smash-’em-ups have made bad guys out of rogue spies, shadowy cyber terrorists, and self-interested arms dealers, all common players in the global news landscape. But for Mission: Impossible—Dead Reckoning Part One, out this week, writers Bruce Geller, Erik Jendresen, and Christopher McQuarrie (who also directed the movie) made their big bad—known as The Entity—out of a slightly more amorphous fear: that of an all-powerful, all-seeing, sentient AI. It has access to anything with an online network and can use those evil techno powers to manipulate everything from global military superpowers to a grandma with a gun. It’s everywhere and nowhere at once, and although the movie uses Esai Morales’ Gabriel as The Entity’s henchman, he’s a mere mortal—albeit one with access to all the information and decision-making logic the world’s strongest supercomputer has to offer. While the “man vs. machine” trope is nothing new, the idea of a sentient AI coming to take over humanity feels especially prescient and pressing in 2023, when ChatGPT is writing term papers and companies are tasking AI-imbued bots with everything from listicles to tech support. The looming threat of AI-generated content is a big sticking point for the striking members of the Writers Guild of America too, with many wanting to ensure that any new contract they sign includes provisions for how—or whether—studios can use the technology to create scripts. Of course, Dead Reckoning was written years ago. Part One was originally scheduled for release in the summer of 2021, before Covid-19 threw a wrench in the movie’s production calendar. McQuarrie and company simply stumbled into great timing with the movie’s current release date, which comes about six months into America’s newfound obsession with generative AI’s perks and perils and just weeks after Senate Majority Leader Chuck Schumer announced a major congressional push toward AI regulation. Fears of AI’s inevitable takeover are hot right now, even if (or because) the vast majority of Americans don’t know the first thing about how it could actually happen. Perhaps that’s why The Entity works as a villain, even if the way the movie personifies it with swooshing graphics and eye-like optics is a little hokey. Most moviegoers have only had brief dalliances with AI, perhaps through a few minutes spent probing ChatGPT or some backyard BBQ conversation about how Bing’s chatbot went rogue and encouraged a New York Times reporter to leave his wife. There are gaps and technological leaps in how The Entity operates—and a convenient-ish kill switch in a sunken submarine buried under Arctic ice—but none of that really matters if you’re just a schmo looking for something new and mysterious to fear. What’s more, AI is a fairly innocuous foe. 
In an era when action movies can’t just craft a villain from some othered nationality, ethnic group, or fringe political organization, a sentient and speciously evil computer will likely only offend the most adamant of AI defenders, a significant portion of whom already concede that the technology could cause humanity’s extinction. Dead Reckoning—Part One has to become a global box office smash to make its $290 million budget back, and having a faceless foe that basically the whole world can spit at is certainly a step in the right direction. Maybe Mission: Impossible’s Entity is just the harbinger of the future for action movie baddies. Both Heart of Stone and The Creator —which drop in August and September, respectively—feature AI foes hell-bent on global destruction. Humanity will no doubt prevail and endure in both those and Dead Reckoning—at their core, action movies are feel-good romps, after all—but in the meantime, millions of moviegoers can come together, bonded b...
What Are the Best AI-Generated Memes? - The New York Times
It may or may not alter the course of humanity, but at least the memes are fun. By Max Read, July 11, 2023. The A.I. revolution has arrived. After decades of research and countless dead ends, machine learning applications have reached a power and capability once thought unimaginable: Anyone on the planet can now fire up a computer and make an MP3 in which a synthetic voice modeled on President Barack Obama’s says, “My fellow Americans, I am now a catboy.” At least, that’s how Arik Ahmed has been using A.I. Mr. Ahmed is the creator of a series of wildly popular scripted videos in which the A.I.-generated voices of the three most recent U.S. presidents chatter as they play rounds of the popular first-person shooter game Overwatch. “I thought it’d be funny,” Mr. Ahmed said of his creative inspiration. To make the voices for his videos, Mr. Ahmed uses an app called Prime Voice AI, the basic tier of which costs $5 per month. The process is shockingly simple: “I complete a script for my silly little videos, I make my quote-unquote characters say the lines with the A.I. tool, and then I edit it in Adobe Premiere,” Mr. Ahmed says. The result is an absurd and vulgar masterpiece of online content, a series of 45-second sketches in which near-perfect imitations of some of the world’s most recognizable voices trash-talk one another in impenetrable Overwatch jargon. Hearing an uncanny simulacrum of Donald J. Trump’s voice say, “That is so cap, Joe” — Generation Z slang for “you’re full of it” — was the moment I realized A.I. might be the greatest technology ever created for making extremely stupid jokes. Over the past year, Prime Voice AI and other so-called generative, or content-producing, A.I. apps like the image generator app Midjourney and the chatbot ChatGPT have opened up to public use. An urgent, prophetic tone has taken hold in the Twitter threads, Substack newsletters and hectoring newspaper columns through which the thought leaders of Silicon Valley speak to their audiences. Optimists cite scientific advances and other examples of human intelligence and machine intelligence augmenting each other, robots and people walking hand in hand toward the singularity. Critics point toward broken spam bots, mutant disinformation and Kafkaesque A.I. service interactions — human venality and dull machine competency joining forces to make the world confusing and shoddy. So, on one end, field-transforming progress; on the other, failed A.I. spam bots clogging Twitter with the message “I’m sorry, I cannot generate inappropriate or offensive content.” Perhaps instead we should imagine A.I. possibilities on a two-dimensional plot, where one axis runs from “machine stupidity” to “machine intelligence” and the other from “human stupidity” to “human intelligence.” And Mr. Ahmed’s videos? I’d put them in the lower right, the kind of stunning masterpiece you can produce when you combine cutting-edge artificial intelligence technology with advanced human stupidity. The lower-right and upper-left quadrants cover most of what the public has found so engaging about new generative A.I. apps. These quadrants promise neither spiritual transcendence nor existential doom. They are often enlightening and impressive, but also funny, pointless and gleefully stupid. They are what we might call — using the bowdlerized rendering of an unpublishable, extremely online idiom for “making dumb, purposeless jokes” — the Funposting Zone.
Machine Intelligence, Human Stupidity
Not just any A.I.-generated post deserves to be charted in the Funposting Zone. Those that fall short are missing a key ingredient: the conceptual dementedness of average internet users. By contrast, observe a series of images posted to the funposting hot spot r/weirddalle of bees “giving a press conference,” or, elsewhere on Reddit, “Spider-Man from Ancient Rome.” As video-generating A.I. becomes more widespread, more extremely stupid videos will join extremely stupid images. The machines here are not quite as intelligent, as a disturbing video of “Will Smith eating spaghetti” suggests. Deeper into the quadrant, we can find creations even stupider and even more advanced. Near Mr. Ahmed in the lower right we might find a cluster of other creations that make similarly glorious and silly use of A.I. voice generators, like the series of videos in wh...
Google DeepMind CEO Demis Hassabis on ChatGPT, AI, LLMs, and more - The Verge
Today, I’m talking to Demis Hassabis, the CEO of Google DeepMind, the newly created division of Google responsible for AI efforts across the company. Google DeepMind is the result of an internal merger: Google acquired Demis’ DeepMind startup in 2014 and ran it as a separate company inside its parent company, Alphabet, while Google itself had an AI team called Google Brain. Google has been showing off AI demos for years now, but with the explosion of ChatGPT and a renewed threat from Microsoft in search, Google and Alphabet CEO Sundar Pichai made the decision to bring DeepMind into Google itself earlier this year to create… Google DeepMind. What’s interesting is that Google Brain and DeepMind were not necessarily compatible or even focused on the same things: DeepMind was famous for applying AI to things like games and protein-folding simulations. The AI that beat world champions at Go, the ancient board game? That was DeepMind’s AlphaGo. Meanwhile, Google Brain was more focused on what’s come to be the familiar generative AI toolset: large language models for chatbots, editing features in Google Photos, and so on. This was a culture clash and a big structure decision with the goal of being more competitive and faster to market with AI products. And the competition isn’t just OpenAI and Microsoft — you might have seen a memo from a Google engineer floating around the web recently claiming that Google has no competitive moat in AI because open-source models running on commodity hardware are rapidly evolving and catching up to the tools run by the giants. Demis confirmed that the memo was real but said it was part of Google’s debate culture, and he disagreed with it because he has other ideas about where Google’s competitive edge might come into play. Of course, we also talked about AI risk and especially artificial general intelligence. Demis makes no secret that his goal is building an AGI, and we talked through what risks and regulations should be in place and on what timeline. Demis recently signed onto a 22-word statement about AI risk with OpenAI’s Sam Altman and others that simply reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That’s pretty chill, but is that the real risk right now? Or is it just a distraction from other more tangible problems like AI replacing a bunch of labor in various creative industries? We also talked about the new kinds of labor AI is creating — armies of low-paid taskers classifying data in countries like Kenya and India in order to train AI systems. We just published a big feature on these taskers. I wanted to know if Demis thought these jobs were here to stay or just a temporary side effect of the AI boom. This one really hits all the Decoder high points: there’s the big idea of AI, a lot of problems that come with it, an infinite array of complicated decisions to be made, and of course, a gigantic org chart decision in the middle of it all. Demis and I got pretty in the weeds, and I still don’t think we covered it all, so we’ll have to have him back soon. Alright, Demis Hassabis, CEO of Google DeepMind. Here we go. This transcript has been lightly edited for length and clarity. Demis Hassabis, you are the CEO of Google DeepMind. Welcome to Decoder. Thanks for having me. I don’t think we have ever had a more perfect Decoder guest. There’s a big idea in AI.
It comes with challenges and problems, and then, with you in particular, there’s a gigantic org chart move and a set of high-stakes decisions to be made. I am thrilled that you are here. Glad to be here. Let’s start with Google DeepMind itself. Google DeepMind is a new part of Google that is constructed of two existing parts of Google. There was Google Brain, which was the AI team we were familiar with as we covered Google that was run by Jeff Dean. And there was DeepMind, which was your company that you founded. You sold it to Alphabet in 2014. You were outside of Google. It was run as a separate company inside that holding company Alphabet structure until just now. Start at the very beginning. Why were DeepMind and Google Brain separate to begin with? As you mentioned, we started DeepMind actually back in 2010, a long time ago now, especially in th...