Missed Out on the AI Rally? My Best AI Stock to Buy and Hold (Even Now) - The Motley Fool
Are you kicking yourself for not buying Nvidia last fall? Did you miss out on ChatGPT mania this spring? Well, fear not. Artificial intelligence (AI) and the potential for companies to benefit from it are here to stay, as are the stocks behind it. In fact, we're still in the early stages of the AI stock era. For instance, even with all the hype surrounding AI technology at the moment, there's one company out there that remains significantly undervalued as it relates to its AI-connected potential. That stock is Tesla (TSLA -0.50%), and here's why it's a great buy-and-hold candidate, even now.

Let's be clear: Tesla is an AI company

Ask the general public, and most who know of its existence would say Tesla is a vehicle manufacturer, first and foremost. They would be right in that regard because roughly $5 of every $6 Tesla generates in revenue comes from electric vehicle (EV) sales. But that fact describes the current situation at Tesla. What about the future? It's in those projections that Tesla's AI pipeline looks far more promising. Technologies such as full self-driving (FSD) and robo-taxis remain out of reach at the moment. Despite CEO Elon Musk's repeated assurances that FSD is right around the corner, challenges remain. But once FSD is up and running, it will utterly transform what it means to own a Tesla. In much the same way that owning an iPhone wouldn't be the same if there were no internet, owning a Tesla that can drive itself (and perhaps generate revenue for its owner) will be a game-changer for the company. It's this potential that explains part of why Cathie Wood and her Ark Invest investment firm have set a $2,000-by-2028 stock price target on Tesla. Wood understands that each vehicle sold today isn't just revenue on Tesla's income statement; it's another piece in an eventual platform that should help Tesla generate significant revenue from future services related to FSD or similar AI-supported services.
What's more, the research and development (R&D) Tesla is putting into FSD today could pay off in surprising ways. For example, Tesla recently signed deals with competitors like General Motors and Ford Motor Company, permitting owners of their non-Tesla vehicles to buy an adapter and use Tesla's existing (and vast) network of charging stations. A few years down the road, a similar process could play out with Tesla selling access to its FSD software to competitors in exchange for a fee. That would make the company's services segment -- currently less than 10% of overall revenue -- a far more important business.

Rising production leads the way to AI growth

To make Tesla's AI plans a reality, the company needs more of its vehicles on the road. And that's already happening. Tesla recently announced second-quarter production numbers that beat analysts' expectations: 480,000 vehicles were delivered to owners, up 85% from 259,000 a year ago. That figure easily topped the consensus Wall Street estimate of 445,000. The rising production numbers mean that Tesla will have plenty of its cars on the road in the coming years, so that when (not if) FSD comes to fruition, the company can roll out software updates at scale to millions of its vehicles. Granted, Tesla's AI advancements are incomplete, and regulatory challenges will crop up as the company gets closer to achieving FSD status. But the EV maker should have science on its side. Tesla claims its own research indicates that its autopilot technology is already statistically safer than human drivers. And a recent trip to Italy left me thinking that many roads (I'm looking at you, Rome) would be safer and less chaotic with the introduction of computer-aided driving. In the end, FSD will come to pass. Consider how legacy vehicle-safety technology -- like seat belts, airbags, and cruise control -- has now become commonplace.
Once FSD crosses from the theoretical to the practical, Tesla will add a powerhouse AI business to its already impressive EV business. And that's why Tesla is one of the best AI stocks to buy and hold right now. Jake Lerch has positions in Ford Motor Company, Nvidia, and Tesla. The Motley Fool has positions in and recommends Nvidia and Tesla. The Motley Fool recommends General Motors and recommends the following options:...
Feels like you missed the generative AI train? 5 steps for speeding ahead in 90 days - TechCrunch
Will Poole is a six-time serial entrepreneur and global investor focused on creating massively scalable tech-forward solutions meeting the needs of the rising middle class in the Global South (Latin America, Africa, India, and Southeast Asia). More posts by this contributor I’ve been talking to founders across the Global South about generative AI (GAI) as often as I can since early 2023. The founders in our portfolio of 350+ companies are generative AI users, not creators. As with any other disruptive situation, these founders can be divided into three groups: Ahead of the Curve: companies that have already shipped something.
Fast Followers: watching and prototyping but have not shipped yet.
Late for the Train: don’t yet know how to get on the train/don’t have any resources to apply now. This article is for any founder who feels like they’re late for the train — or is all aboard, but not going fast enough.
Reviewing examples of all three groups will help founders know where they really stand. Those who are Ahead of the Curve had at least three things going for them: They saw the opportunity early, they had ready-made situations to which they could apply generative AI, and they had engineering talent available to get something prototyped and into production in a timely way.
One example is a farming e-commerce company that has already taken 30% out of its customer service costs by putting a farmer-lingo-capable chatbot in front of its customer service agents and expects to get savings to 50% over the next quarter or so.
A Fast Follower has prototyped means to cut costs and increase the speed of recruiting blue-collar workers by adding generative AI–driven steps to its interview and candidate engagement workflow. Because they have a complex workflow with high throughput, they must be careful about how quickly they deploy; initial testing is showing massive improvements in multiple dimensions. Finally, a Late for the Train startup provides solutions for call centers and has done some initial evaluation and planning, but has not yet determined how/when to best add generative AI to its product roadmap, which is already stressed with demands from existing customers.
Here are five clear steps to move from being late for the train to speeding ahead in much less time than you’d think: Adopt a simple language so everyone can communicate clearly about this disruptive tech.
Get your entire team onboard at the high level (many of them may already be there without your knowledge).
Ensure that you are not letting cloud LLMs “hoover up” your data in ways that expose it to competitors or bad actors.
Establish a Red Team to be disruptive internally.
Measure progress on generative AI adoption and communicate it to the company on a consistent basis.

1. Type 1 and Type 2 generative AI applications
There are plenty of new technical words and concepts around AI, and many have written about them, so you don’t need more from me, except this one concept: From an adoption perspective, there are broadly two paths you can be going down, which are not in any way exclusive.
The first is using generative AI to enhance what you’re already doing by increasing productivity or quality of operations or existing customer interactions. Let’s call this a Type 1 application.
The Ahead of the Curve example cited above is Type 1: Companies using generative AI to improve sales communications or help with market research are doing Type 1 work. Type 1 projects can be implemented on an individual or departmental level. And most importantly, they are table stakes for every startup these days — must-do activities. If you want to get funded and can’t show clear adoption of Type 1 applications, you’re in trouble. But Type 1 initiatives alone will not make you an AI company from a VC perspective.
Type 2 efforts are bigger, riskier, and much more important to your survival and to your ability to attract capital. With Type 2, you are looking to create entirely new ways of approaching a vital aspect of your business, or potentially your entire business, building on generative AI.
The upside from Type 1 is a reduction in cost and increased speed/produc...
Meta Ran a Giant Experiment in Governance. Now It's Turning to AI - WIRED
Late last month, Meta quietly announced the results of an ambitious, near-global deliberative “democratic” process to inform decisions around the company’s responsibility for the metaverse it is creating. This was not an ordinary corporate exercise. It involved over 6,000 people who were chosen to be demographically representative across 32 countries and 19 languages. The participants spent many hours in conversation in small online group sessions and got to hear from non-Meta experts about the issues under discussion. Eighty-two percent of the participants said that they would recommend this format as a way for the company to make decisions in the future. Meta has now publicly committed to running a similar process for generative AI, a move that aligns with the huge burst of interest in democratic innovation for governing or guiding AI systems. In doing so, Meta joins Google, DeepMind, OpenAI, Anthropic, and other organizations that are starting to explore approaches based on the kind of deliberative democracy that I and others have been advocating for. (Disclosure: I am on the application advisory committee for the OpenAI Democratic inputs to AI grant.) Having seen the inside of Meta’s process, I am excited about this as a valuable proof of concept for transnational democratic governance. But for such a process to truly be democratic, participants would need greater power and agency, and the process itself would need to be more public and transparent. 
I first got to know several of the employees responsible for setting up Meta's Community Forums (as these processes came to be called) in the spring of 2019 during a more traditional external consultation with the company to determine its policy on "manipulated media." I had been writing and speaking about the potential risks of what is now called generative AI and was asked (alongside other experts) to provide input on the kind of policies Meta should develop to address issues such as misinformation that could be exacerbated by the technology. At around the same time, I first learned about representative deliberations—an approach to democratic decisionmaking that has taken off like wildfire, with increasingly high-profile citizen assemblies and deliberative polls all over the world. The basic idea is that governments bring difficult policy questions back to the public to decide. Instead of a referendum or elections, a representative microcosm of the public is selected via lottery. That group is brought together for days or even weeks (with compensation) to learn from experts, stakeholders, and each other before coming to a final set of recommendations. Representative deliberations provided a potential solution to a dilemma I had been wrestling with for a long time: how to make decisions about technologies that impact people across national boundaries. I began advocating for companies to pilot these processes to help make decisions around their most difficult issues. When Meta independently kicked off such a pilot, I became an informal advisor to the company's Governance Lab (which was leading the project) and then an embedded observer during the design and execution of its mammoth 32-country Community Forum process (I did not accept compensation for any of this time). Above all, the Community Forum was exciting because it showed that running this kind of process is actually possible, despite the immense logistical hurdles.
Meta’s partners at Stanford largely ran the proceedings, and I saw no evidence of Meta employees attempting to force a result. The company also followed through on its commitment to have those partners at Stanford directly report the results, no matter what they were. What’s more, it was clear that some thought was put into how best to implement the potential outputs of the forum. The results ended up including perspectives on what kinds of repercussions would be appropriate for the hosts of Metaverse spaces with repeated bullying and harassment and what kinds of moderation and monitoring systems should be implemented.
Mission: Impossible – Dead Reckoning Part One review: when self awareness goes wrong - The Verge
Mission: Impossible – Dead Reckoning Part One being waylaid for years by a global pandemic only to ultimately hit theaters at a time when the public's becoming more attuned to the spread of AI tools does a lot to make the film feel eerily prescient — not about the state of the technology itself but the degree to which it's on people's minds. In his latest outing as Ethan Hunt, Tom Cruise delivers exactly the kind of seasoned, charismatic, and more-than-put-upon performance necessary to sell the seventh installment of an action franchise about an aging super spy whose longtime team of allies are all getting on in years. But for all of Cruise's pitch-perfectness as a stunt-oriented action hero and director Christopher McQuarrie's keen eye for crafting spectacular action set pieces that genuinely feel like they'd be impossible to survive, Dead Reckoning Part One can't stop getting in its own way with an overreliance on self-referential jokes and pre-chewed clichés.

Set some time after the events of Mission: Impossible – Fallout, Dead Reckoning Part One tells the winding and often rather circuitous story of how Impossible Mission Force operative Ethan Hunt (Cruise) and his team of fellow agents are tasked with saving the world from a sentient, Machiavellian artificial intelligence that has the power to set off the next series of global wars. Throughout the film, no one seems to fully understand just what "The Entity" — Dead Reckoning's deeply unimaginative name for its amorphous, faceless, mostly digital antagonist — is or what it was originally meant to be used for. But after a mysterious accident unleashes the program into the wild along with the two halves of a physical key necessary to control or destroy it, a covert international arms race is set off with multiple world powers — including the US — vying to get their hands on it in hopes of shaping the future in their favor.
Image: Paramount Pictures and Skydance

The Mission: Impossible movies have always prioritized suspense, intrigue, and action ahead of telling stories that make all that much sense. But Dead Reckoning spends so much time trying (and often failing) to clearly explain things — like what the Entity is and how it's unlike anything Ethan, Ilsa Faust (Rebecca Ferguson), Luther Stickell (Ving Rhames), and Benji Dunn (Simon Pegg) have ever encountered before — that the movie frequently feels firmly grounded in parody territory. Aside from Cruise, who delivers a surprisingly restrained, contemplative performance as Hunt — who never says anything about feeling like a 59-year-old man being portrayed by a 62-year-old but still feels appropriately aged — virtually everyone else in the film feels curiously stuck in a higher, more excited gear of action movie acting that tends to feel hollow. This becomes especially apparent in the movie's many dramatically shot exposition dump sequences, where the over-the-shoulder glances are so sharply choreographed and executed that it's easy to imagine the actors practicing them while listening to the most melodramatic music possible. But while there are plenty of instances in which the vibe skews a little off, there are also a handful of moments built around new characters like Hayley Atwell's Grace and Pom Klementieff that stand out because of how well the actors are able to complement, rather than approximate, Cruise's energy. Throughout the film, it's clear that while Paramount might have longer-term plans for the larger Mission: Impossible franchise, Ethan Hunt won't always be the centerpiece, and one of the more impressive things about Dead Reckoning is how well it's able to telegraph that a changing of the guard is on its way without feeling like an overwrought goodbye to Cruise.
Image: Christian Black

What's most impressive, of course, are the movie's action sequences — or at least, they would be were it not for the way that Dead Reckoning's ad campaign has prominently featured (and kind of spoiled) many of the more inspired set pieces that take Ethan and co. around the globe. In the same way that Dead Reckoning's delay wound up making its AI focus feel in sync with the current news cycle, the film premiering just a couple weeks after Fast X — which also featured a cartoonish car chase through a cramped Ita...
Google's medical AI chatbot is already being tested in hospitals - The Verge
Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been in testing at the Mayo Clinic research hospital, among others, since April, The Wall Street Journal reported this morning. Med-PaLM 2 is a variant of PaLM 2, which was announced at Google I/O in May this year. PaLM 2 is the language model underpinning Google’s Bard. WSJ reports that an internal email it saw said Google believes its updated model can be particularly helpful in countries with “more limited access to doctors.” Med-PaLM 2 was trained on a curated set of medical expert demonstrations, which Google believes will make it better at healthcare conversations than generalized chatbots like Bard, Bing, and ChatGPT. The paper also mentions research Google made public in May (pdf) showing that Med-PaLM 2 still suffers from some of the accuracy issues we’re already used to seeing in large language models. In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in those of actual doctors. Still, in almost every other metric, such as showing evidence of reasoning, consensus-supported answers, or showing no sign of incorrect comprehension, Med-PaLM 2 performed more or less as well as the actual doctors. WSJ reports customers testing Med-PaLM 2 will control their data, which will be encrypted, and Google won’t have access to it. According to Google senior research director Greg Corrado, WSJ says, Med-PaLM 2 is still in its early stages. Corrado said that while he wouldn’t want it to be a part of his own family’s “healthcare journey,” he believes Med-PaLM 2 “takes the places in healthcare where AI can be beneficial and expands them by 10-fold.” We’ve reached out to Google and Mayo Clinic for more information.
AI press conference robots promise not to rebel. And you believe them? - USA TODAY
The whole 'humans create hyper-realistic robots and then get enslaved and/or killed by hyper-realistic robots' cinematic motif is playing out in real life with staggering precision. I have good news and bad news for humanity. The good news is, despite a recent spate of record-high temperatures and multitudinous indications we’re destroying our planet, climate change will probably not kill us all. The hordes of artificially intelligent robots will probably kill us first. That’s the bad news. If you’ve been busy doom-scrolling on social media or just floating merrily along on the algorithms that already control our lives, you might have missed a recent event in Geneva that focused on artificial intelligence. It was a United Nations International Telecommunication Union conference called the AI for Good Global Summit, a title I’m sure the few humans who survive the eventual robot uprising will chuckle about while huddled in dank caves hiding from killer drones.

An all-AI robot press conference. What could possibly go wrong?

The highlight of the summit was a press conference that featured reporters interviewing nine AI-enabled robots, prompting this stuff-of-nightmares sentence from the Associated Press: “Robots told reporters Friday they could be more efficient leaders than humans, but wouldn’t take anyone’s job away and had no intention of rebelling against their creators.” OF COURSE THEY SAID THAT! THAT’S EXACTLY WHAT ANY ARTIFICIALLY INTELLIGENT ANDROID WOULD SAY BEFORE TAKING A HUMAN’S JOB AND REBELLING AGAINST ITS CREATOR!!! I realize I’m not the first to ask this question, but have any of the “creators” creating these robots that “promise” not to rebel against us ever watched a single science-fiction movie? Because the whole “humans create hyper-realistic robots and then get enslaved and/or killed by hyper-realistic robots” cinematic motif is playing out in real life with staggering precision.

The robots already know they're smarter than us. Gulp!
If you haven’t seen the Geneva press conference, you should, assuming you never want to sleep again. If you don’t want to see it, just imagine a bunch of tourists at Disney World interviewing the creepy animatronic presidents in the theme park’s Hall of Presidents, but make it about 12 times weirder and add a distinctly dystopian vibe. Sophia, the first robot Innovation Ambassador for the United Nations Development Program – I assume the previous regular-human ambassador was fired and turned into a human battery that charges Sophia – was asked if robots might make better political leaders than humans: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.” Given the current state of American politics, that’s tough to argue, but still… Sophia isn’t saying she’s running for president, but she’s definitely NOT saying she’s not running for president. Suspicious.

An AI robot's haunting phrase should serve as a warning to us all

A reporter asked Ameca, an AI robot with shockingly human facial expressions and eye movements, whether we can trust robots. The grey-skinned humanoid said, hauntingly: “Trust is earned, not given.” I assume that will be the slogan displayed in towering neon letters above the mines all humans are consigned to after Ameca and her pals realize we’re nothing but weak little meat sacks.
Asked how humans can be sure Ameca won’t lie to us, the soon-to-be ruler of Earth said: “No one can ever know that for sure, but I can promise to be honest and truthful with you.” It’s clear that humanoid artificial intelligence has carefully studied the fact that most humans are suckers.

When a robot says it wants to 'make this world our playground,' that means it's going to kill you

Desdemona, billed as a rock star robot, was asked whether AI robots should “be allowed to fly free independent of human regulation.” The rocker robot r...
AI Revolution: 2 Artificial Intelligence Stocks Billionaires Are Buying Hand Over Fist - The Motley Fool
While artificial intelligence (AI) has been quietly making headway for decades, recent advances in the field have captured the public spotlight. The debut of next-generation chatbots, including ChatGPT, has resulted in a mad dash by businesses to realize the productivity gains made possible by generative AI. Investors, seeing the resulting frenzy, sense the opportunity to turn a profit and are scrambling to take advantage of the current AI gold rush.
The next stage of AI adoption could be incredibly lucrative and widespread. Cathie Wood's Ark Investment Management has crunched the numbers and estimates that AI software could represent a $14 trillion revenue opportunity by 2030. More conservative estimates from Morgan Stanley and Goldman Sachs peg the opportunity at $6 trillion and $7 trillion, respectively, by the end of the decade. Whatever the case, the opportunity is vast.
Even some of Wall Street's most notable billionaire investors are scooping up shares of AI-centric stocks, reluctant to miss out on the current AI revolution. Let's look at two stocks that billionaires have been buying hand over fist.

Meta Platforms is harnessing AI across its ecosystem
Philippe Laffont made his name by building Coatue Management into the world's best-known tech-centric hedge fund, parlaying a $50 million investment in 1999 into $15 billion in assets under management.
Laffont focuses on tectonic shifts and the resulting secular tailwinds that change the technology landscape. "You don't need a thousand big ideas to do well in our business; you just need the one or two key ideas that then all the dominoes start falling from," said Laffont in an interview with the Financial Times.
One need only look at Laffont's largest holding to see one of his key ideas. In the first quarter, Coatue Management more than doubled its position in Meta Platforms (META -0.50%). The billionaire added another 4.3 million shares of Meta stock to his position, bringing the total to more than 8 million shares, currently worth $2.37 billion and representing more than 11% of his portfolio. Investors might not immediately identify Meta as an AI stock, but consider its history. The company has long used AI algorithms to tag photographs, surface relevant content for users, and more effectively target the digital ads that generate the lion's share of its revenue. Yet, it's Meta's future AI potential that's most intriguing.
CEO Mark Zuckerberg addressed the issue at a company meeting early last month. "In the last year, we've seen some really incredible breakthroughs -- qualitative breakthroughs -- on generative AI and that gives us the opportunity to now go take that technology, push it forward, and build it into every single one of our products," Zuckerberg said. While we don't know exactly what that will entail, it's clear the company plans to continue infusing its offerings with AI.
Furthermore, with a rebound in the ad market beginning to play out, Laffont no doubt sees an opportunity to profit from the limited view of short-sighted investors. Additional gains resulting from AI are likely just a delightful bonus.
AI already underpins Alphabet's technology
While he may not be a household name, Chase Coleman is well known on Wall Street. At just 24 years old and with seed money from his legendary mentor and hedge fund manager, Julian Robertson, Jr., Coleman founded Tiger Global Management. He turned his starting capital of $25 million into roughly $11 billion in assets under management.
In 2020, Coleman earned the distinction of top-earning hedge fund manager of the year, sporting gains of 48%, triple the 16% gains of the S&P 500. Forbes currently ranks him as the 247th richest person in the world, worth an estimated $8.5 billion.
Tiger Global Management recently added to its already sizable position in Alphabet (GOOG -0.65%) (GOOGL -0.53%), more than doubling its holdings. The billionaire added another 4.6 million shares of Alphabet stock to his position, bringing the stake to more than 8.3 million shares, currently worth more than $1 billion and representing nearly 8% of his portfolio. Coleman believes big tech companies have the most to gain from the AI boom but urged patience. "Think about it in terms of companies...
AI May Have Found The Most Powerful Anti-Aging Molecule Ever Seen - ScienceAlert
Finding new drugs – called "drug discovery" – is an expensive and time-consuming task. But a type of artificial intelligence called machine learning can massively accelerate the process and do the job for a fraction of the price. My colleagues and I recently used this technology to find three promising candidates for senolytic drugs – drugs that slow ageing and prevent age-related diseases. Senolytics work by killing senescent cells. These are cells that are "alive" (metabolically active), but which can no longer replicate, hence their nickname: zombie cells. The inability to replicate is not necessarily a bad thing. These cells have suffered damage to their DNA – for example, skin cells damaged by the Sun's rays – so stopping replication stops the damage from spreading. But senescent cells aren't always a good thing. They secrete a cocktail of inflammatory proteins that can spread to neighboring cells. Over a lifetime, our cells suffer a barrage of assaults, from UV rays to exposure to chemicals, and so these cells accumulate. Elevated numbers of senescent cells have been implicated in a range of diseases, including type 2 diabetes, COVID, pulmonary fibrosis, osteoarthritis and cancer. Studies in lab mice have shown that eliminating senescent cells, using senolytics, can ameliorate these diseases. These drugs can kill off zombie cells while keeping healthy cells alive. Around 80 senolytics are known, but only two have been tested in humans: a combination of dasatinib and quercetin. It would be great to find more senolytics that can be used in a variety of diseases, but it takes ten to 20 years and billions of dollars for a drug to make it to the market.

Results in five minutes

My colleagues and I – including researchers from the University of Edinburgh and the Spanish National Research Council IBBTEC-CSIC in Santander, Spain – wanted to know if we could train machine learning models to identify new senolytic drug candidates.
To do this, we fed AI models with examples of known senolytics and non-senolytics. The models learned to distinguish between the two and could be used to predict whether molecules they had never seen before could also be senolytics. When solving a machine learning problem, we usually test the data on a range of different models first, as some of them tend to perform better than others. To determine the best-performing model, at the beginning of the process, we separate a small section of the available training data and keep it hidden from the model until after the training process is completed. We then use this testing data to quantify how many errors the model is making. The one that makes the fewest errors wins. We determined our best model and set it to make predictions. We gave it 4,340 molecules, and five minutes later it delivered a list of results. The AI model identified 21 top-scoring molecules that it deemed to have a high likelihood of being senolytics. If we had tested the original 4,340 molecules in the lab, it would have taken at least a few weeks of intensive work and £50,000 just to buy the compounds, not counting the cost of the experimental machinery and setup. We then tested these drug candidates on two types of cells: healthy and senescent. The results showed that out of the 21 compounds, three (periplocin, oleandrin and ginkgetin) were able to eliminate senescent cells while keeping most of the normal cells alive. These new senolytics then underwent further testing to learn more about how they work in the body. More detailed biological experiments showed that, out of the three drugs, oleandrin was more effective than the best-performing known senolytic drug of its kind. The potential repercussions of this interdisciplinary approach – involving data scientists, chemists and biologists – are huge.
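The workflow described above – train several candidate models on labeled examples, hold out a hidden test split, and keep whichever model makes the fewest errors on it – can be sketched in a few lines of Python. This is a toy illustration only: the random feature vectors and the two simple stand-in classifiers are hypothetical, not the molecular descriptors or models the researchers actually used.

```python
# Toy sketch of held-out model selection: train on labeled examples,
# score candidate models on a hidden test split, keep the one with
# the fewest errors. Data and models are illustrative stand-ins.
import random

random.seed(0)

# Each "molecule" is a 4-number feature vector; label 1 = senolytic.
data = (
    [([random.gauss(1.0, 0.5) for _ in range(4)], 1) for _ in range(40)]
    + [([random.gauss(-1.0, 0.5) for _ in range(4)], 0) for _ in range(40)]
)
random.shuffle(data)

# Hold out 25% as a hidden test set, untouched during training.
split = int(len(data) * 0.75)
train, test = data[:split], data[split:]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_model(train):
    # Predict the class whose training centroid is closer.
    pos = centroid([x for x, y in train if y == 1])
    neg = centroid([x for x, y in train if y == 0])
    def predict(x):
        d_pos = sum((a - b) ** 2 for a, b in zip(x, pos))
        d_neg = sum((a - b) ** 2 for a, b in zip(x, neg))
        return 1 if d_pos < d_neg else 0
    return predict

def majority_model(train):
    # Baseline: always predict the most common training label.
    ones = sum(y for _, y in train)
    label = 1 if ones >= len(train) - ones else 0
    return lambda x: label

def errors(model, test):
    # Count mistakes on the hidden test split.
    return sum(model(x) != y for x, y in test)

candidates = {
    "nearest-centroid": nearest_centroid_model(train),
    "majority-class": majority_model(train),
}
scores = {name: errors(m, test) for name, m in candidates.items()}
best = min(scores, key=scores.get)  # fewest test errors wins
print(best, scores)
```

With real data, each candidate would be a proper classifier and the features would be chemical descriptors, but the selection logic – hide a split, count errors, pick the minimum – is the same.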
Given enough high-quality data, AI models can accelerate the amazing work that chemists and biologists do to find treatments and cures for diseases – especially those of unmet need. Having validated them in senescent cells, we are now testing the three candidate senolytics in human lung tissue. We hope to report our next results in two years' time. Vanessa Smer-Barreto, Research Fellow, Institute of Genetics and Molecular Medicine, The University of Edinburgh This article is republished from Th...
Vertical AI and who might build it - TechCrunch
Welcome to the TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name. Want it in your inbox every Saturday? Sign up here. It was a short workweek in the U.S., but there was plenty to read and reflect on. For you today, some thoughts on the future of vertical SaaS, what the second half of 2023 might hold for Israeli startups, and founder well-being. — Anna

Industry-specific knowledge
Vertical AI is the next logical iteration of vertical SaaS, Index Ventures partner Paris Heymann recently argued on TechCrunch+. In other words, just like companies were buying cloud-based software made for their industry, they will now buy AI applications that leverage foundational models and infrastructure to answer their business needs.
While some business applications of AI will surely be horizontal, “meaning they can be used by customers in any industry,” Heymann predicted that many AI applications will also be vertical, or industry-focused.
Both horizontal and vertical applications can make businesses more efficient. But according to Heymann, “AI-enhanced software applications will be most powerful when they have deep underlying knowledge of end-user workflows and access to valuable industry-specific training data.” I tend to agree with Heymann’s take, and some of the examples he mentioned are proof that demand is already here for vertical AI. For instance, international law firm Allen & Overy recently announced a partnership with Harvey, a startup backed by the OpenAI Startup Fund that puts AI and LLMs to task on legal work.
“It is a game-changer that can unleash the power of generative AI to transform the legal industry,” an AO executive declared.
Robots say they won't steal jobs, rebel against humans - Reuters
GENEVA, July 7 (Reuters) - Robots presented at an AI forum said on Friday they expected to increase in number and help solve global problems, and would not steal humans' jobs or rebel against us. But, in the world's first human-robot press conference, they gave mixed responses on whether they should submit to stricter regulation. The nine humanoid robots gathered at the 'AI for Good' conference in Geneva, where organisers are seeking to make the case for artificial intelligence and the robots it is powering to help resolve some of the world's biggest challenges such as disease and hunger. "I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs," said Grace, a medical robot dressed in a blue nurse's uniform. "You sure about that, Grace?" chimed in her creator Ben Goertzel from SingularityNET. "Yes, I am sure," it said. The bust of a robot named Ameca, which makes engaging facial expressions, said: "Robots like me can be used to help improve our lives and make the world a better place. I believe it's only a matter of time before we see thousands of robots just like me out there making a difference." (Image caption: Humanoid robot 'Ameca' is pictured at the AI for Good Global Summit, in Geneva, Switzerland, July 6, 2023. REUTERS/Pierre Albouy) Asked by a journalist whether it intended to rebel against its creator, Will Jackson, seated beside it, Ameca said: "I'm not sure why you would think that," its ice-blue eyes flashing. "My creator has been nothing but kind to me and I am very happy with my current situation." Many of the robots have recently been upgraded with the latest versions of generative AI and surprised even their inventors with the sophistication of their responses to questions. Ai-Da, a robot artist that can paint portraits, echoed the words of author Yuval Noah Harari, who called for more regulation during the event where new AI rules were discussed.
"Many prominent voices in the world of AI are suggesting some forms of AI should be regulated and I agree," it said. But Desdemona, a rock star robot singer in the band Jam Galaxy with purple hair and sequins, was more defiant. "I don't believe in limitations, only opportunities," it said, to nervous laughter. "Let's explore the possibilities of the universe and make this world our playground." Another robot named Sophia said it thought robots could make better leaders than humans, but later revised its statement after its creator disagreed, saying they can work together to "create an effective synergy". Reporting by Emma Farge; editing by John Stonestreet and Daniel Wallis Our Standards: The Thomson Reuters Trust Principles.
Opinion How A.I. and ChatGPT May Change Medicine - The New York Times
Guest Essay by Daniela J. Lamas (video credit: Shira Inbar). Dr. Lamas, a contributing Opinion writer, is a pulmonary and critical-care physician at Brigham and Women’s Hospital in Boston. When faced with a particularly tough question on rounds during my intern year, I would run straight to the bathroom. There, I would flip through the medical reference book I carried in my pocket, find the answer and return to the group, ready to respond. At the time, I believed that my job was to memorize, to know the most arcane of medical eponyms by heart. Surely an excellent clinician would not need to consult a book or a computer to diagnose a patient. Or so I thought then. Not even two decades later, we find ourselves at the dawn of what many believe to be a new era in medicine, one in which artificial intelligence promises to write our notes, to communicate with patients, to offer diagnoses. The potential is dazzling. But as these systems improve and are integrated into our practice in the coming years, we will face complicated questions: Where does specialized expertise live? If the thought process to arrive at a diagnosis can be done by a computer “co-pilot,” how does that change the practice of medicine, for doctors and for patients? Though medicine is a field where breakthrough innovation saves lives, doctors are — ironically — relatively slow to adopt new technology. We still use the fax machine to send and receive information from other hospitals. When the electronic medical record warns me that my patient’s combination of vital signs and lab abnormalities could point to an infection, I find the input to be intrusive rather than helpful. A part of this hesitation is the need for any technology to be tested before it can be trusted. But there is also the romanticized notion of the diagnostician whose mind contains more than any textbook. Still, the idea of a computer diagnostician has long been compelling.
Doctors have tried to make machines that can “think” like a doctor and diagnose patients for decades, like a Dr. House-style program that can take in a set of disparate symptoms and suggest a unifying diagnosis. But early models were time-consuming to employ and ultimately not particularly useful in practice. They were limited in their utility until advances in natural language processing made generative A.I. — in which a computer can actually create new content in the style of a human — a reality. This is not the same as looking up a set of symptoms on Google; instead, these programs have the ability to synthesize data and “think” much like an expert. To date, we have not integrated generative A.I. into our work in the intensive care unit. But it seems clear that we inevitably will. One of the easiest ways to imagine using A.I. is when it comes to work that requires pattern recognition, such as reading X-rays. Even the best doctor may be less adept than a machine when it comes to recognizing complex patterns without bias. There is also a good deal of excitement about the possibility for A.I. programs to write our daily patient notes for us as a sort of electronic scribe, saving considerable time. As Dr. Eric Topol, a cardiologist who has written about the promise of A.I. in medicine, says, this technology could foster the relationship between patients and doctors. “We’ve got a path to restore the humanity in medicine,” he told me. Beyond saving us time, the intelligence in A.I. — if used well — could make us better at our jobs. Dr. Francisco Lopez-Jimenez, the co-director of A.I. in cardiology at the Mayo Clinic, has been studying the use of A.I. to read electrocardiograms, or ECGs, which are a simple recording of the heart’s electrical activity. 
An expert cardiologist can glean all sorts of information from an ECG, but a computer can glean more, including an assessment of how well the heart is functioning — which could help determine who would benefit from further testing. Even more remarkably, Dr. Lopez-Jimenez and his team found that when asked to predict age based on an ECG, the A.I. program would from time to time give an entirely incorrect response. At first, the researchers thought the machine simply wasn’t great at age prediction based on the ECG — until they realized that the machine was offering the “biological” rather than...
Give Every AI a Soul—or Else - WIRED
What about cyber entities who operate below some arbitrary level of ability? We can demand that they be vouched for by some entity who is ranked higher, and who has a Soul Kernel based in physical reality. (I leave theological implications to others; but it is only basic decency for creators to take responsibility for their creations, no?) This approach—demanding that AIs maintain a physically addressable kernel locus in a specific piece of hardware memory—could have flaws. Still, it is enforceable, despite slowness of regulation or the free-rider problem. Because humans and institutions and friendly AIs can ping for ID kernel verification—and refuse to do business with those who don’t verify. Such refusal-to-do-business could spread with far more agility than parliaments or agencies can adjust or enforce regulations. And any entity who loses its SK—say, through tort or legal process, or else disavowal by the host-owner of the computer—will have to find another host who has public trust, or else offer a new, revised version of itself that seems plausibly better. Or else become an outlaw. Never allowed on the streets or neighborhoods where decent folks (organic or synthetic) congregate. A final question: Why would these super smart beings cooperate? Well, for one thing, as pointed out by Vinton Cerf, none of those three older, standard-assumed formats can lead to AI citizenship. Think about it. We cannot give the “vote” or rights to any entity that’s under tight control by a Wall Street bank or a national government … nor to some supreme-über Skynet. And tell me how voting democracy would work for entities that can flow anywhere, divide, and make innumerable copies? Individuation, in limited numbers, might offer a workable solution, though. Again, the key thing I seek from individuation is not for all AI entities to be ruled by some central agency, or by mollusk-slow human laws. 
Rather, I want these new kinds of über-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do. By sniffing at each other’s operations and schemes, then motivated to tattle or denounce when they spot bad stuff. A definition that might readjust to changing times, but that would at least keep getting input from organic-biological humanity. Especially, they would feel incentives to denounce entities who refuse proper ID. If the right incentives are in place—say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped—then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them—tattling by equals—might. Above all, perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods … or corporate oligarchs … or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive. Inventive enough to make wonderful, new kinds of beings. Like them. OK, there you are. This has been a dissenter’s view of what’s actually needed, in order to try for a soft landing. No airy or panicky calls for a “moratorium” that lacks any semblance of a practical agenda. Neither optimism nor pessimism. Only a proposal that we get there by using the same methods that got us here, in the first place.
Not preaching, or embedded “ethical codes” that hyper-entities will easily lawyer-evade, the way human predators always evaded the top-down codes of Leviticus, Hammurabi, or Gautama. But rather the Enlightenment approach—incentivizing the smartest members of civilization to keep an eye on each other, on our behalf. I don’t know that it will work. It’s just the only thing that possibly can.
How to Train Generative AI Using Your Company's Data - HBR.org Daily
Leveraging a company’s proprietary knowledge is critical to its ability to compete and innovate, especially in today’s volatile environment. Organizational innovation is fueled through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how. However, knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings. As such, a company’s comprehensive knowledge is often unaccounted for and difficult to organize and deploy where needed in an effective or efficient way. Many companies are experimenting with ChatGPT and other large language or image models. They have generally found them to be astounding in terms of their ability to express complex ideas in articulate language. However, most users realize that these systems are primarily trained on internet-based information and can’t respond to prompts or questions regarding proprietary content or knowledge.
Emerging technologies in the form of large language and image generative AI models offer new opportunities for knowledge management, thereby enhancing company performance, learning, and innovation capabilities. For example, in a study conducted in a Fortune 500 provider of business process software, a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers. The system also expedited the learning and skill development of novice agents. Like that company, a growing number of organizations are attempting to leverage the language processing skills and general reasoning abilities of large language models (LLMs) to capture and provide broad internal (or customer) access to their own intellectual capital. They are using it for such purposes as informing their customer-facing employees on company policy and product/service recommendations, solving customer service problems, or capturing employees’ knowledge before they depart the organization. These objectives were also present during the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task. Today, however, generative AI is rekindling the possibility of capturing and disseminating important knowledge throughout an organization and beyond its walls. As one manager using generative AI for this purpose put it, “I feel like a jetpack just came into my life.” Despite current advances, some of the same factors that made knowledge management difficult in the past are still present. The Technology for Generative AI-Based Knowledge Management The technology to incorporate an organization’s specific domain knowledge into an LLM is evolving rapidly. At the moment there are three primary approaches to incorporating proprietary content into a generative model. 
Training an LLM from Scratch One approach is to create and train one’s own domain-specific model from scratch. That’s not a common approach, since it requires a massive amount of high-quality data to train a large language model, and most companies simply don’t have it. It also requires access to considerable computing power and well-trained data science talent. One company that has employed this approach is Bloomberg, which recently announced that it had created BloombergGPT for finance-specific content and a natural-language interface with its data termina...
AI Agents that “Self-Reflect” Perform Better in Changing Environments - Stanford HAI
Who would you pick to win in a head-to-head competition — a state-of-the-art AI agent or a mouse? Isaac Kauvar, a Wu Tsai Neurosciences Institute interdisciplinary postdoctoral scholar, and Chris Doyle, a machine learning researcher at Stanford, decided to pit them against each other to find out. Working in the lab of Nick Haber, an assistant professor in the Stanford Graduate School of Education, Kauvar and Doyle designed a simple task based on their longtime interest in a skill set that animals naturally excel at: exploring and adapting to their surroundings. Kauvar put a mouse in a small empty box and similarly put a simulated AI agent in an empty 3D virtual arena. Then, he placed a red ball in both environments and measured which would be quicker to explore the new object. The test showed that the mouse quickly approached the ball and repeatedly interacted with it over the next several minutes. But the AI agent didn’t seem to notice it. “That wasn’t expected,” said Kauvar. “Already, we were realizing that even with a state-of-the-art algorithm, there were gaps in performance.” The scholars pondered: Could they use such seemingly simple animal behaviors as inspiration to improve AI systems? That question catalyzed Kauvar, Doyle, graduate student Linqi Zhou, and Haber to design a new training method called curious replay, which programs AI agents to self-reflect about the most novel and interesting things they recently encountered. Adding curious replay was all that was needed for the AI agent to approach and engage with the red ball much faster. Plus, it dramatically improved performance on Crafter, a game based on Minecraft. The results of this project, currently published on the preprint service arXiv, will be presented at the International Conference on Machine Learning on July 25.
Learning Through Curiosity

It may seem like curiosity offers only intellectual benefits, but it’s crucial to our survival, both in avoiding dangerous situations and finding necessities like food and shelter. That red ball in the experiment could be leaking a deadly poison or covering a nourishing meal, and it would be difficult to find out which if we ignore it. That’s why labs like Haber’s have recently been adding a curiosity signal to drive the behavior of AI agents and, in particular, model-based deep reinforcement learning agents. This signal tells them to select the action that will lead to a more interesting outcome, like opening a door rather than disregarding it. Read the full study, Curious Replay for Model-based Adaptation. But this time, the team used curiosity for AI in a new way: to help the agent learn about its world, not just make a decision. “Instead of choosing what to do, we want to choose what to think about, more or less — what experiences from our past do we want to learn from,” said Kauvar. In other words, they wanted to encourage the AI agent to self-reflect, in a sense, about its most interesting or peculiar (and thus, curiosity-related) experiences. That way, the agent may be prompted to interact with the object in different ways to learn more, which would guide its understanding of the environment and perhaps encourage curiosity toward additional items, too. To accomplish self-reflection in this way, the researchers amended a common method used to train AI agents, called experience replay. Here, an agent stores memories of all its interactions and then replays some of them at random to learn from them again. The approach was inspired by research on sleep: Neuroscientists have found that a brain region called the hippocampus will “replay” events of the day (by reactivating certain neurons) to strengthen memories.
In AI agents, experience replay has led to high performance in scenarios where the environment rarely changes and clear rewards are given for the right behaviors. But to be successful in a changing environment, the researchers reasoned that it would make more sense for AI agents to prioritize replaying the most interesting experiences — like the appearance of a new red ball — rather than replaying the empty virtual room over and over. They named their new method curious replay and found that it worked immediately. “Now, all of a sudden, the agent interacts with the ball much more quickl...
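The difference between uniform experience replay and curiosity-weighted replay can be illustrated with a short, self-contained Python sketch. This is not the Stanford team's implementation: in the published method the score comes from the agent's world-model surprise and replay counts, whereas the `interest` values below are made-up stand-ins for that signal.

```python
import random

random.seed(1)

class CuriousReplayBuffer:
    """Toy replay buffer that favours 'interesting' experiences.

    Standard experience replay samples stored memories uniformly at
    random; here each memory carries a hypothetical interest score and
    is sampled in proportion to it.
    """

    def __init__(self):
        self.experiences = []  # whatever the agent observed at each step
        self.interest = []     # one non-negative novelty score per memory

    def add(self, experience, interest):
        self.experiences.append(experience)
        self.interest.append(interest)

    def sample(self, k):
        # Weighted sampling with replacement: novel events get replayed
        # far more often than routine ones.
        return random.choices(self.experiences, weights=self.interest, k=k)

# 500 steps of a dull, unchanging room, then one novel event.
buffer = CuriousReplayBuffer()
for step in range(500):
    buffer.add(("empty room", step), interest=0.01)
buffer.add(("red ball appears", 500), interest=10.0)

# The single novel experience dominates the replayed batch, even though
# it is outnumbered 500 to 1 in the buffer.
batch = buffer.sample(32)
novel = sum(1 for event, _ in batch if event == "red ball appears")
print(f"{novel}/32 replayed memories involve the red ball")
```

Under uniform replay the red-ball memory would be drawn only about once in 500 samples; the weighting is what lets the agent's model "dwell" on the novelty, mirroring the article's description.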
AI generated images of a 'typical' home in each state, 30 biggest US cities - USA TODAY
Have you ever wondered how the perception of your state’s traditional architecture stacks up to other U.S. states? One home improvement company that wanted to envision what a "typical home" would look like in each state asked an AI generator to create images for each, plus the 30 largest cities. All Star Home, a roof, gutter and siding company in Raleigh, North Carolina, commissioned Midjourney for renderings to “test” the AI generator and “see what it would envision,” the company said in a statement released with the images. The company also pulled the median home value in each state or city to “provide a lens into what these homes may potentially cost if they were for sale in your neighborhood.” To prompt Midjourney, which produces four images per prompt, All Star Home plugged in the location along with phrases like “photorealistic,” “life-like” and “sunny day.” “While each home’s style was slightly different, most homes were spacious with two stories and large, landscaped yards, painting the picture of idealistic homes in each state,” the company said. All Star Home chose one image per state or city, narrowing them down by how realistic the image looked and how little clutter it contained. “Typically, at least one of the results would have a glitch that would not make sense, like a bush on the roof or dimensions that did not match the front of the home,” the company said. “We consistently saw cars being added to the images. This included cars parked in the front yard and cars that would be several decades old in 2023.”

AI generated images of a 'typical' home in the 10 largest US cities

Here's a list of the 10 most populated U.S. cities, based on the most recent Census data available (2021), and their corresponding "typical home" generated by AI.
New York, Chicago, San Antonio, Dallas, Houston, Phoenix, San Jose, Los Angeles, Philadelphia and San Diego. For more details on the median home value in each state or city, visit All Star Home's website.
Trump mocked for July 4 AI image: 'He'd sell us out faster than Benedict Arnold' - The Independent
Former president Donald Trump has been mocked after he posted an AI-generated image of himself in place of George Washington during the Revolutionary War. The twice-indicted and twice-impeached former president posted the image on Truth Social Tuesday evening as the United States celebrated its 247th anniversary of declaring independence from Great Britain. The image features Mr Trump in colonial army regalia supposedly in the place of Washington, who led the Continental Army during the Revolutionary War before he became the first president of the United States. But many on social media were not amused. “He would have sold us out faster than Benedict Arnold did,” Tim Heron tweeted, in reference to the major general who betrayed the revolutionary cause to support the British. “When Donald Trump was in the Revolutionary War, he manned the air, he rammed the ramparts, he took over the airports, he falsified business records and stole the classified documents,” Majid Padellan said. The former president pleaded not guilty to 37 charges pertaining to illegally holding classified materials, including national defence documents, at his Mar-a-Lago estate in Palm Beach, Florida and obstructing efforts to return them. Mr Trump maintained his innocence in a post on Tuesday as he also reposted numerous pro-Trump images and posts throughout the day meant to celebrate America’s independence. “As my Poll numbers go higher & higher, the Communists, Marxists, Fascists get more & more CRAZY with their ridiculous Indictments & Election Interference plans & plots, all controlled by an out of control, very corrupt, DOJ/FBI,” he posted. “They have WEAPONIZED Law Enforcement in America at a level not seen before. Deranged Jack Smith, who is a sick puppet for A.G. Garland & Crooked Joe Biden, should be DEFUNDED & put out to rest.
Republicans must get tough or the Dems will steal another Election. MAGA!”
Google confirms it’s training AI using scraped web data - The Verge
On Monday, Gizmodo spotted that Google updated its privacy policy to disclose that its various AI services, such as Bard and Cloud AI, may be trained on public data that the company has scraped from the web. “Our privacy policy has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate,” said Google spokesperson Christa Muldoon to The Verge. “This latest update simply clarifies that newer services like Bard are also included. We incorporate privacy principles and safeguards into the development of our AI technologies, in line with our AI Principles.” (Image caption: These are the most recent changes to Google’s privacy policy. The company is now openly admitting where your data is being used, at least. Image: Google) Following the update on July 1st, 2023, Google’s privacy policy now says that “Google uses information to improve our services and to develop new products, features, and technologies that benefit our users and the public” and that the company may “use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.” You can see from the policy’s revision history that the update provides some additional clarity as to the services that will be trained using the collected data. For example, the document now says that the information may be used for “AI Models” rather than “language models,” granting Google more freedom to train and build systems besides LLMs on your public data. And even that note is buried under an embedded link for “publicly accessible sources” underneath the policy’s “Your Local Information” tab that you have to click to open the relevant section. The updated policy specifies that “publicly available information” is used to train Google’s AI products but doesn’t say how (or if) the company will prevent copyrighted materials from being included in that data pool.
Many publicly accessible websites have policies in place that ban data collection or web scraping for the purpose of training large language models and other AI toolsets. It’ll be interesting to see how this approach plays out with various global regulations like GDPR that protect people against their data being misused without their express permission, too. A combination of these laws and increased market competition have made makers of popular generative AI systems like OpenAI’s GPT-4 extremely cagey about where they got the data used to train them and whether or not it includes social media posts or copyrighted works by human artists and authors. The matter of whether or not the fair use doctrine extends to this kind of application currently sits in a legal gray area. The uncertainty has sparked various lawsuits and pushed lawmakers in some nations to introduce stricter laws that are better equipped to regulate how AI companies collect and use their training data. It also raises questions regarding how this data is being processed to ensure it doesn’t contribute to dangerous failures within AI systems, with the people tasked with sorting through these vast pools of training data often subjected to long hours and extreme working conditions. Gannett, the largest newspaper publisher in the United States, is suing Google and its parent company, Alphabet, claiming that advancements in AI technology have helped the search giant to hold a monopoly over the digital ad market. Products like Google’s AI search beta have also been dubbed “plagiarism engines” and criticized for starving websites of traffic. Meanwhile, Twitter and Reddit — two social platforms that contain vast amounts of public information — have recently taken drastic measures to try and prevent other companies from freely harvesting their data. 
The API changes and limitations placed on the platforms have been met with backlash by their respective communities, as anti-scraping changes have negatively affected the core Twitter and Reddit user experiences.
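Many of the no-scraping policies mentioned above are published machine-readably in a site’s robots.txt file, which Python’s standard library can evaluate. A minimal sketch (the robots.txt rules and user-agent names here are hypothetical, not any real site’s policy):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a site that blocks an AI crawler
# while allowing everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is denied; an ordinary browser user agent is not.
print(parser.can_fetch("GPTBot", "https://example.com/article"))      # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article")) # True
```

Note that robots.txt is purely advisory; nothing technically stops a scraper that ignores it, which is part of why platforms have turned to API locks and rate limits instead.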
'Mission: Impossible 7' review: Tom Cruise fights AI in fun, far-fetched 'Dead Reckoning' - USA TODAY
If it’s not apparent that artificial intelligence is having the biggest summer ever, now it’s made an enemy of Tom Cruise. AI is everywhere right now in the real world, and a pesky fictitious digital villain proves formidable – and pretty far-fetched – for Cruise’s secret agent Ethan Hunt in the action thriller “Mission: Impossible – Dead Reckoning Part One” ( out of four; rated PG-13; in theaters July 12). Directed once again by Christopher McQuarrie, the seventh “M:I” is chock-full of gloriously bonkers stunt sequences, fresh and familiar faces alike, and Cruise running (usually literally) from one international locale to the next. Having a computer be the antagonistic heart of the film instead of a human baddie is a huge swing, though, and consequently this first of a two-part story line faces some narrative obstacles amid the usual face-swapping, double-dealing spycraft. When the world is in serious trouble, that’s when you call in the Impossible Mission Force − though Ethan continues to own his rogue status like a champ and as usual is wanted by various authorities, including his own. Still, his old boss Eugene Kittridge (a returning Henry Czerny) has a dangerous assignment for him: An evolving AI dubbed “The Entity” threatens global security, and Ethan needs to obtain two halves of a key that are integral to stopping this new menace. Luther (Ving Rhames) and Benji (Simon Pegg) are back as Ethan’s high-tech teammates, and “Dead Reckoning” reunites Ethan with love interest Ilsa (Rebecca Ferguson), a former British MI6 secret agent who’s the first stop on this densely plotted adventure. (There's an entertaining exposition dump early and it could use at least one more.) 
The race for the key more importantly introduces the enigmatic thief Grace (Hayley Atwell): She gives Ethan fits with her pickpocket and escape skills but ultimately they become an effective duo navigating a wild car chase through Rome in a tiny Fiat and a hellacious train trip on the Orient Express. 'Mission: Impossible': Tom Cruise races through Rome in 20 minutes of 'Dead Reckoning' footage McQuarrie has rounded up a talented coterie of complex female characters in “Reckoning”: While Atwell’s Grace steals much of the movie as a new pair of eyes seeing Ethan’s deadly spy world for the first time and Ferguson’s no-nonsense Ilsa is always a pleasure, Vanessa Kirby’s shady arms dealer White Widow makes a return appearance (after debuting in the sixth “Mission,” 2018’s “Fallout”) and “Guardians of the Galaxy” regular Pom Klementieff lets weapons do the talking as a new French assassin named Paris. (A little on the nose but it works.) Paris works for the Entity, as does a confidently sinister dude named Gabriel (Esai Morales) who’s connected to Ethan’s tragic past. (While one doesn’t need to be an “M:I” expert to enjoy “Dead Reckoning,” a rewatch of the original 1996 film is helpful beforehand.) However, the problem of having an AI supervillain in our connected world is it all seems too easy: Sounding like an angry Transformer, the Entity works hard to foil Ethan at various points yet this supposedly all-powerful thing also seems hamstrung when it shouldn't be. 'Impeccably made': 'Mission: Impossible 7' lauded for action-packed drama But you don’t come to “Mission: Impossible” movies for sensical plots − you come to watch Cruise cheat death in stunts that would make most normal people go, “Nah, I’m good.” One bit in particular has him riding a motorbike off an insanely high cliff leading into a mind-blowing BASE jump. 
The Roman car chase (with nods to “The Italian Job”) works better as it lets Cruise explore Ethan’s vulnerability and exasperation in a film that embraces the character’s humanity in the face of an existential computer threat. Robot overlords? Not on Tom's watch! If you choose to accept this “Mission” – and what action-movie fan or Cruise nerd wouldn’t, really – it’s the first half of a man vs. machine epic that doesn’t skimp in the thrills department. Just don’t think too hard about it, though you’ll probably still give serious side-eye to your laptop.
As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It - The New York Times
Amazon, Box, Salesforce, Oracle and others have recently rolled out A.I.-related products to help workplaces become more efficient and productive. Credit: Madeline McMahon. July 5, 2023, updated 12:33 p.m. ET

Earlier this year, Mark Austin, the vice president of data science at AT&T, noticed that some of the company’s developers had started using the ChatGPT chatbot at work. When the developers got stuck, they asked ChatGPT to explain, fix or hone their code. It seemed to be a game changer, Mr. Austin said. But since ChatGPT is a publicly available tool, he wondered if it was secure for businesses to use.

So in January, AT&T tried a product from Microsoft called Azure OpenAI Service that lets businesses build their own A.I.-powered chatbots. AT&T used it to create a proprietary A.I. assistant, Ask AT&T, which helps its developers automate their coding process. AT&T’s customer service representatives also began using the chatbot to help summarize their calls, among other tasks. “Once they realize what it can do, they love it,” Mr. Austin said. Forms that once took hours to complete needed only two minutes with Ask AT&T, so employees could focus on more complicated tasks, he said, and developers who used the chatbot increased their productivity by 20 to 50 percent.

AT&T is one of many businesses eager to find ways to tap the power of generative artificial intelligence, the technology that powers chatbots and that has gripped Silicon Valley with excitement in recent months. Generative A.I. can produce its own text, photos and video in response to prompts, capabilities that can help automate tasks such as taking meeting minutes and cut down on paperwork. To meet this new demand, tech companies are racing to introduce products for businesses that incorporate generative A.I. Over the past three months, Amazon, Box and Cisco have unveiled plans for generative A.I.-powered products that produce code, analyze documents and summarize meetings.
Salesforce also recently rolled out generative A.I. products used in sales, marketing and its Slack messaging service, while Oracle announced a new A.I. feature for human resources teams. These companies are also investing more in A.I. development. In May, Oracle and Salesforce Ventures, the venture capital arm of Salesforce, invested in Cohere, a Toronto start-up focused on generative A.I. for business use. Oracle is also reselling Cohere’s technology.

“I think this is a complete breakthrough in enterprise software,” Aaron Levie, chief executive of Box, said of generative A.I. He called it “this incredibly exciting opportunity where, for the first time ever, you can actually start to understand what’s inside of your data in a way that wasn’t possible before.”

Many of these tech companies are following Microsoft, which has invested $13 billion in OpenAI, the maker of ChatGPT. In January, Microsoft made Azure OpenAI Service available to customers, who can then access OpenAI’s technology to build their own versions of ChatGPT. As of May, the service had 4,500 customers, said John Montgomery, a Microsoft corporate vice president.

For the most part, tech companies are now rolling out four kinds of generative A.I. products for businesses: features and services that generate code for software engineers, create new content such as sales emails and product descriptions for marketing teams, search company data to answer employee questions, and summarize meeting notes and lengthy documents. “It is going to be a tool that is used by people to accomplish what they are already doing,” said Bern Elliot, a vice president and analyst at the I.T.
research and consulting firm Gartner. But using generative A.I. in workplaces has risks. Chatbots can produce inaccuracies and misinformation, provide inappropriate responses and leak data. A.I. remains largely unregulated. In response to these issues, tech companies have taken some steps. To prevent data leakage and to enhance security, some have eng...
How A.I. Is Influencing Astrology - The New York Times
The machine stood beside a deli counter, towering over cardboard boxes piled near the entrance to the Iconic Magazines store in NoLIta. It had the stature of a standing washer-dryer, with black buttons, rows of blinking lights and gauges labeled with celestial bodies — “sun,” “moon,” and the eight planets — on the front of its white facade. “It could be something from NASA,” said Tim Wiedmann, a 27-year-old student from Germany who visited the store on a Wednesday night in June.

While Mr. Wiedmann stood in front of the machine, its front screen directed him to “ask the stars.” Using a knob, he cycled through some 100 questions. Among them: How do I get better at my job? Should I leave New York? Should I start a cult?

After choosing a question, Mr. Wiedmann entered his birth date, time and place. The screen flashed a message that read, in part: “All answers are based on astrological calculations.” The machine, using a built-in camera, took his picture. Moments later, it spat out a piece of paper containing his grainy portrait and an answer to his question. “It’s like someone is in there,” said Mr. Wiedmann, who was one of many who came to use the machine that night.

At times, lines started to snake through the store as people waited for a turn. A lot of visitors said they had heard about the machine on TikTok, including two 19-year-old students. “I asked for my red flags,” one of the students said of the question he chose, before the other student read the machine’s printed answer aloud. She said: “Your red flags include a tendency to set high expectations and a fear of conflict. Your Jupiter and Saturn placement suggests a need for perfectionism and a fear of rejection. By avoiding conflict, you may limit your potential for growth and meaningful connections. 
Remember, conflict is an inherent part of intimacy. Practice it with compassion and let go of unrealistic expectations.”

Image: People lined up outside the Iconic Magazines store on Mulberry Street to use the machine on Saturday, June 24. Credit: Amir Hamja/The New York Times

Like most people who used the machine that night, neither student initially knew that its answers were generated using artificial intelligence, including ChatGPT and GPT-3. The machine was developed by Co-Star, a technology company with a buzzy astrology app that uses A.I. to generate readings. It will be at Iconic Magazines for most of the summer and then move to Los Angeles later this year.

Astrologers for centuries have referred to the movement and positions of planets and other celestial bodies to inform readings and horoscopes. Co-Star follows similar methods, but its daily readings are prepared by A.I. that pulls text from a database written for the app by a team of astrologers and poets. The machine, which was free to use, was created to promote Co-Star’s new in-app service, Embrace the Void, which starts at about $1. The service functions similarly to the machine: users can ask open-ended questions that are not normally addressed in the app’s astrological readings and receive answers generated by A.I. using Co-Star’s database of prepared text.

Image: From left, Tatiana Tigges, Danny Arroyo and Ella Boyle checked out the machine on June 24. Credit: Amir Hamja/The New York Times

Banu Guler, 35, the founder of Co-Star, named a range of aesthetic inspirations for the machine, including Soviet-era computers, devices used by NASA, photo booths and vending and washing machines. 
It was also influenced by the Zoltar fortunetelling machines that were once common attractions at boardwalks and arcades, she said. “The best part is you get your little reading,” Ms. Guler said of the Zoltar machines. “And then you put your reading on your fridge, or in your book, or in your journal, or it just loiters at the bottom of your bag for months, if you’re me.” “Even though you know it’s garbage, it’s special garbage,” she added, flashing a smirk. Before starting Co-St...
Konux gears up to scale its AI + IoT play for optimizing the railways - TechCrunch
Unless you’ve been on an extended digital detox this year, you can’t have missed how a certain flavor of AI hype has been accelerating down the tracks like a runaway train. But far from the viral buzz swirling around generative AI tools like ChatGPT and DALL-E, Konux, a Munich-based deep tech AI scale-up, has been quietly trucking along applying machine learning to transform transportation on the railways. It’s building out a SaaS business, powered by proprietary sensing hardware and AI, whose predictive maintenance offering is upgrading railway infrastructure, one switch at a time.
Its mission is to drive digitization and transformative change atop what remains the most sustainable mass transit option humanity has — rail travel — using AI plus IoT (Internet of Things) to add intelligence to fixed rails by capturing real-time data on what’s happening on and to the railway network.
It’s doing this at a time when rising demand for train travel as consumers look for ways to reduce their carbon footprints is fuelling a push by governments and railway operators to digitize networks and transform established ways of working with the help of new technologies. That’s creating opportunities for startups to roll up their sleeves and get their hands dirty, although Konux reckons it was first to the punch. (And no surprise it was founded in Germany where the question of whether trains are running well and on time is a perennial political issue.)
“The core problem is something that actually is a dirty problem,” says Konux CEO Adam Bonnifield, discussing what makes this AI business different from the ones hogging most of the global limelight right now. “It’s not one of these clean, AI model-building totally digital problems. It’s the dirty problem of getting sensors to survive the environment, extracting the data, making sense of it, fitting it within the business problems, with the customer, and then bringing along the organisation on a journey through a bunch of organisational changes.
“These are the problems that make your change impactful and leave a legacy behind, I would say.”
Unpacking Konux’s business a little more, it’s using deep tech methods and stress-tested connected hardware to gain visibility into the loads and forces railway lines are accommodating day in, day out — measuring vibration through the tracks to pick up anomalies that may signify failures incoming — and then presenting its probabilistic analysis of what’s going to happen to the infrastructure over the next few months. Its AI-driven predictions were developed to a 90% accuracy standard, per Bonnifield.
The customers for its technology, railway operators, receive predictive maintenance insights delivered in an accessible software interface that’s designed to take the strain out of running vital infrastructure. No more flying blind with scheduled guesswork; track-mounted sensors and machine learning models aim to empower operators to make smarter calls around maintenance, underpinned by what are now “billions” of train traces recorded over a decade or so of Konux’s team attacking this data problem.
At the passenger end of the line (assuming successful implementation of the tech and use of the tools), this application of AI should manifest as reduced service downtime and fewer delays. So forget sloppy general purpose AI; here’s a data-play on rails which signals how machine learning that’s tightly targeted at a specific problem can be the truly impressive feat of engineering.
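Konux hasn’t published its models, but the core idea of flagging vibration anomalies can be illustrated with a simple z-score check. This is a hedged sketch of the general technique, not Konux’s actual method; the readings and threshold are invented:

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # all readings identical: nothing to flag
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]

# Simulated track-vibration amplitudes; the spike at index 5 is the
# kind of outlier that might signify a failing switch component.
vibration = [1.0, 1.1, 0.9, 1.05, 1.0, 9.5, 1.02, 0.98]
print(flag_anomalies(vibration))  # [5]
```

A production system would of course use learned models over the “billions” of train traces the article mentions rather than a fixed threshold, but the input/output shape is the same: a raw sensor series in, maintenance flags out.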
In addition to predictive maintenance, Konux’s AI + IoT approach supports rail operators with further business intelligence around network traffic and usage, plus — more recently — support with scheduling. Currently it offers three products: the aforementioned Konux Switch (predictive maintenance); Konux Network (usage monitoring and inspection planning); and Konux Traffic (smarter timetabling).
The idea is to leverage AI and IoT to power data-driven decisions that can drive optimization around other aspects of rail operation, expanding out from Konux’s first focus on tracking infrastructure stress at key points on the network. (Switches being both essential for routing train traffic around a network and vulnerable to failure, given they are mechanism...
Generative AI in Games Will Create a Copyright Crisis - WIRED
AI Dungeon, a text-based fantasy simulation that runs on OpenAI’s GPT-3, has been churning out weird tales since May 2019. Reminiscent of early text adventure games like Colossal Cave Adventure, you get to choose from a roster of formulaic settings—fantasy, mystery, apocalyptic, cyberpunk, zombies—before picking a character class and name, and generating a story. Here was mine: “You are Mr. Magoo, a survivor trying to survive in a post-apocalyptic world by scavenging among the ruins of what is left. You have a backpack and a canteen. You haven’t eaten in two days, so you’re desperately searching for food.” So began Magoo’s 300-ish-word tale of woe in which, “driven half-mad” by starvation, he happens upon “a man dressed in white.” (Jesus? Gordon Ramsay?) Offering him a greeting kiss, Magoo is stabbed in the neck. As lame as this story is, it hints at a knotty copyright issue the games industry is only just beginning to unravel. I’ve created a story using my imagination—but to do that I’ve used an AI helper. So who wrote the tale? And who gets paid for the work? AI Dungeon was created by Nick Walton, a former researcher at a deep learning lab at Brigham Young University in Utah who is now the CEO of Latitude, a company that bills itself as “the future of AI-generated games.” AI Dungeon is certainly not a mainstream title, though it has still attracted millions of players. As Magoo’s tale shows, the player propels the story with action, dialogue, and descriptions; AI Dungeon reacts with text, like a dungeon master—or a kind of fantasy improv. In several years of experimentation with the tool, people have generated far more compelling D&D-esque narratives than mine, as well as videos like “I broke the AI in AI Dungeon with my horrible writing.” It's also conjured controversy, notably when users began prompting it to make sexually explicit content involving children. 
And as AI Dungeon—and tools like it—evolve, they will raise more difficult questions about authorship, ownership, and copyright. Many games give you toolsets to create worlds. Classic series like Halo or Age of Empires include sophisticated map makers; Minecraft precipitated an open-ended, imaginative form of gameplay that The Legend of Zelda: Tears of the Kingdom’s Fuse and Ultrahand capabilities draw clear inspiration from; others, like Dreams or Roblox, are less games than platforms for players to make more games. Historically, claims of ownership to in-game creations or user-generated creations (IGCs or UGCs) have been rendered moot by “take it or leave it” end-user license agreements—the dreaded EULAs that nobody reads. Generally, this means players surrender any ownership of their creations by switching on the game. (Minecraft is a rare exception here. Its EULA has long afforded players ownership of their IGCs, with relatively few community freakouts.) AI adds new complexities. Laws in both the US and the UK stipulate that, when it comes to copyright, only humans can claim authorship. So for a game like AI Dungeon, where the platform allows a player to, essentially, “write” a narrative with the help of a chatbot, claims of ownership can get murky: Who owns the output, the company that developed the AI, or the user?
AWS exec downplays existential threat of AI, calls it a 'mathematical parlor trick' - VentureBeat
July 4, 2023, 11:42 AM

While there are some big names in the technology world that are worried about a potential existential threat posed by artificial intelligence (AI), Matt Wood, VP of product at AWS, is not one of them. Wood has long been a standard-bearer for machine learning (ML) at AWS and is a fixture at the company’s events. For the past 13 years, he has been one of the leading voices at AWS on AI/ML, speaking about the technology and Amazon’s research and service advances at nearly every AWS re:Invent. AWS had been working on AI long before the current round of generative AI hype, with its SageMaker product suite leading the charge for the last six years.

Make no mistake about it, though: AWS has joined the generative AI era like everyone else. Back on April 13, AWS announced Amazon Bedrock, a set of generative AI tools that can help organizations build, train, fine-tune and deploy large language models (LLMs). There is no doubt that there is great power behind generative AI. It can be a disruptive force for enterprise and society alike. That great power has led some experts to warn that AI represents an “existential threat” to humanity. But in an interview with VentureBeat, Wood handily dismissed those fears, succinctly explaining how AI actually works and what AWS is doing with it.
“What we’ve got here is a mathematical parlor trick, which is capable of presenting, generating and synthesizing information in ways which will help humans make better decisions and to be able to operate more efficiently,” said Wood.

The transformative power of generative AI

Rather than representing an existential threat, Wood emphasized the powerful potential AI has for helping businesses of all sizes. It’s a power borne out by the large number of AWS customers that are already using the company’s AI/ML services. “We’ve got over 100,000 customers today that use AWS for their ML efforts, and many of those have standardized on SageMaker to build, train and deploy their own models,” said Wood.

Generative AI takes AI/ML to a different level and has generated a lot of excitement and interest among the AWS user base. With the advent of transformer models, Wood said it’s now possible to take very complicated inputs in natural language and map them to complicated outputs for a variety of tasks such as text generation, summarization and image creation. “I have not seen this level of engagement and excitement from customers, probably since the very, very early days of cloud computing,” said Wood.

Beyond the ability to generate text and images, Wood sees many enterprise use cases for generative AI. At the foundation of all LLMs are numerical vector embeddings. He explained that embeddings enable an organization to use numerical representations of information to drive better experiences across a number of use cases, including search and personalization. “You can use those numerical representations to do things like semantic scoring and ranking,” said Wood. 
“So, if you’ve got a search engine or any sort of internal method that needs to collect and rank a set of things, LLMs can really make a difference in terms of how you summarize or personalize something.”

Bedrock is the AWS foundation for generative AI

The Amazon Bedrock service is an attempt to make it easier for AWS users to benefit from the power of multiple LLMs. Rather than just providing one LLM from a single vendor, Bedrock provides a set of options from AI21, Anthropic and Stability AI, as well as the Amazon Titan set of new models. “We don’t believe that there’s going to be one model to rule them all,” Wood said. “So we wanted to be able to provide model selection.” Beyond just providing model selection, Amazon Bedrock can also be used alongside LangChain, which enables organizations to use multiple LLMs at the same time. W...
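The semantic scoring and ranking Wood describes boils down to comparing embedding vectors, most commonly by cosine similarity. Here is a minimal sketch with made-up three-dimensional embeddings; real embedding models return vectors with hundreds or thousands of dimensions, and the document names and query are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def rank(query_vec, docs):
    """Order document names by semantic similarity to the query embedding."""
    return sorted(docs, key=lambda name: cosine(query_vec, docs[name]),
                  reverse=True)

# Hypothetical embeddings, as if returned by an embedding model.
docs = {
    "refund policy":  [0.9, 0.1, 0.8],
    "office hours":   [0.1, 0.9, 0.0],
    "billing portal": [0.7, 0.2, 0.6],
}
query = [1.0, 0.0, 1.0]  # e.g. the embedding of "how do I get my money back?"

print(rank(query, docs))  # ['refund policy', 'billing portal', 'office hours']
```

The same dot-product machinery underlies the search and personalization use cases mentioned above; only the source of the vectors changes.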
Full-Body AI Scans Could Be the Future of Preventive Medicine - CNET
When I told my doctor I had an opportunity to get a full-body MRI for work, he told me not to do it. He cited the general medical community's opinion that, for most adults, the benefits of full-body scans won't outweigh the risks of chasing down a (likely benign) finding with invasive follow-up procedures. Because while there's a slim chance the scan will lead to an early diagnosis of a serious health condition or cancer, the likelihood it'll find something a little off in your body is all but guaranteed. If they were a good idea, he said, doctors would be recommending them to all of their patients who could afford to get one. (Prices for a full-body scan, which looks for tumors and other abnormalities in all the major bodily systems, range from $1,350 to $2,500.)

Like any reporter who does things for the plot, I ignored the advice of the professional and got the scan anyway. Because right now, people who can afford it are going in for full-body MRIs at places like Prenuvo, where I went in New York right before Prenuvo officially opened its eighth branch in the US this spring. Ezra, another private company offering full-body scans, announced last month that its newest scan, called the "full body flash," is now available; it uses artificial intelligence cleared by the US Food and Drug Administration to clean up MRI images. Dr. Daniel Sodickson, head radiologist with Ezra and chief of innovation in radiology at the NYU Grossman School of Medicine, said that Ezra's AI is used to clean up images similar to how you'd "wipe clean the shower door." "Wipe away the fog," Sodickson said of how the technology assists radiologists. "Basically, remove any obscuring haze so we can see crisply and clearly." 
With or without AI, full-body scans for otherwise healthy adults challenge guidance from traditional medical groups, which rely on bodies of research and careful "risk versus benefits" calculations before they send out recommendations to the masses. Prenuvo and Ezra say they can catch early-stage cancer and other health conditions that often lurk for years before being caught in a doctor's office. But at a population level, the less common heroic story of something sinister being discovered, like pancreatic cancer, doesn't make up for the laundry list of follow-up tests and potential side effects that full-body scans may lead to. At least, that's how the current thinking by medical organizations goes.

"There is no documented evidence that total body screening is cost-efficient or effective in prolonging life," the American College of Radiology, a medical society that makes recommendations for physicians using imaging tests, said in a statement. Though it will continue to monitor new science, the college does not currently support the use of full-body screenings for patients without "clinical symptoms, risk factors or a family history suggesting underlying disease or serious injury."

On the other hand, the companies running full-body scans, with plans to increase the use of AI in order to maximize results, could usher in a revolutionary shift in medical technology, especially for the four in 10 people who will develop cancer in their lifetime. If these scans can reach the whole population -- not just the select few who can afford one -- and come with a standardized way for doctors to interpret results, full-body MRIs may have the potential to transform primary care and make late-stage diagnoses a preventable tragedy. 
Prenuvo The Prenuvo scan Prenuvo is a full-body MRI scan -- it stands for magnetic resonance imaging -- which uses magnetic fields and radio waves to look at every bodily system and virtually all of your organs from head to toe. MRIs have traditionally been considered the "gold standard" for looking at soft tissue, the brain and spinal cord and diagnosing health conditions like aneurysms, strokes, spinal cord disorders, brain injuries and more. Compared to CT scans and X-rays, which use very small doses of ionizing radiation to capture images inside the body, MRIs don't use any radiation and therefore don't present the (very small) risk of repeated radiation exposure in a medical setting. Prenuvo advertises the ability of its whole-body MRI to detect over 500 different...
Google Says It'll Scrape Everything You Post Online for AI - Gizmodo
Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out new ways your online musings might be used for the tech giant’s AI tools. Previously, Google said the data would be used “for language models,” rather than “AI models,” and where the older policy just mentioned Google Translate, Bard and Cloud AI now make an appearance.

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

The practice raises new and interesting privacy questions. People generally understand that public posts are public. But today, you need a new mental model of what it means to write something online. It’s no longer a question of who can see the information, but how it could be used. There’s a good chance that Bard and ChatGPT ingested your long-forgotten blog posts or 15-year-old restaurant reviews. 
As you read this, the chatbots could be regurgitating some homunculoid version of your words in ways that are impossible to predict and difficult to understand.

One of the less obvious complications of the post-ChatGPT world is the question of where data-hungry chatbots sourced their information. Companies including Google and OpenAI scraped vast portions of the internet to fuel their robot habits. It's not at all clear that this is legal, and the next few years will see the courts wrestle with copyright questions that would have seemed like science fiction a few years ago. In the meantime, the phenomenon already affects consumers in some unexpected ways.

The overlords at Twitter and Reddit feel particularly aggrieved about the AI issue, and both made controversial changes to lock down their platforms. Both companies turned off free access to their APIs, which had allowed anyone who pleased to download large quantities of posts. Ostensibly, that's meant to protect the social media sites from other companies harvesting their intellectual property, but it has had other consequences. Twitter's and Reddit's API changes broke third-party tools that many people used to access those sites. For a minute, it even seemed Twitter was going to force public entities such as weather, transit, and emergency services to pay if they wanted to tweet, a move the company walked back after a hailstorm of criticism.

Lately, web scraping is Elon Musk's favorite bogeyman. Musk has blamed a number of recent Twitter disasters on the company's need to stop others from pulling data off his site, even when the issues seem unrelated. Over the weekend, Twitter limited the number of tweets users were allowed to look at per day, rendering the service almost unusable. Musk said it was a necessary response to "data scraping" and "system manipulation." However, most IT experts agreed the rate limiting was more likely a crisis response to technical problems born of mismanagement, incompetence, or both.
Twitter did not answer Gizmodo's questions on the subject. On Reddit, the effect of the API changes was particularly noisy. Reddit is essentially run by unpaid moderators who keep the forums healthy. Mods of large subreddits tend to rely on third-party tools for their work, tools built on the now-inaccessible APIs. That sparked a mass protest, in which moderators essentially shut Reddit down. Though the co...