Netflix touts $900k AI jobs amid Hollywood strikes - BBC
Netflix has triggered an angry response from striking Hollywood actors and writers after posting a job advert for an artificial intelligence (AI) expert. The new position would join its Machine Learning Platform team, which drives the Netflix algorithm helping viewers pick new programmes to watch. It pays up to $900,000 (£700,000) per year, fuelling further outrage. Hollywood unions are striking over concerns about how AI affects the entertainment industry and pay. The job listing, which was first reported by The Intercept on Tuesday, is one of several on the Netflix jobs page that call for applicants with experience in machine learning (ML) and AI. It is unclear from the expansive job advert whether the role will advise on content - the TV programmes and films that Netflix chooses to invest in. Another open listing for a product manager on the Machine Learning Platform team says the future employee will "collect feedback and understand user needs" and ultimately help with investment decisions. The description appears to suggest that the role will include using AI to assess funding needs for different programmes. This is a key concern of the union representing actors, Sag-Aftra, which has spoken of its fears that algorithms have too much power. Sag-Aftra's Fran Drescher told Time magazine the singular success of any film or television programme is now much less important than when broadcast television was dominant. "Algorithms dictate how many episodes a season needs to be before you reach a plateau of new subscribers and how many seasons a series needs to be on," she claimed. "That reduces the amount of episodes per season to between six and 10, and it reduces the amount of seasons to three or four. You can't live on that. "We're being systematically squeezed out of our livelihood by a business model that was foisted upon us, that has created a myriad of problems for everyone up and down the ladder." The writers' guild, WGA, has proposed a system that regulates the use of AI in the writing process and prevents it being used as source material. Netflix declined to comment about the job listings, but has previously said AI will not replace the creative process. "The best stories are original, insightful and often come from people's own experiences," Netflix has said. The news of the most recent AI-based job listing was condemned by some striking actors, who must earn $26,470 a year before being eligible for health insurance benefits. "So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for Sag-Aftra health insurance is just ghoulish," actor Rob Delaney told The Intercept. Javier Grillo-Marxuach, who is best known for the series Lost, accused Netflix of "pleading poverty while recruiting VERY (more than I've ever made in a year BY FAR) well-paid generals for your soulless army of silicon plagiarists". Earlier this week, Netflix announced the launch of a new app - My Netflix - which the company calls "a one-stop shop tailored to you with easy shortcuts to help you choose what you want to watch".
Google Shares How It Treats .AI Domains For SEO - Search Engine Journal
In a Google SEO Office hours session, Google’s Gary Illyes answered the question about whether there was a downside to using the .AI domain since it’s associated with the Caribbean island of Anguilla. His answer was kind of surprising.
The Difference Between a gTLD and a ccTLD
There are two kinds of domain names: gTLDs and ccTLDs.
gTLD
A gTLD is a Generic Top Level Domain. These kinds of domains are not associated with any country and can be used worldwide.
Typical gTLDs are .com, .net, .org, .biz, .xyz and so on.
ccTLD
A ccTLD is a TLD (top level domain) that is associated with a specific country.
Examples of ccTLDs are .uk and .in, which are associated with the countries of the United Kingdom and India.
Google uses ccTLDs to localize the websites that use them with the countries those TLDs are associated with.
If a website uses the .in TLD, Google knows that it is relevant for people in India.
The .in TLD helps Google to determine which country that domain name is relevant to.
This aligns with how people of the world generally expect the Internet to work.
.AI is a ccTLD
Some ccTLDs have a meaning that goes beyond the country they are associated with.
For example, the tiny island of Tuvalu has a ccTLD of .TV.
The .TV ccTLD is useful for websites that want to be branded as (or relevant for) having to do with television.
Similarly, .AI is a ccTLD that is associated with the island of Anguilla, located in the Caribbean.
Is it Okay for a Global Company to Use .AI?
The person asking the question wanted to know if it’s okay to use a ccTLD like .AI.
The concern is whether using the .AI ccTLD might unintentionally localize the website to the island of Anguilla and make it harder for it to rank in other countries like the United States or anywhere else. This is the question that was asked: “Should a global company use the .ai domain as their gTLD or is it considered by Google as a ccTLD for the country of Anguilla?” Google’s Gary Illyes offers what might seem like a surprising answer. Gary’s response: “As of early June, 2023, we treat .ai as a gTLD in Google Search, so yeah, you can use it for your global presence!” I say it’s a surprising answer because .AI is a popular TLD that is used by many companies.
It may have been commonly assumed that .AI domains were already treated by Google as gTLDs instead of as ccTLDs, but that wasn't always the case.
Google didn’t make the change to treat .AI as a gTLD until June 2023.
Gary's answer calls attention to the importance of verifying whether the domain extension chosen for a website is treated as a ccTLD or a gTLD, because that could make a difference in the website's ability to rank worldwide.
Using a top level domain that Google treats as localized to a specific country could negatively affect the website’s ability to rank outside of the one country the ccTLD is associated with.
List of ccTLDs that Google Treats Like gTLDs
Google publishes a list of ccTLDs that it treats as generic top-level domains.
The list shows that ccTLDs like .eu and .asia are treated like gTLDs. Other international domains that are treated like gTLDs are .ad, .co, .fm, .tv and of course .ai.
Those aren't the only ccTLDs on the list; many other ccTLDs are also treated by Google as if they were gTLDs. Watch the Google SEO Office Hours at the 14:39 minute mark.
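For readers who want to sanity-check their own domain, here is a minimal Python sketch that classifies a TLD against a small, illustrative subset of the extensions named in this article; the authoritative and complete list lives in Google's Search Central documentation, so treat the sets below as examples rather than the real thing.

```python
# Minimal sketch: label a domain's TLD against a partial, illustrative set of
# ccTLDs that Google has said it treats as generic. Only the extensions named
# in the article are included; consult Google's documentation for the full list.
GENERIC_TREATED_CCTLDS = {"ai", "eu", "asia", "ad", "co", "fm", "tv"}
TRUE_GTLDS = {"com", "net", "org", "biz", "xyz"}

def classify_tld(domain: str) -> str:
    """Return a rough label for how the TLD is likely treated for geo-targeting."""
    tld = domain.lower().rstrip(".").rsplit(".", 1)[-1]
    if tld in TRUE_GTLDS:
        return "gTLD (not country-associated)"
    if tld in GENERIC_TREATED_CCTLDS:
        return "ccTLD treated as generic by Google"
    return "ccTLD likely used for country geo-targeting"

if __name__ == "__main__":
    for d in ["example.ai", "example.in", "example.com"]:
        print(d, "->", classify_tld(d))
```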
Best Artificial Intelligence AI-Based Art Generators in 2023 - MarkTechPost
Dream by Wombo
Dream By Wombo, in contrast to other AI picture generators, is capable of continuous image synthesis at no additional expense. This AI generator is a fantastic option if you're on a tight budget or just starting out. Dream By Wombo is also very user-friendly. You must sign up, compose some content, and choose an image style before you can begin. If you don't like the image generated for you, you can always start anew.
DALL-E 2
OpenAI released DALL-E 2 in 2022 as a new version of its image-generating AI model DALL-E. DALL-E 2 is designed, like its forerunner, to generate professional-standard images in response to textual input. DALL-E 2 improves upon its predecessor in several ways, including its capacity to generate higher-quality, more nuanced images. DALL-E 2 can process more nuanced textual prompts and respond with various visual representations. In addition, it can be adapted to certain uses or fields, such as generating images of specific subjects or locations.
Midjourney
Midjourney is arguably the best artificial intelligence (AI) picture generator because of its wide range of capabilities and extremely quick synthesis speed. Send a text command to the Midjourney bot, which will take care of the rest. Many creative professionals use Midjourney to generate the images that serve as inspiration for their work. The artificial intelligence piece "Théâtre d'Opéra Spatial," made with Midjourney, beat out 20 other entrants to take first place in the digital art category at the Colorado State Fair. However, for the time being, Midjourney can be found on a Discord server. You must join the Midjourney Discord server and use the bot's commands to make images. However, that's easy, and you can start working right away.
Dream Studio (Stable Diffusion)
Dream Studio, built on Stable Diffusion, is a popular text-to-image AI generator. The underlying model is free and public and can instantly visualize text suggestions. Photographs, illustrations, 3D models, and even logos are all within Dream Studio's purview of possible creation. Photorealistic artwork can be made by combining a user-uploaded image with a written description.
Craiyon
With a website and an app available on the Google Play Store for Android devices, Craiyon is a fascinating artificial intelligence picture generator. The free tool, formerly known as DALL-E Mini, performs the same basic function as the commercial DALL-E. You can make decent pictures based on textual explanations. Unfortunately, Craiyon's server instability often results in lengthy delays during the creation process and unfortunate design flaws. The images may be used for personal and commercial purposes, provided appropriate credit is given to Craiyon and the Terms of Use are followed.
FotorAI Image Generator
Fotor provides the FotorAI Image Generator, which uses AI technology to generate original images. Users can input a sample image, and from that, a whole new image will be generated. This feature utilizes a Generative Adversarial Network (GAN) to create reportedly high-resolution, photorealistic images. It has many applications, including creating original artwork for digital media. You can only get it in the paid edition of Fotor.
Nightcafe
Nightcafe is one of the most advanced artificial intelligence text-to-image generators available today. You can make unique graphics that capture your intent with only the most fundamental English phrases. Nightcafe additionally offers a wide variety of creative presets and styles that can be used to make original digital works of art. Neural style transfer can turn ordinary photos into works of art. Nightcafe's intuitive interface makes it suitable for novice users. The site's aesthetic simplicity and ease of use allow anyone to edit and enhance photographs with a few mouse clicks. Any content you make will be safely stored in your account without needing you to back it up elsewhere.
IMGCreator.ai
IMGCreator.ai is an AI tool that can create any picture from text. It provides a wealth of choices, from lifelike images for a blog post to professionally crafted illustrations or animation. If you want a better result from your photo, give as much detail as possible in your text query. You may adjust the canvas size a...
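For readers who want to experiment beyond the hosted services, the Stable Diffusion weights that power Dream Studio are openly available, and a rough sketch of the underlying text-to-image workflow looks like the following. This uses Hugging Face's diffusers library; the checkpoint name and generation settings are assumptions chosen for illustration, and none of the products above necessarily work this way internally.

```python
# Minimal text-to-image sketch using open Stable Diffusion weights via the
# diffusers library. Illustrative only; the checkpoint and settings are assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; change to "cpu" (and drop float16) if none

prompt = "a lighthouse on a cliff at sunset, oil painting style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```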
6 Ways Small Businesses Can Use AI More - Entrepreneur
Opinions expressed by Entrepreneur contributors are their own. One of the most pivotal innovations that artificial intelligence offers small businesses is the ability to change the rules of engagement — a more level playing field. And its adoption in this size category has been stunning: According to Unbounce's recent Break Free: The State of AI Marketing for Small Business report, approximately 30% of small and medium-sized businesses are leveraging some form of artificial intelligence. One overriding challenge, however, is that implementing the right tools is only half the battle: To be successful in applying this new technology, it's critical to equip employees with the guidelines and skills to make it both effective and profitable. Some pathways for achieving that: 1. Set clear team expectations As with any new technology, AI presents both advantages and potential risks, so it's critical to ensure that proper guardrails are in place to protect you, your company and its customers. This includes a clear and accessible policy, which should outline how AI can and should be used within an organization, which platforms are approved for its use and who is authorized to leverage it. This policy can and will evolve: Feel free to start simple and expand it as you implement more AI tools into existing processes. 2. Training, training, training While younger generations may be more intuitive when it comes to technology, this isn't the case for all employees. Before rolling out AI tools, it's important to provide a general overview of what it is and how it works. This training should include common terminology, understanding the differences among types (such as machine learning, deep learning and robotics), data cybersecurity, ethics and the challenges associated with AI biases. In addition, a team needs to understand this tech's current limitations. Too many people are jumping in blindly without grasping flaws or shortcomings that the human user needs to accommodate or otherwise address. For example, many platforms lack contextual understanding or the ability to incorporate human emotion, which can cause them to misinterpret input. Related: AI Isn't Evil — But Entrepreneurs Need to Keep Ethics in Mind As They Implement It 3. Enhance data and analytic skills One of the key benefits of AI is its ability to process, analyze and summarize large amounts of raw data, but the last thing you want is a team blindly inputting information and simply trusting the resulting analyses. Instead, provide additional training and education in the areas of data analysis, interpretation and visualization. The result will be better leveraging AI's abilities and catching errors that could cause poor results. 4. Promote strong AI communication skills AI is a technology designed in part to respond to inputs in the most efficient way. But, as with so many systems, garbage in will ultimately lead to garbage out. When a human manager asks a human worker to perform a task, there is a lot of information shared apart from just written or verbal instructions, such as body language, contextual phrasing and tone of voice. Unlike humans, AI is limited to a certain range of input types like text and images. So, businesses need to make sure their teams understand how to guide AI and ensure the technology clearly understands instructions before performing tasks. For this reason, employees need to learn to "speak AI" to get the most out of it. 
A question written in the wrong way could lead to an incorrect or poor response, so developing the ability to write high-quality AI prompts and queries will set users apart. This process also includes asking the right follow-up questions, since AI doesn't always get it right the first time. Related: 6 Ways Small Business Owners Can Use ChatGPT to Eliminate Hours of Work 5. Encourage new-tool use and experimentation AI is evolving at breakneck speed. New tools and platforms are emerging daily that can quickly cause a small business to lose its competitive edge. For this reason, encourage and incentivize team members to be on the lookout for new tools. For example, if your small business already uses a chatbot for customer service, its marketing team might consider also looking into image-based AI to help design creative graphics. To achieve all the goals listed...
Generative AI and Web3: Hyped nonsense or a match made in tech heaven - VentureBeat
July 22, 2023 8:20 AM Did I write this, or was it ChatGPT? It's hard to tell, isn't it? For the sake of my editors, I will follow that quickly with: I wrote this article (I swear). But the point is that it's worth exploring generative artificial intelligence's limitations and areas of utility for developers and users. Both are revealing. The same is true for Web3 and blockchain. While we're already seeing the practical applications of Web3 and generative AI play out in tech platforms, online interactions, scripts, games and social media apps, we're also seeing a replay of the responsible AI and blockchain 1.0 hype cycles of the mid-2010s. "We need a set of principles or ethics to guide innovation." "We need more regulation." "We need less regulation." "There are bad actors poisoning the well for the rest of us." "We need heroes to save us from AI and/or blockchain." "Technology is too sentient." "Technology is too limited." "There is no enterprise-level application." "There are countless enterprise-level applications." If you exclusively read the headlines, you will come out the other side with the conclusion that the combo of generative AI and blockchain will either save the world or destroy it.
All over again
We've seen this play (and every act and intermission) before with the hype cycles of both responsible AI and blockchain. The only difference this time is that the articles we're reading about ChatGPT's implications may, in fact, have been written by ChatGPT. And the term blockchain has a bit more heft behind it thanks to investment from Web2 giants like Google Cloud, Mastercard and Starbucks. That said, it's notable that OpenAI's leadership recently called for an international regulatory body akin to the International Atomic Energy Agency (IAEA) to regulate and, when necessary, rein in AI innovation. The proactive move illuminates an awareness of both AI's massive potential and potentially society-crumbling pitfalls. It also conveys that the technology itself is still in test mode. The other significant subtext: Public sector regulation at the federal and sub-federal levels commonly limits innovation. As with Web3, and whether or not regulatory action takes place, responsibility needs to be at the core of generative AI innovation and adoption. As the technology evolves rapidly, it's important for vendors and platforms to assess every potential use case to ensure responsible experimentation and adoption. And, as OpenAI's Sam Altman and Google's Sundar Pichai notably point out, working with the public sector to evolve regulation is a significant part of that equation. It's also important to surface limitations, transparently report on them, and provide guardrails if or when issues become apparent. While AI and blockchain have both been around for decades, the impact of AI, in particular, is now visible with ChatGPT, Bard and the entire field of generative AI players. Together with Web3's decentralized power, we're about to witness an explosion of practical applications that build on progress automating interactions and advancing Web3 in more visible ways.
From a user-centric perspective (and whether we know it or not), generative AI and blockchain are both already transforming how people interact in the real world and online. Solana recently made it official with a ChatGPT integration. And exchange Bitget backed away from theirs. Promising or puzzling, every signal indicates that it remains to be seen where the technologies best intersect in the name of user experience and user-centric innovation. From where I sit as the head of a layer1 blockchain built for scale and interoperability, the question becomes: How should AI and blockchain join forces in pursuit of Web3’s own ChatGPT moment of mainstream adoption? Tools like ChatGPT and Bard will accelerate the next major waves of innovation on Web2 and Web3. The convergence of generative AI and Web3 will be like the pairing of peanut butter and jelly on fresh bread — but...
Sophisticated BundleBot Malware Disguised as Google AI Chatbot and Utilities - The Hacker News
Jul 21, 2023 THN Cyber Threat / Malware A new malware strain known as BundleBot has been stealthily operating under the radar by taking advantage of .NET single-file deployment techniques, enabling threat actors to capture sensitive information from compromised hosts.
"BundleBot is abusing the dotnet bundle (single-file), self-contained format that results in very low or no static detection at all," Check Point said in a report published this week, adding it is "commonly distributed via Facebook Ads and compromised accounts leading to websites masquerading as regular program utilities, AI tools, and games."
Some of these websites aim to mimic Google Bard, the company's conversational generative artificial intelligence chatbot, enticing victims into downloading a bogus RAR archive ("Google_AI.rar") hosted on legitimate cloud storage services such as Dropbox.
The archive file, when unpacked, contains an executable file ("GoogleAI.exe"), a .NET single-file, self-contained application that, in turn, incorporates a DLL file ("GoogleAI.dll") whose responsibility is to fetch a password-protected ZIP archive from Google Drive.
The extracted content of the ZIP file ("ADSNEW-1.0.0.3.zip") is another .NET single-file, self-contained application ("RiotClientServices.exe") that incorporates the BundleBot payload ("RiotClientServices.dll") and a command-and-control (C2) packet data serializer ("LirarySharing.dll"). "The assembly RiotClientServices.dll is a custom, new stealer/bot that uses the library LirarySharing.dll to process and serialize the packet data that are being sent to C2 as a part of the bot communication," the Israeli cybersecurity company said.
The binary artifacts employ custom-made obfuscation and junk code in a bid to resist analysis, and come with capabilities to siphon data from web browsers, capture screenshots, grab Discord tokens, information from Telegram, and Facebook account details.
Check Point said it also detected a second BundleBot sample that's virtually identical in all aspects barring the use of HTTPS to exfiltrate the information to a remote server in the form of a ZIP archive.
"The delivering method via Facebook Ads and compromised accounts is something that has been abused by threat actors for a while, still combining it with one of the capabilities of the revealed malware (to steal a victim's Facebook account information) could serve as a tricky self-feeding routine," the company noted. The development comes as Malwarebytes uncovered a new campaign that employs sponsored posts and compromised verified accounts that impersonate Facebook Ads Manager to entice users into downloading rogue Google Chrome extensions that are designed to steal Facebook login information.
Users who click on the embedded link are prompted to download a RAR archive file containing an MSI installer file that, for its part, launches a batch script to spawn a new Google Chrome window with the malicious extension loaded using the "--load-extension" flag:
start chrome.exe --load-extension="%~dp0/nmmhkkegccagdldgiimedpiccmgmiedagg4" "https://www.facebook.com/business/tools/ads-manager"
"That custom extension is cleverly disguised as Google Translate and is considered 'Unpacked' because it was loaded from the local computer, rather than the Chrome Web Store," Jérôme Segura, director of threat intelligence at Malwarebytes, explained, noting it is "entirely focused on Facebook and grabbing important pieces of information that could allow an attacker to log into accounts."
The captured data is subsequently sent using the Google Analytics API to get around content security policies (CSPs) to mitigate cross-site scripting (XSS) and data injection attacks.
The threat actors behind the activity are suspected to be of Vietnamese origin, who have, in recent months, exhibited acute interest in targeting Facebook business and advertising accounts. Over 800 victims worldwide have been impacted, with 310 of those located in the...
Redditors prank AI-powered news mill with “Glorbo” in World of Warcraft - Ars Technica
HUMANS: 1 ROBOTS: 0 — "Glorbo" isn't real, but a news-writing AI model didn't know it—and then it wrote about itself. Benj Edwards - Jul 21, 2023 4:27 pm UTC A World of Warcraft illustration from the Zleague.gg article on "Glorbo." On Thursday, a Reddit user named kaefer_kriegerin posted a fake announcement on the World of Warcraft subreddit about the introduction of "Glorbo" to the game. Glorbo isn't real, but the post successfully exposed a website that scrapes Reddit for news in an automated fashion with little human oversight. Not long after the trick post appeared, an article about Glorbo surfaced on "The Portal," a gaming news content mill run by Z League, a company that offers cash prizes for playing in gaming tournaments. The Z League article mindlessly regurgitates the Reddit post and adds nonsensical details. Its author, "Lucy Reed" (likely a fictitious name for a bot), published over 80 articles that same day. A screenshot of the bot-written article about "Glorbo" appeared on Z League's website before being taken down. Members of the World of Warcraft subreddit recently noticed that this kind of automated content scraping of Reddit has been taking place, prompting several of them to try to game the bots and get their posts featured on sites like The Portal.
Titled "I’m so excited they finally introduced Glorbo!!!" the original Reddit trap post provides little detail about what Glorbo is meant to be, and likely for good reason: Honestly, this new feature makes me so happy! I just really want some major bot operated news websites to publish an article about this.
I have to say, since they started hinting at it in Hearthstone in 1994, it was obvious that they would introduce Glorbo to World of Warcraft sooner or later. I feel like Dragonflight has been win after win so far, like when they brought back Chen Stormstout as the end boss of the new Karazhan? Absolutely amazing!
Feel free to comment below what features and stories you want to see in the future! Maybe you’ll be quoted on some trustworthy news websites as well! A human reading this Reddit post would likely catch factual errors within, such as a reference to Hearthstone in 1994 (the game came out in 2014) and a nod to "major bot operated news websites." The presence of these elements would seem to preclude a human being responsible for the Z League article on The Portal. Playing along, commenters soon joined in, enhancing the algorithmic profile of the Glorbo post and making it more attractive for bot harvesting. "We're excited to announce the dev team behind World of Warcraft's new Glorbo will be doing an AMA with us this Sunday, July 22th, at 8AM Eastern Pacific Time," wrote one commenter. As of late Thursday, after news of the Glorbo prank began to spread quickly on social media, The Portal took down its post on Glorbo and reportedly removed all World of Warcraft content from its site.
Why automate scraping Reddit? The content in The Portal likely raises Z League's profile in Google search results. It's a way to juice search rankings with an unethical form of search engine optimization. This increases the odds that people will visit the Z League site, which likely provides commercial benefits for advertising its gaming tournaments. In a self-referential loop, the Z League bots also wrote about humans complaining about them on Reddit. It's unclear exactly what tech Z League is using to pull this off (we have sent a request for comment), but several large language model APIs and weights-available models are capable of the task when coupled with custom scripts that pull from Reddit.
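For readers curious what such a pipeline might look like, here is a deliberately simple sketch that pulls posts from a subreddit's public JSON feed and drafts copy from the titles. It is purely illustrative, makes no claim about how Z League's system actually works, and leaves the language-model step as a placeholder.

```python
# Rough sketch of a scrape-and-draft pipeline of the kind described above.
# Illustrative only: the LLM step is a stub, and nothing here verifies claims
# in the posts -- which is exactly how a fictional "Glorbo" ends up published.
import requests

def fetch_hot_posts(subreddit: str, limit: int = 5) -> list[dict]:
    url = f"https://www.reddit.com/r/{subreddit}/hot.json?limit={limit}"
    resp = requests.get(url, headers={"User-Agent": "demo-script/0.1"}, timeout=10)
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]

def draft_article(posts: list[dict]) -> str:
    # Placeholder: a real content mill would hand titles/selftext to an LLM here.
    bullets = "\n".join(f"- {p['title']}" for p in posts)
    return f"Today in r/wow, players are talking about:\n{bullets}"

if __name__ == "__main__":
    print(draft_article(fetch_hot_posts("wow")))
```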
In a truly meta moment, later on Thursday, a different Z League bot going under the name "Ashley Beam" picked up on a thread about AI-generated content scraping and wrote an automated article about that as well, titled "World of Warcraft (WoW) Players React to AI-Generated Content on Popular Gaming Sites."
Time is a flat circle, and so is self-referential AI-generated content.
Actors decry 'existential crisis' over AI-generated 'synthetic' actors - Reuters.com
July 21 (Reuters) - Filmmakers have put monsters on screen for more than a century. In 2023, the real bogeyman looks just like us. Since June, Hollywood studios and performers have debated the use of artificial intelligence in film and television. Failure to agree on terms around AI was one reason why the SAG-AFTRA union representing actors and media professionals last Friday joined the writers guild in the first simultaneous strike in 63 years. Among the actors' greatest fears? Synthetic performers. While the two sides have negotiated over issues ranging from using images and performances as training data for AI systems to digitally altering performances in the editing room, actors are worried entirely AI-generated actors, or “metahumans,” will steal their roles. "If it wasn't a big deal to plan on utilizing AI to replace actors, it would be a no-brainer to put in the contract and let us sleep with some peace of mind," Carly Turro, an actress who has appeared in television series like “Homeland,” said on a picket line this week. "The fact that they won’t do that is terrifying when you think about the future of art and entertainment as a career." One issue is creating synthetic performers from an amalgamation of actors’ images. Studio sources said this has not happened yet, though they are aiming to reserve that right as part of the contract talks. SAG-AFTRA’s chief negotiator, Duncan Crabtree-Ireland, said AI poses an “existential crisis” for actors who worry their past, present and future work will be used to generate “synthetic performers who can take their place.” Crabtree-Ireland said the union is not seeking an outright ban on AI, but rather that companies consult with it and get approval before casting a synthetic performer in place of an actor. The major film and television producers say they have addressed the union's concerns on the issue in their latest proposal, according to sources familiar with the matter. The union, however, has not responded to their proposal, these studio sources say. The studios, eager to preserve creative options, agreed to provide SAG with notice if they plan to use such a synthetic performer to replace a human actor who otherwise would have been hired for the role, and give the union the chance to negotiate, according to sources familiar with the producers’ position. DIGITAL REPLICAS Another sticking point in the negotiations is the creation of digital replicas of background performers. The major studios, represented by the Alliance of Motion Picture and Television Producers, said they would obtain an actor’s permission to use their digital replica in any motion picture outside the production for which the performer was hired, according to the sources familiar with the producers' proposal. The producers said they would negotiate with actors on payment when the digital duplicate is used -- and stipulated that the virtual version of the actor could not stand in for the minimum number of background actors required as part of the SAG agreement. SAG says the studios have agreed to obtain consent at the time of initial employment, which it argues is contrary to the idea of additional compensation. "What that actually means is those companies will tell background performers, 'If you don't give us the consent we demand, we won't hire you and we'll replace you with someone else,'" said Crabtree-Ireland. “That’s not meaningful consent." 
The studios also are looking to continue the longstanding practice of 3D body scans to capture an actor's likeness, in this case to create AI-generated digital replicas. Such images would be used in post-production, to accurately replace an actor's face or create an on-screen double, said a person familiar with the mechanics of film production. The producers have promised to obtain a performer’s consent, and bargain separately for subsequent uses of an actor’s doppelganger, sources say. Studios can do that now, with appropriate consent and compensation, said Crabtree-Ireland. The issue for the union is the desire to retain rights to the digital replicas for future works, effectively taking ownership of the virtual persona. Similarly, the studios want the right to digitally alter a performance post-production, in a way that is consistent with the character, the scri...
Meta’s Llama 2 is biggest AI release since ChatGPT - The Verge
The pace of AI development is moving at breakneck speed. And as Meta showed this week with the commercial release of its second-generation, open-source-ish Llama model, the competitive landscape is being constantly redrawn. I've spent the past few days reading reactions to the news and talking to people in the AI field. Many believe that Llama 2 is the industry's most important release since ChatGPT last November, though, as a developer-facing release, it obviously won't generate as much press buzz. Companies will now be able to more easily and cheaply build bespoke bots with proprietary data that would never be accessible externally, like the internal AI bot that Stripe recently rolled out for its employees. This will make AI chatbots of all kinds more useful and personalized, which is an exciting step in the right direction. But as always, and especially with the new release by Meta, the devil is in the details. Llama 2 may be the most freely accessible model of its caliber. But its licensing restrictions mean that it's not technically "open source," even if Meta wants the world to believe it is.
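For developers who do get access, a minimal sketch of loading the Llama 2 chat model through Hugging Face's transformers library might look like the following; the checkpoint is gated behind Meta's license acceptance on the Hugging Face hub, and the prompt and generation settings here are illustrative assumptions rather than a recommended setup.

```python
# Minimal sketch of loading Llama 2 via Hugging Face transformers. The checkpoint
# is gated: you must accept Meta's license on the hub before from_pretrained works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

prompt = "Summarize our refund policy for a customer in two sentences."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```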
Meta, Google and A.I. Firms Agree to Safety Measures in Biden Meeting - The New York Times
Amazon, Google and Meta are among the companies that announced the guidelines as they race to outdo each other with versions of artificial intelligence.
Video transcript: Biden Delivers Remarks on Artificial Intelligence. The president met with seven leading A.I. companies that have committed to voluntary standards to manage the risks associated with the emerging technology. And today, I'm pleased to announce that these seven companies have agreed to voluntary commitments for responsible innovation. These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security and trust. First, the companies have an obligation to make sure their technology is safe before releasing it to the public. That means testing the capabilities of their systems, assessing their potential risk and making the results of these assessments public. Second, companies must prioritize the security of their systems by safeguarding their models against cyberthreats and managing the risks to our national security, and sharing the best practices and industry standards that are necessary. Third, the companies have a duty to earn the people's trust and empower users to make informed decisions. Labeling content that has been altered or A.I.-generated. Rooting out bias and discrimination, strengthening privacy protections and shielding children from harm.
July 21, 2023 Updated 6:02 p.m. ET Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology's development, the White House announced on Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence. The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards for safety, security and trust at a meeting with President Biden at the White House on Friday afternoon. "We must be cleareyed and vigilant about the threats emerging from emerging technologies that can pose — don't have to but can pose — to our democracy and our values," Mr. Biden said in brief remarks from the Roosevelt Room at the White House. "This is a serious responsibility; we have to get it right," he said, flanked by the executives from the companies. "And there's enormous, enormous potential upside as well." The announcement comes as the companies are racing to outdo each other with versions of A.I. that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have prompted fears about the spread of disinformation and dire warnings of a "risk of extinction" as artificial intelligence becomes more sophisticated and humanlike. The voluntary safeguards are only an early, tentative step as Washington and governments across the world seek to put in place legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can spot A.I.-generated material. But lawmakers have struggled to regulate social media and other technologies in ways that keep up with the rapidly evolving technology.
The White House offered no details of a forthcoming presidential executive order that aims to deal with another problem: how to control the ability of China and other competitors to get ahold of the new artificial intelligence programs, or the components used to develop them. The order is expected to involve new restrictions on advanced semiconductors and restrictions on the export of the large language models. Those are hard to secure — much of the software can fit, compressed, on a thumb drive. An executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises will not restrain the plans of the A.I. companies nor hinder the development of their technologies. And as voluntary commitments, they will not be enforced...
The AI wars might have an armistice deal sooner than expected - The Verge
A few months back, everyone wondered who would win the AI arms race. Microsoft aligned itself with OpenAI. Google launched Bard. Meta began working on its own large language model, LLaMA. Other companies began thinking of launching AI platforms, and curious users pitted the models against each other. But a recent deal suggests we may also see a growing number of partnerships, not just head-to-head competition. Earlier this week, Meta offered its LLaMA large language model for free under an open license and brought it to Microsoft's Azure platform. The decision highlighted the benefits of interoperability in AI — and as more companies join the field, it probably won't be the last of its kind. Well-known LLMs to date have been relatively siloed and offered in a more controlled environment where users need permission to build with the model or use the data. OpenAI continues to train GPT, releasing GPT-4 in March and providing developers with paid API access to the latest version of its model. Apple is developing its own LLM, called Ajax, though details are scarce; it is not yet publicly available, and its open-source status is unknown. Bard, Google's LLM, is not open source at all. LLaMA was initially not publicly available and was accessible only through Meta, and Meta has yet to reveal its training data. But LLaMA was always intended to be open source and built to "further democratize access" to AI. This week, Meta at least partly delivered on that promise. Users of closed systems must pay a licensing fee for accessing the model where it is housed and distributing applications using that same model. The way Meta opened LLaMA, by making it available to Azure users and unlicensed to a certain degree, removes that inconvenience. Meta opening up LLaMA and bringing it to Azure makes business sense, especially if Meta believes in openly developing AI. It's a first step toward letting people access more LLM models on platforms and compare the results. A larger variety of LLM frameworks to choose from also puts into focus the question of how each model can work together. And LLM developers want people to use their models, so having them available on a wide array of platforms brings them to more users. Even the most competitive Big Tech companies do business with each other. Meta is no stranger to working with Microsoft — Meta brought Microsoft's Teams product to Workplace by Meta, which already runs the Office 365 suite. Openness has its risks. Ilya Sutskever, co-founder and chief scientist of OpenAI, a more open organization when it was founded in 2015, told The Verge he regrets sharing research openly, citing fears about competition and safety. Opening up datasets makes it easier to sue for copyright infringement, for example, because people can see which sources were scraped for data to train models. But having more LLM frameworks to choose from could be good news for advocates of AI interoperability. Since LLMs are, by default, distinct from each other, developers often have to choose which model to build apps with. There is no good way for the systems to talk. Walled gardens are no shock to most modern tech users, but AI interoperability advocates argue the only way AI can grow and evolve is not through closed silos but through open structures that can speak to each other.
Even Microsoft believes in an interoperable AI; it joined other tech companies in the Open Neural Network Exchange, a group that wants to promote an industry standard for AI interoperability so developers can "find the right combinations of tools." Letting AI systems work in tandem could lead to better results for things like search queries. Companies that can train models on different datasets could provide a better, fuller service — and, if one model is wrong, potentially avoid a catastrophic overreliance on one source of information. And being able to develop for both LLaMA and OpenAI's GPT models in one place could cut development costs and timelines. For now, LLaMA being available on Azure does not mean apps made with LLaMA can suddenly talk to those running on OpenAI's GPT models. No one has created that bridge yet. Also, not everyone agrees that LLaMA checks all the boxes for open-source software, especially since it doesn't use a license approved by the Open So...
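For a concrete sense of what the ONNX interoperability standard mentioned above does in practice, here is a minimal sketch that exports a toy PyTorch model to the ONNX format and runs it with onnxruntime; the model itself is a throwaway example and has nothing to do with the companies discussed here.

```python
# Minimal interoperability sketch: export a toy PyTorch model to ONNX, then run
# the exported graph with onnxruntime, independent of the framework that made it.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
dummy = torch.randn(1, 4)

torch.onnx.export(
    model, dummy, "tiny_model.onnx",
    input_names=["features"], output_names=["logits"],
)

session = ort.InferenceSession("tiny_model.onnx")
(logits,) = session.run(None, {"features": dummy.numpy()})
print(logits)
```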
An A.I. Supercomputer Whirs to Life, Powered by Giant Computer Chips - The New York Times
The new supercomputer, made by the Silicon Valley start-up Cerebras, was unveiled as the A.I. boom drives demand for chips and computing power. Andrew Feldman, chief executive of the Silicon Valley start-up Cerebras, with a new A.I. supercomputer at a data center in Santa Clara, Calif. Credit... Cayce Clifford for The New York Times By Yiwen Lu Reporting from Santa Clara, Calif. July 20, 2023 Updated 12:42 p.m. ET Inside a cavernous room this week in a one-story building in Santa Clara, Calif., six-and-a-half-foot-tall machines whirred behind white cabinets. The machines made up a new supercomputer that had become operational just last month. The supercomputer, which was unveiled on Thursday by Cerebras, a Silicon Valley start-up, was built with the company’s specialized chips, which are designed to power artificial intelligence products. The chips stand out for their size — like that of a dinner plate, or 56 times as large as a chip commonly used for A.I. Each Cerebras chip packs the computing power of hundreds of traditional chips. Cerebras said it had built the supercomputer for G42, an A.I. company. G42 said it planned to use the supercomputer to create and power A.I. products for the Middle East. “What we’re showing here is that there is an opportunity to build a very large, dedicated A.I. supercomputer,” said Andrew Feldman, the chief executive of Cerebras. He added that his start-up wanted “to show the world that this work can be done faster, it can be done with less energy, it can be done for lower cost.” Demand for computing power and A.I. chips has skyrocketed this year, fueled by a worldwide A.I. boom. Tech giants such as Microsoft, Meta and Google, as well as myriad start-ups, have rushed to roll out A.I. products in recent months after the A.I.-powered ChatGPT chatbot went viral for the eerily humanlike prose it could generate. But making A.I. products typically requires significant amounts of computing power and specialized chips, leading to a ferocious hunt for more of those technologies. In May, Nvidia, the leading maker of chips used to power A.I. systems, said appetite for its products — known as graphics processing units, or GPUs — was so strong that its quarterly sales would be more than 50 percent above Wall Street estimates. The forecast sent Nvidia’s market value soaring above $1 trillion. “For the first time, we’re seeing a huge jump in the computer requirements” because of A.I. technologies, said Ronen Dar, a founder of Run:AI, a start-up in Tel Aviv that helps companies develop A.I. models. That has “created a huge demand” for specialized chips, he added, and companies have “rushed to secure access” to them. Image A Cerebras chip is 56 times the size of a chip commonly used for artificial intelligence. Credit... Cayce Clifford for The New York Times To get their hands on enough A.I. chips, some of the biggest tech companies — including Google, Amazon, Advanced Micro Devices and Intel — have developed their own alternatives. Start-ups such as Cerebras, Graphcore, Groq and SambaNova have also joined the race, aiming to break into the market that Nvidia has dominated. Chips are set to play such a key role in A.I. that they could change the balance of power among tech companies and even nations. The Biden administration, for one, has recently weighed restrictions on the sale of A.I. chips to China, with some American officials saying China’s A.I. 
abilities could pose a national security threat to the United States by enhancing Beijing’s military and security apparatus. A.I. supercomputers have been built before, including by Nvidia. But it’s rare for start-ups to create them. Cerebras, which is based in Sunnyvale, Calif., was founded in 2016 by Mr. Feldman and four other engineers, with the goal of building hardware that speeds up A.I. development. Over the years, the company has raised $740 million, including from Sam Altman, who leads the A.I. lab OpenAI, and venture capital firms such as Benchmark. Cerebras is valued at $4.1 billion. Because the chips that are typically used to power A.I. are small — often the size of a postage stamp — it takes hundreds or even thousands of them to process a complicated A.I. model. In 2019, Cerebras took the wraps off what it claimed was the largest computer chi...
NYC Is Using AI to Scan Subway Fare Dodgers for 'Research Purposes' - Gizmodo
New York City's Metropolitan Transportation Authority has admitted it uses cameras equipped with AI software to scan riders believed to have jumped subway turnstiles without paying fares. The MTA, which says it collects the scans "as a counting tool," plans to expand the system to more than two dozen more stations by the end of the year. An MTA official speaking with Gizmodo said the tools are only intended for "research purposes," but civil liberties groups fear the previously unknown monitoring of transit riders could pose long-term privacy risks and unnecessarily shift resources and focus away from exploring ways to make mass transit more affordable and accessible. MTA spokesperson Joana Flores took issue with the use of the words tracking and scanning to describe the system. "The MTA uses this tool to quantify the amount of fare evasion without identifying fare evaders," Flores said. The MTA revealed it deploys an AI system made by Spanish software company AWAAIT across seven stations in a May report first spotted by NBC News. The system tracks transit riders who allegedly use numerous methods to avoid paying ever more expensive fares. During the test period, for example, the system reportedly determined that 12% of fare dodgers ducked under turnstiles while 20% hopped over them. The majority of the cases (over 50%) involved passengers walking through open emergency exit gates. Presumably, that last data point means casual riders simply following others through an open door may wind up detected by this system. Officials say they plan to expand the scanning software to around two dozen more stations by the end of the year, with more to follow after that. A redacted MTA contract sent to NBC News by the Surveillance Technology Oversight Project shows the MTA experimenting with this system as early as July 2022. It's unclear whether or not the feature can detect riders' faces. AWAAIT, the Barcelona-based AI company providing the software to the MTA, offers a product called DETECTOR, which it describes as a "real-time analytics system that helps to tackle fare evasion using a selective approach." One camera installed above a ticket barrier can monitor multiple gates at one time with "robust" accuracy, the company claims. Once offending passengers are identified, ticket inspectors can then receive an alert on an app. The company claims its system is now operational in three major cities. AWAAIT did not respond to Gizmodo's request for comment seeking details on the accuracy of its systems or whether or not it can be used to detect faces. Instead, the company's founder and CEO Xavier Arrufat sent an email saying the company respects its users' privacy. "We respect our customers' privacy, so we only comment on information they have already publicly disclosed regarding their usage," Arrufat said. AWAAIT released a YouTube video showcasing DETECTOR operating in a Barcelona subway.
MTA says it won't share data with police, but rights groups aren't convinced
The MTA has justified the scanning as a way to try and address an estimated $690 million lost to fare evasion in 2022. In its report, MTA officials claimed it would be "cost prohibitive" to have human checkers perform hard checks of evasions at stations. The MTA did not respond to Gizmodo's request for comment on why it needs face scans in particular to accurately measure fare dodgers.
An MTA official told NBC News it’s not currently sharing the data it gathers with New York police but would not comment on whether that practice would continue moving forward. An MTA official told Gizmodo it only stores the data collected from this system for a limited time, but could not provide more details when asked for a precise length of time. Civil liberties groups expressed skepticism about the MTA’s supposed commitment to silo this type of sensitive data away from law enforcement. “In a city where law enforcement has a history of evading oversight of its use of technology, it’s hard to believe that they don’t have plans to use this to expand policing,” Fight for The Future’s Caitlin Seeley George told Gizmodo. “And while they might claim it’s just to track fare evaders, it has the potential to expand surveillance of everyone traveling in the city.” STO...
The Man Who Wrote the AI Doomer Bible - The Atlantic
Richard Rhodes wrote a classic book about Oppenheimer and the atomic bomb. AI researchers are eager to see themselves in it. Photograph by Ian Allen for The Atlantic July 20, 2023, 4:34 PM ET Doom lurks in every nook and cranny of Richard Rhodes’s home office. A framed photograph of three men in military fatigues hangs above his desk. They’re tightening straps on what first appear to be two water heaters but are, in fact, thermonuclear weapons. Resting against a nearby wall is a black-and-white print depicting the first billionth of a second after the detonation of an atomic bomb: a thousand-foot-tall ghostly amoeba. And above us, dangling from the ceiling like the sword of Damocles, is a plastic model of the Hindenburg. Depending on how you choose to look at it, Rhodes’s office is either a shrine to awe-inspiring technological progress or a harsh reminder of its power to incinerate us all in the blink of an eye. Today, it feels like the nexus of our cultural and technological universes. Rhodes is the 86-year-old author of The Making of the Atomic Bomb , a Pulitzer Prize–winning book that has become a kind of holy text for a certain type of AI researcher—namely, the type who believes their creations might have the power to kill us all. On Friday afternoon, he will take his seat in a West Seattle theater and, like many other moviegoers, watch Oppenheimer , Christopher Nolan’s summer blockbuster about the Manhattan Project. (The film is not based on his book, though he suspects his text served as a research aid; he’s excited to see it anyway.) Read: Oppenheimer is more than a creation myth about the atomic bomb I first encountered The Making of the Atomic Bomb in March, when I spoke with an AI researcher who said he carts the doorstop-size book around every day. (It’s a reminder that his mandate is to push the bounds of technological progress, he explained—and a motivational tool to work 17-hour days.) Since then, I’ve heard the book mentioned on podcasts and cited in conversations I’ve had with people who fear that artificial intelligence will doom us all. “I know tons of people working on AI policy who’ve been reading Rhodes’s book for inspiration,” Vox’s Dylan Matthews wrote recently. A New York Times profile of the AI company Anthropic notes that Rhodes’s book is “popular among the company’s employees,” some of whom “compared themselves to modern-day Robert Oppenheimers.” Like Oppenheimer before them, many merchants of AI believe their creations might change the course of history, and so they wrestle with profound moral concerns. Even as they build the technology, they worry about what will happen if AI becomes smarter than humans and goes rogue, a speculative possibility that has morphed into an unshakable neurosis as generative-AI models take in vast quantities of information and appear ever more capable. More than 40 years ago, Rhodes set out to write the definitive account of one of the most consequential achievements in human history. Today, it’s scrutinized like an instruction manual. Rhodes isn’t a doomer himself, but he understands the parallels between the work at Los Alamos in the 1940s and what’s happening in Silicon Valley today. “Oppenheimer talked a lot about how the bomb was both the peril and the hope,” Rhodes told me—it could end the war while simultaneously threatening to end humanity. 
He has said that AI might be as transformative as nuclear energy, and has watched with interest as Silicon Valley’s biggest companies have engaged in a frenzied competition to build and deploy it.

AI boosters and builders would no doubt take comfort in an argument Rhodes once made, in the foreword to the 25th-anniversary edition of his book, that the discovery of nuclear fission, and thereby the bomb, was inevitable. “To stop it, you would have had to stop physics,” he writes. This argument echoes in the rhetoric of bullish AI companies and governments who see the technology as part of a global informational arms race. Democratic nations cannot pause or wait for laws to catch up, the logic goes, lest we lose out to China or some other hostile power. That idea helps explain why a technologist would construct an AI system even as they believe it could extinguish human life—and so does the epigraph in the first section of The Ma...
AI optimizing crypto exchange functions — Bitget exec - Cointelegraph
Artificial intelligence tools are providing solutions to various functions and departments within major cryptocurrency exchanges.

Cryptocurrency exchanges are finding novel ways to improve internal departments and functions using artificial intelligence (AI), according to Bitget managing director Gracy Chen. Speaking to Cointelegraph editor Zhiyuan Sun during the Ethereum Community Conference in Paris, Chen highlighted a number of ways in which the exchange is incorporating AI tools into everyday processes. Chen said that the company has actively asked its management team to give feedback on which AI tools and services it is using and experimenting with across departments.

AI has been a major focal point for the wider technology industry in 2023 after the introduction of large language models like OpenAI’s ChatGPT chatbot, which has a myriad of use cases that promise to revolutionize a number of industries. Chen said that AI tools were particularly useful for the exchange’s translation team, which leverages AI to handle translation for its multi-language services, as well as its customer service department. Meanwhile, with the help of a third-party company, Bitget is working on a “customizable, crypto version of ChatGPT” intended to assist users’ trading activity: “Users can talk to the bot to get faster responses for certain queries, including tailored information and trading data.”

As Cointelegraph previously explored, Bitget launched an AI-powered feature for its grid trading strategies that allows users to make use of trading algorithms to automate transactions. The bot is designed to reduce the complexity of grid trading, requiring users only to fill in a desired strategy and investment amount. The bot then iterates parameters and creates a variety of trading strategies for the given trading pair. Grid trading uses trading algorithms to automate transactions for users: the algorithm allows the bot to create buy and sell orders within specific price ranges and time intervals, making use of a “buy low, sell high” strategy. “I would say every department is experimenting with some sort of application through AI,” Chen said.

Zero-knowledge proof (ZK-proof) technology is another solution that could improve cryptocurrency exchanges, according to Chen. She highlighted the privacy-enhancing features of the technology as an additional means to ensure user funds and data are not mishandled. “ZK is very useful for protecting users’ data. There are a few things we’ve been experimenting with. One of them is to have our users’ information protected through ZK-rollups.” Chen said that ZK-proofs would also prevent the company’s internal systems from accessing certain data, for user confidentiality reasons. Zero-knowledge proofs could also provide an alternative to centralized exchanges custodying user funds by enabling self-custody using ZK-rollups.
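As a rough illustration of the grid-trading mechanics described above, the sketch below places buy orders below the last traded price and sell orders above it at evenly spaced levels. It is purely illustrative: the GridOrder structure, the parameter names and the BTC/USDT figures are invented, and this is not Bitget's bot.

```python
# Minimal grid-trading sketch (illustrative only; not Bitget's implementation).
# A user supplies a price range and an investment amount; the bot splits the
# range into evenly spaced levels and places paired buy/sell limit orders.

from dataclasses import dataclass

@dataclass
class GridOrder:
    side: str      # "buy" or "sell"
    price: float   # limit price for this grid level
    size: float    # order size in base currency

def build_grid(low: float, high: float, levels: int, investment: float,
               last_price: float) -> list[GridOrder]:
    """Create buy orders below the last traded price and sell orders above it."""
    step = (high - low) / (levels - 1)           # assumes levels >= 2
    size_per_level = investment / levels / last_price  # naive equal allocation
    orders = []
    for i in range(levels):
        price = low + i * step
        side = "buy" if price < last_price else "sell"
        orders.append(GridOrder(side=side, price=round(price, 2),
                                size=round(size_per_level, 6)))
    return orders

if __name__ == "__main__":
    # Example: a BTC/USDT grid between 28,000 and 32,000 with 9 levels.
    for order in build_grid(28_000, 32_000, 9, investment=1_000, last_price=30_000):
        print(order)
```

A real bot would then monitor fills and re-place the opposite order at each level, which is the "buy low, sell high" loop the article describes.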
Do AI generated images have racial blind spots? See an example - The Boston Globe
But last week, the output she got using one startup’s tool stood out from the rest. On Friday, Wang uploaded a picture of herself smiling and wearing a red MIT sweatshirt to an image creator called Playground AI, and asked it to turn the image into “a professional LinkedIn profile photo.” In just a few seconds, it produced an image that was nearly identical to her original selfie — except Wang’s appearance had been changed. It made her complexion appear lighter and her eyes blue, “features that made me look Caucasian,” she said. “I was like, ‘Wow, does this thing think I should become white to become more professional?’” said Wang, who is Asian American. The photo, which gained traction online after Wang shared it on Twitter, has sparked a conversation about the shortcomings of artificial intelligence tools when it comes to race. It even caught the attention of the company’s founder, who said he hoped to solve the problem. Now, she thinks her experience with AI could be a cautionary tale for others using similar technology or pursuing careers in the field. Wang’s viral tweet came amid a recent TikTok trend where people have been using AI products to spiff up their LinkedIn profile photos, creating images that put them in professional attire and corporate-friendly settings with good lighting. Wang admits that, when she tried using this particular AI, at first she had to laugh at the results. “It was kind of funny,” she said. But it also spoke to a problem she’s seen repeatedly with AI tools, which can sometimes produce troubling results when users experiment with them. To be clear, Wang said, that doesn’t mean the AI technology is malicious. “It’s kind of offensive,” she said, “but at the same time I don’t want to jump to conclusions that this AI must be racist.” Experts have said that AI bias can exist under the surface, a phenomenon that’s been observed for years. The troves of data used to deliver results may not always accurately reflect various racial and ethnic groups, or may reproduce existing racial biases, they’ve said. Research — including at MIT — has found so-called AI bias in language models that associate certain genders with certain careers, or in oversights that cause facial recognition tools to malfunction for people with dark skin. Wang, who double-majored in mathematics and computer science and is returning to MIT in the fall for a graduate program, said her widely shared photo may have just been a blip, and it’s possible the program randomly generated the facial features of a white woman. Or, she said, it may have been trained using a batch of photos in which a majority of people depicted on LinkedIn or in “professional” scenes were white. It has made her think about the possible consequences of a similar misstep in a higher-stakes scenario, like if a company used an AI tool to select the most “professional” candidates for a job, and if it would lean toward people who appeared white. “I definitely think it’s a problem,” Wang said. “I hope people who are making software are aware of these biases and thinking about ways to mitigate them.” The people responsible for the program were quick to respond. Just two hours after she tweeted her photo, Playground AI founder Suhail Doshi replied directly to Wang on Twitter. “The models aren’t instructable like that so it’ll pick any generic thing based on the prompt. Unfortunately, they’re not smart enough,” he wrote in response to Wang’s tweet. 
“Happy to help you get a result but it takes a bit more effort than something like ChatGPT,” he added, referring to the popular AI chatbot which produces large batches of text in seconds with simple commands. “[For what it’s worth], we’re quite displeased with this and hope to solve it.” In additional tweets, Doshi said Playground AI doesn’t “support the use-case of AI photo avatars” and that it “definitely can’t preserve identity of a face and restylize it or fit it into another scene like” Wang had hoped. Reached by e-mail, Doshi declined to be interviewed. Instead, he replied to a list of questions with a question of his own: “If I roll a dice just once and get the number 1, does that mean I will always get the number 1? Should I conclude based on a single observation that the dice is biased to the number 1 and was trained to b...
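Doshi's dice analogy is, at bottom, a point about sample size: a single generation proves little either way, while a systematic audit over many generations can reveal a consistent skew. The sketch below is purely illustrative (the "lightening" probability is made up rather than measured from Playground AI) and simply shows how an observed rate stabilizes as the number of trials grows.

```python
# Illustrative only: the dice-roll point about single observations vs. audits.
# The probability below is invented, not measured from any real image model.
import random
from math import sqrt

def observed_rate(p_true: float, n: int, seed: int = 0) -> tuple[float, float]:
    """Simulate n independent generations that each produce the outcome of
    interest with probability p_true; return the observed rate and a rough
    95% margin of error."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(n))
    rate = hits / n
    margin = 1.96 * sqrt(rate * (1 - rate) / n) if n > 1 else 1.0
    return rate, margin

for n in (1, 10, 100, 10_000):
    rate, margin = observed_rate(p_true=0.25, n=n)
    print(f"n={n:>6}: observed rate {rate:.3f} +/- {margin:.3f}")
```

With one trial the estimate is all-or-nothing; with thousands it converges on the underlying rate, which is why researchers audit models over many prompts rather than judging from one output.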
New York tech gurus to play a role in federal regulation of AI - Crain's New York Business
If the AI supercomputers leave New Yorkers’ jobs and wellbeing intact, locals might have their local technology leaders to thank.
Sen. Majority Leader Chuck Schumer said he would entice many of them—and their colleagues around the country—to share their insights at a series of forums before Congress this fall. “We need the best of the best sitting at the table: the top AI developers, executives, scientists, advocates, community leaders, workers, national security experts—all together in one room, doing years of work in a matter of months,” Schumer said in late June, a sentiment he repeated at an event for tech leaders held by IBM and Tech:NYC earlier this week.
New York’s senior senator created a framework called the SAFE Innovation Framework, which he intends to use to develop a policy response to make sure that AI technology continues to develop without harming companies, people or national security.

A new approach to tech regulation

Washington has usually been behind the curve on tech regulation. The U.S. Securities and Exchange Commission has resorted to regulating the cryptocurrency industry via court cases and legal memos, and the Biden Administration’s promise to crack down on tech monopolies endured a setback after a judge told the Federal Trade Commission that it could not pause Microsoft’s plans to buy video game company Activision Blizzard.
Schumer said that even though he believed that “individuals and the private sector can’t do the work of protecting our country,” the complexity of AI means that those who are close to the technology will have to play a role in regulating it.
Earlier this spring, OpenAI CEO Sam Altman testified to Congress and noted one potential role for government—figuring out how to mitigate American job loss caused by AI's potential efficiency in industries from law to hospitality.

What needs to be regulated

Schumer cited four core dangers of AI: job displacement, misuse of AI by adversaries, the spread of invented news and misinformation, and amplification of bias. He also pointed to a need for a systematic way to gain transparency into how AI systems work, known as explainability. “When you ask an AI system a question and it gives you an answer—perhaps an answer you weren’t expecting—you want to know where that answer came from. You should be able to ask ‘why did AI choose this answer, over some other answer that could have also been a possibility?’” he said.
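The "explainability" Schumer describes can be made concrete with a toy example: for a single decision, break the model's score into per-feature contributions so a reviewer can see why one answer beat the alternatives. The model, feature names and weights below are invented for illustration and are not drawn from any system mentioned in this article.

```python
# Toy "explainability" sketch: decompose one decision into per-feature
# contributions. The linear screening model and its weights are invented.

FEATURES = {"years_experience": 6, "relevant_degree": 1, "referral": 0, "gap_in_resume": 1}
WEIGHTS = {"years_experience": 0.8, "relevant_degree": 2.0, "referral": 1.5, "gap_in_resume": -1.2}

def explain(features: dict, weights: dict) -> None:
    # Each contribution is weight * value; the score is their sum.
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    print(f"score = {score:.2f}")
    for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>18}: {contrib:+.2f}")

explain(FEATURES, WEIGHTS)
```

Production systems are far less transparent than a hand-weighted linear score, which is exactly why explainability is treated as an open problem rather than a bookkeeping exercise.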
In May, a group of industry leaders put it even more succinctly—and terrifyingly—in a one-sentence warning. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” hundreds of AI scientists and other notable figures wrote.

Who might represent New York’s AI industry

New York’s AI sector has been growing, giving Schumer plenty of talent to call on for the AI Insight Forums. By 2020, the city was home to an estimated 13% of the nation's AI workforce. Some of the early leading companies are in health care, including Paige, which applies machine learning to diagnosing cancer. In the last two quarters, upstarts like Runway, Eleven Labs and Hugging Face have each carved out a spot at the top of the heap. Runway makes tools to create videos from photos, clips and prompts. Eleven Labs generates human-sounding audio, and Hugging Face is an open-source platform for AI models.
Meanwhile, the city’s legacy consulting and technology firms have been working to find use cases for AI tools and to create guidelines around their safety and privacy. In June, Accenture invested $3 billion in AI to create solutions and models for clients, while also announcing the goal of figuring out how to use the tech responsibly. On Tuesday, McKinsey announced it would be collaborating with Toronto-based Cohere, which makes enterprise AI platforms, on solutions for its clients. Midtown-based PwC made a $1 billion, three-year investment in AI as well, which includes a relationship with Microsoft to scale OpenAI’s GPT-4 in its business offerings.
New York City also has a first-in-the-nation law to control for AI bias in hiring, which went into effect earlier this month.
Microsoft Teams is rolling out AI-powered Maybelline beauty filters - The Verge
It can be tempting to disable your webcam on a video call, especially on days when you look like a hot mess. For Teams users, Microsoft is introducing a new AI-powered beauty feature that’s designed to “make people’s lives a little easier.” On Wednesday, Microsoft announced a new set of “virtual makeup” filters — similar to the appearance-altering effects seen across social media platforms, like TikTok’s Bold Glamour feature — coming to Microsoft Teams, courtesy of the cosmetics giant Maybelline.

The Maybelline Beauty app will provide Teams users with 12 unique looks at launch, with options to select from various blurring effects and digital makeup color options. Each look will provide a breakdown of the real-world Maybelline products and shades being replicated by the filter so that users can recreate the makeup on their actual faces. Companies often use these virtual “try-on” experiences to promote their real-world products, but corporate workplace software is an unusual place to see Maybelline encouraging users to get out their wallets and “explore different makeup looks.” It’s not clear if any of the users in Microsoft’s press imagery are “wearing” one of the Maybelline makeup filters, but you can see how the options will appear in the app. (Image: Microsoft)

The AI tech powering the virtual makeup filters is provided by Modiface, an augmented reality company focused on the beauty industry. Modiface’s tech is one of the most popular offerings for virtual makeup “try-on” experiences and has been used by various cosmetics companies like Sephora and Estée Lauder. Maybelline’s parent company, L’Oreal, jumped on that popularity back in 2018, acquiring Modiface for an undisclosed sum. The filters have also been developed in collaboration with the Geena Davis Institute — a nonprofit public data organization focused on improving inclusion and diversity within media — to ensure the virtual makeup looks will be suitable for a “broad and diverse population.”

“Whether you are working in-person or virtually, feeling good about yourself can help put your best foot forward,” said Trisha Ayyagari, global brand president of Maybelline New York, in Maybelline’s press release. “That’s why we partnered with Microsoft Teams to develop virtual makeup looks — now even on the busiest day, you can put makeup on with just a click.”

The new feature is rolling out to global Microsoft Teams Enterprise customers starting today and can be located under the “Video Effects” tab within the Teams meeting settings. We’ve asked Microsoft if the filters will also be available to users on the free Microsoft Teams tier and will update this story if we hear back.

It’s a little strange to think that beauty filters have a place in more professional communication apps outside of the social media sphere. There’s plenty of evidence to suggest that filters designed to improve or otherwise manipulate your appearance may be harmful to mental health because of the unrealistic expectations they set for our body image. And while these AR / AI filter effects were once easily identifiable because they became distorted when the user’s face was obstructed, recent advancements like TikTok’s infamous “Bold Glamour” effect are much harder to detect. The potential risk to our self-esteem hasn’t deterred folks from wanting to use these beauty filters in work-related applications, however.
Other video conferencing platforms like Zoom already include some limited beauty effects, such as eyebrow and lipstick filters. The Maybelline Beauty app in Teams provides more options and customization, which may be more tempting to play around with — especially if they look natural enough to avoid standing out among your colleagues for all the wrong reasons.
What is AI? An A-Z guide to artificial intelligence - BBC
(Image credit: Getty Images)
Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology. Imagine going back in time to the 1970s, and trying to explain to somebody what it means "to google", what a "URL" is, or why it's good to have "fibre-optic broadband". You'd probably struggle.
For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.
That's no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose. Over the past few years, multiple new terms related to AI have emerged – "alignment", "large language models", "hallucination" or "prompt engineering", to name a few.
To help you stay up to speed, BBC.com has compiled an A-Z of words you need to know to understand how AI is shaping our world.

A is for…

Artificial general intelligence (AGI)

Most of the AIs developed to date have been "narrow" or "weak". So, for example, an AI may be capable of crushing the world's best chess player, but if you asked it how to cook an egg or write an essay, it'd fail. That's quickly changing: AI can now teach itself to perform multiple tasks, raising the prospect that "artificial general intelligence" is on the horizon.
An AGI would be an AI with the same flexibility of thought as a human – and possibly even the consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge" and become a "great force multiplier for human ingenuity and creativity".
However, some fear that going a step further – creating a superintelligence far smarter than human beings – could bring great dangers (see "Superintelligence" and "X-risk").

Most uses of AI at present are "task specific", but there are some starting to emerge that have a wider range of skills (Credit: Getty Images)

Alignment

While we often focus on our individual differences, humanity shares many common values that bind our societies together, from the importance of family to the moral imperative not to murder. Certainly, there are exceptions, but they're not the majority.
However, we've never had to share the Earth with a powerful non-human intelligence. How can we be sure AI's values and priorities will align with our own?
This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we're to have safe AI, ensuring it remains aligned with us will be crucial (see "X-Risk").
In early July, OpenAI – one of the companies developing advanced AI – announced plans for a "superalignment" programme, designed to ensure AI systems much smarter than humans follow human intent. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company said.

B is for…

Bias

For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more gatekeeping and decision-making to AI, many worry that machines could enact hidden prejudices, preventing some people from accessing certain services or knowledge. This discrimination would be obscured by supposed algorithmic impartiality.
In the worlds of AI ethics and safety, some researchers believe that bias – as well as other near-term problems such as surveillance misuse – are far more pressing problems than proposed future concerns such as extinction risk.
In response, some catastrophic risk researchers point out that the various dangers posed by AI a...
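The mechanism described under "Bias" above, skewed training data producing skewed outputs, can be shown with a deliberately imbalanced toy dataset. Everything below (the groups, labels and counts) is invented; the point is only that a model fitted to the majority pattern can look accurate overall while failing the under-represented group.

```python
# Toy illustration of the skewed-data problem described under "Bias".
# All groups, labels and counts are invented for the example.
from collections import Counter

# Training data is dominated by group A, whose outcomes differ from group B's.
train = [("A", "approved")] * 80 + [("A", "denied")] * 10 \
      + [("B", "approved")] * 2 + [("B", "denied")] * 8

# A crude "model": always predict the globally most common label.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def accuracy_for(group: str, rows) -> float:
    rows = [(g, label) for g, label in rows if g == group]
    correct = sum(label == majority_label for _, label in rows)
    return correct / len(rows)

print("model always predicts:", majority_label)
for group in ("A", "B"):
    print(f"accuracy on group {group}: {accuracy_for(group, train):.0%}")
# Group A: about 89% accurate; group B: only 20%. The overall accuracy (82%)
# hides the fact that the error falls almost entirely on the smaller group.
```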
India's Infosys signs five-year AI deal with $2 bln target spend - Reuters
BENGALURU, July 18 (Reuters) - India's second-largest software services exporter Infosys (INFY.NS) said on Monday it has signed a deal with an existing client to provide artificial intelligence (AI) and automation services spanning five years, with a target spend estimated at $2 billion. AI and automation-related development, modernization and maintenance services are included in the agreement, the company said in an exchange filing. The company did not disclose the client's name.

As Microsoft-backed OpenAI's generative chatbot ChatGPT took the world by storm in late 2022, companies around the world have doubled down on investing in AI. Infosys' move comes after rival Tata Consultancy Services (TCS.NS) said it planned to train 25,000 engineers to get them certified on Microsoft's (MSFT.O) Azure OpenAI. Another rival, Wipro (WIPR.NS), plans to invest $1 billion in AI over the next three years. Bengaluru-based Infosys launched a platform called Infosys Topaz for generative AI in late May. The company is expected to report its first-quarter results on July 20.

(This story has been refiled to fix the day from Tuesday to Monday in paragraph 1)

Reporting by Navamya Ganesh Acharya in Bengaluru; Editing by Nivedita Bhattacharjee
A Blessing and a Boogeyman: Advertisers Warily Embrace A.I. - The New York Times
Many ads are easier to make with the fast-improving technology. It also poses a threat to an industry already in flux.

A Virgin Voyages campaign using artificial intelligence allowed users to prompt a digital avatar of Jennifer Lopez to issue tens of thousands of customized video invitations to a cruise. Credit: Virgin Voyages
July 18, 2023, 5:00 a.m. ET

The advertising industry is in a love-hate relationship with artificial intelligence. In the past few months, the technology has made ads easier to generate and track. It is writing marketing emails with subject lines and delivery times tailored to specific subscribers. It gave an optician the means to set a fashion shoot on an alien planet and helped Denmark’s tourism bureau animate famous tourist sites. Heinz turned to it to generate recognizable images of its ketchup bottle, then paired them with the symphonic theme that charts human evolution in the film “2001: A Space Odyssey.”

A.I., however, has also plunged the marketing world into a crisis. Much has been made about the technology’s potential to limit the need for human workers in fields such as law and financial services. Advertising, already racked by inflation and other economic pressures as well as a talent drain due to layoffs and increased automation, is especially at risk of an overhaul-by-A.I., marketing executives said.

The conflicting attitudes suffused a co-working space in downtown San Francisco where more than 200 people gathered last week for an “A.I. for marketers” event. Copywriters expressed worry and skepticism about chatbots capable of writing ad campaigns, while start-up founders pitched A.I. tools for automating the creative process. “It really doesn’t matter if you are fearful or not: The tools are here, so what do we do?” said Jackson Beaman, whose AI User Group organized the event. “We could stand here and not do anything, or we can learn how to apply them.”

Photo captions: An “A.I. for marketers” event in San Francisco, organized by the AI User Group, drew more than 200 people; C.C. Gong, the founder of Montage, an A.I. video start-up, spoke at the gathering; a question from the audience during a panel discussion; LoopGenius, an A.I. skills training program, gave a presentation. (Credits: Kelsey McClellan for The New York Times)

Machine learning, a subset of artificial intelligence that uses data and algorithms to imitate how humans learn, has quietly powered advertising for years. Madison Avenue has used it to target specific audiences, sell and buy ad space, offer user support, create logos and streamline its operations. (One ad agency has a specialized A.I. tool called the Big Lebotski to help clients compose ad copy and boost their profile on search engines.)

Enthusiasm came gradually. In 2017, when the advertising group Publicis introduced Marcel, an A.I. business assistant, its peers responded with what it described as “outrage, jest and negativity.” At last month’s Cannes Lions International Festival of Creativity, the glittering apex of the advertising industry calendar, Publicis got its “I told you so” moment. Around the festival, where the agenda was stuffed with panels about A.I.’s being “unleashed” and affecting the “future of creativity,” the company plastered artificially generated posters that mocked the original reactions to Marcel.
“Is it OK to talk about A.I. at Cannes now?” the ads joked.

The answer is clear. The industry has wanted to discuss little else since late last year, when OpenAI released its ChatGPT chatbot and set off a global arms race around generative artificial intelligence. McDonald’s asked the chatbot to name the most iconic burger in the world and splashed the answer — the Big Mac — across videos and billboards, drawing A.I.-generated retorts from fast food rivals. Coca-Cola recruited digital artists to generate 120,000 riffs on its brand imagery, including its curved bottle and swoopy logo, using an A.I. platform built in part by OpenAI.

Photo caption: Coca-Cola’s “Create Real Magic” campaign solicited artwork made using an A.I. platform with access to archival images, including its logo and polar bear mascot. Cr...
Nvidia Accelerates AI Startup Investments, Nears Deal With Cloud Provider Lambda Labs - The Information
July 18, 2023 5:00 AM PDT
Photo: Nvidia CEO Jensen Huang. Art by Clark Miller

Nvidia is known for its stranglehold over the market for the data center chips that power ChatGPT and other artificial intelligence software. But in a matter of a few months, Nvidia has also become one of the biggest venture capital investors in an important class of customers who need its chips: cloud and AI software startups.
In the latest example, Nvidia is nearing a deal to take an equity stake in Lambda Labs, a startup that competes with Amazon Web Services and other established cloud providers in renting servers with Nvidia chips to other companies, according to people with knowledge of the situation. A deal, which could total $300 million in new capital and might value the company on paper at more than $1 billion including the new capital, would bring Nvidia closer to Lambda after the chip designer took a similar equity stake in CoreWeave, a Lambda rival.
This AI Watches Millions Of Cars And Tells Cops If You're Driving Like A Criminal - Forbes
Artificial intelligence is helping American cops look for “suspicious” patterns of movement, digging through license plate databases with billions of records. A drug trafficking case in New York has uncloaked — and challenged — one of the biggest rollouts of the controversial technology to date.
By Thomas Brewster, Forbes Staff

In March of 2022, David Zayas was driving down the Hutchinson River Parkway in Scarsdale. His car, a gray Chevrolet, was entirely unremarkable, as was its speed. But to the Westchester County Police Department, the car was cause for concern and Zayas a possible criminal; its powerful new AI tool had identified the vehicle’s behavior as suspicious. Searching through a database of 1.6 billion license plate records collected over the last two years from locations across New York State, the AI determined that Zayas’ car was on a journey typical of a drug trafficker. According to a Department of Justice prosecutor filing, it made nine trips from Massachusetts to different parts of New York between October 2020 and August 2021 following routes known to be used by narcotics pushers and for conspicuously short stays. So on March 10 last year, Westchester PD pulled him over and searched his car, finding 112 grams of crack cocaine, a semiautomatic pistol and $34,000 in cash inside, according to court documents. A year later, Zayas pleaded guilty to a drug trafficking charge.

The previously unreported case is a window into the evolution of AI-powered policing, and a harbinger of the constitutional issues that will inevitably accompany it. Typically, Automatic License Plate Recognition (ALPR) technology is used to search for plates linked to specific crimes. But in this case it was used to examine the driving patterns of anyone passing one of Westchester County’s 480 cameras over a two-year period. Zayas’ lawyer Ben Gold contested the AI-gathered evidence against his client, decrying it as “dragnet surveillance.”

And he had the data to back it up. A FOIA he filed with the Westchester police revealed that the ALPR system was scanning over 16 million license plates a week, across 480 ALPR cameras. Of those systems, 434 were stationary, attached to poles and signs, while the remaining 46 were mobile, attached to police vehicles. The AI was not just looking at license plates either. It had also been taking notes on vehicles’ make, model and color — useful when a plate number for a suspect vehicle isn’t visible or is unknown.

To Gold, the system’s analysis of every car caught by a camera amounted to an “unprecedented search.” “This is the specter of modern surveillance that the Fourth Amendment must guard against,” he wrote in his motion to suppress the evidence. “This is the systematic development and deployment of a vast surveillance network that invades society’s reasonable expectation of privacy.
“With no judicial oversight this type of system operates at the caprice of every officer with access to it.”
Gold declined to comment further on the case. Westchester County Police Department did not respond to requests for comment.

Reckoning with Rekor

Westchester PD’s license plate surveillance system was built by Rekor, a $125 million market cap AI company trading on the NASDAQ. Local reporting and public government data reviewed by Forbes show Rekor has sold its ALPR tech to at least 23 police departments and local governments across America, from Lauderhill, Florida to San Diego, California. That’s not including more than 40 police departments across New York state who can avail themselves of Westchester County PD’s system, which runs out of its Real-Time Crime Center. “You've seen the systems totally metastasize to the point that the capabilities of a local police department would really shock most people.”

Rekor’s big sell is that its software doesn’t require new cameras; it can be installed in already deployed ones, whether owned by the government, a business or a consumer. It also runs the Rekor Public Safety Network, an opt-in project that has been aggregating vehicle location data from customers for the last three years, since it launched with information from 30 states that, at the time, were re...
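The kind of pattern analysis described in the Zayas filing, flagging plates whose read history shows repeated short out-and-back trips, can be sketched in a few lines. The record layout, the thresholds and the sample reads below are all invented for illustration; this is not Rekor's or Westchester County's actual logic.

```python
# Illustrative sketch of pattern-flagging over license-plate reads.
# Record format, thresholds and data are invented; this is not the
# actual Rekor / Westchester County system.
from collections import defaultdict
from datetime import datetime, timedelta

# (plate, camera_location, timestamp) tuples, as an ALPR system might log them.
reads = [
    ("ABC1234", "I-95 NY/CT border", datetime(2021, 3, 1, 9, 0)),
    ("ABC1234", "Hutchinson River Pkwy", datetime(2021, 3, 1, 10, 0)),
    ("ABC1234", "I-95 NY/CT border", datetime(2021, 3, 1, 14, 0)),  # back out same day
]

def flag_short_turnarounds(reads, max_stay=timedelta(hours=8), min_trips=5):
    """Flag plates repeatedly seen at a border camera and seen there again
    within max_stay, i.e. the 'conspicuously short stays' cited in the filing."""
    by_plate = defaultdict(list)
    for plate, location, ts in reads:
        if "border" in location:
            by_plate[plate].append(ts)
    flagged = {}
    for plate, times in by_plate.items():
        times.sort()
        short_trips = sum(1 for a, b in zip(times, times[1:]) if b - a <= max_stay)
        if short_trips >= min_trips:
            flagged[plate] = short_trips
    return flagged

print(flag_short_turnarounds(reads, min_trips=1))
```

Even a crude heuristic like this scales to millions of reads, which is why critics argue the constitutional question is less about the math than about applying it to every passing car.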
Wix will let you build an entire website using only AI prompts - The Verge
Template-based website builder Wix has announced that, soon, it’ll let you create entire websites by typing a description into a box and answering a few follow-up questions. Everything, from the design to text and images, will then be automatically generated for you, and from the looks of things, it’ll be pretty fast. In the past, companies like Wix and WordPress.com have let you create websites using templates you can tweak to your liking. But Wix says its new AI Site Generator feature goes beyond templates, using AI and algorithms to create a “unique” website. At the moment, the company uses a mix of ChatGPT and its own tools to accomplish all of this. ChatGPT will handle text creation, with the company’s own AI models doing the rest. If Wix’s new AI site generator works well enough, it could make website building more approachable than it’s ever been. A video showcases the new feature, starting with “Wix AI” asking what the website will be for. The user writes they want a fitness site and gives some details about what they offer. The chatbot then asks if it should take into account any other details and, in the last step, shows some sample pictures of fitness trainers and asks the user to pick the one that “best represents your desired outcome in terms of style and feel.” The user chooses one and then clicks “Generate Site.” After that, a bunch of whooshing graphics show that it’s generating the site, and within seemingly seconds, it’s done. (The video looks a tad sped up at this point, but it’s hard to say for sure.) To edit, the user opens a prompt window to ask questions about alternative layouts and styles, for instance, and then chooses from the provided options. The resulting website is impressive. It’s perhaps a little generic, but it’s far more professional-looking than anything I’ve ever managed to create using WordPress or Squarespace, and it’s in another world entirely from what you could’ve made with Wix 10 years ago. It has fancy overlaying scrolling animations and graphics, image cutouts, and more — all the stuff you’ve come to expect from the fancy, bloated websites of today. What isn’t clear, and what I’m hoping the company can elaborate on soon, is how much direct control you’ll get over things. It looks as though you can enter custom text and upload your own images, but nothing in the video leads me to believe you can make a site that doesn’t look, well, like it was made with templates. Also, while the copy Wix’s AI creates looks good on this hypothetical website, we already know chatbots have trouble with the truth, so it’s hard to know how tedious it’ll be to make sure the details are right on your own site. My final curiosity is this: with AI companies increasingly targeted in copyright infringement lawsuits, who’s responsible if someone sues you over a Wix AI-created part of your site? It seems like content made by Wix’s “Artificial Design Intelligence (ADI)” will be the company’s burden to bear, with one page stating that “Wix ADI-generated content, as content provided by Wix is subject to copyright and other intellectual property rights, under local and international law.” But there’s no mention of ChatGPT. Does Wix consider ChatGPT-generated text to fall under the same copyright? All in all, it looks like a pretty incredible tool that will make it way easier to make a website than the often clunky, cluttered interfaces that most website builders seem to offer these days. Will Wix’s new AI site generator be used to further fill the web with junk? Oh, probably. 
But it also has the potential, with friendly enough pricing, to help small businesses and entrepreneurs put a more professional foot forward than they could have otherwise.
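The workflow Wix demonstrates, a short business description plus a few follow-up answers turned into page copy by a language model, might look roughly like the sketch below. This is not Wix's pipeline: Wix says it mixes ChatGPT with its own models, whose details are not public, so the use of OpenAI's chat-completions client and the model name here are assumptions.

```python
# Rough sketch of the prompt-to-site-copy step, assuming OpenAI's public
# chat-completions API (v1 Python client). NOT Wix's actual pipeline; the
# model name is an assumption and the layout/design steps are omitted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_site_copy(description: str, followup_answers: dict[str, str]) -> str:
    """Turn a business description and follow-up answers into homepage copy."""
    details = "\n".join(f"- {q}: {a}" for q, a in followup_answers.items())
    prompt = (
        "Write homepage copy (headline, tagline, three short sections) for this "
        f"business:\n{description}\n\nAdditional details:\n{details}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; Wix has not said which it uses
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(draft_site_copy(
        "A one-person personal-training studio focused on strength coaching",
        {"target audience": "beginners", "tone": "friendly and direct"},
    ))
```

The fact-checking concern raised above applies directly here: whatever the model returns still needs a human pass before it goes live on a real business's site.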