When A.I. Lies About You, There’s Little Recourse - The New York Times


People have little protection or recourse when the technology creates and spreads falsehoods about them.

Aug. 3, 2023, Updated 11:27 a.m. ET

[Photo: Marietje Schaake, a former member of the European Parliament and a technology expert, was falsely labeled a terrorist last year by BlenderBot 3, an A.I. chatbot developed by Meta. Credit: Ilvy Njiokiktjien for The New York Times]

Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments. Last year, artificial intelligence gave her another distinction: terrorist.

The problem? It isn’t true.

While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.

“I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” Ms. Schaake said in an interview. “First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations.”

Artificial intelligence’s struggles with accuracy are now well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions that disrupted a court case, a pseudo-historical image of a 20-foot-tall monster standing next to two humans, even sham scientific papers. In its first public demonstration, Google’s Bard chatbot flubbed a question about the James Webb Space Telescope.

The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.

One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment claim that he said had never been made, which supposedly took place on a trip he had never taken, for a school where he was not employed, citing a nonexistent newspaper article as evidence. High school students in New York created a deepfake, or manipulated, video of a local principal that portrayed him in a racist, profanity-laced rant. A.I. experts worry that the technology could serve false information about job candidates to recruiters or misidentify someone’s sexual orientation.

Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist. She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world, such as Iran.

Later updates to BlenderBot seemed to fix the issue for Ms. Schaake. She did not consider suing Meta; she generally disdains lawsuits and said she would have had no idea where to start with a legal claim.
Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake.

[Image: The BlenderBot 3 exchange that labeled Ms. Schaake a terrorist. Meta said the A.I. model had combined two unrelated pieces of information to create an inaccurate sentence about her.]

Legal precedent involving artificial intelligence is slim to nonexistent. The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.

An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.
