A New Frontier for Travel Scammers: A.I.-Generated Guidebooks - The New York Times


Aug. 5, 2023 Updated 9:42 a.m. ET

In March, as she planned for an upcoming trip to France, Amy Kolsky, an experienced international traveler who lives in Bucks County, Pa., visited Amazon.com and typed in a few search terms: travel, guidebook, France. Titles from a handful of trusted brands appeared near the top of the page: Rick Steves, Fodor’s, Lonely Planet.

Also among the top search results was the highly rated “France Travel Guide,” by Mike Steves, who, according to an Amazon author page, is a renowned travel writer.

“I was immediately drawn by all the amazing reviews,” said Ms. Kolsky, 53, referring to what she saw at that time: universal raves and more than 100 five-star ratings. The guide promised itineraries and recommendations from locals. Its price tag of $16.99, compared with $25.49 for Rick Steves’s book on France, also caught Ms. Kolsky’s attention. She quickly ordered a paperback copy, printed by Amazon’s on-demand service.

When it arrived, Ms. Kolsky was disappointed by its vague descriptions, repetitive text and lack of itineraries. “It seemed like the guy just went on the internet, copied a whole bunch of information from Wikipedia and just pasted it in,” she said. She returned it and left a scathing one-star review.

Though she didn’t know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon.

The books are the result of a swirling mix of modern tools: A.I. apps that can produce text and fake portraits; websites with a seemingly endless array of stock photos and graphics; self-publishing platforms, like Amazon’s Kindle Direct Publishing, with few guardrails against the use of A.I.; and the ability to solicit, purchase and post phony online reviews, which runs counter to Amazon’s policies and may soon face increased regulation from the Federal Trade Commission.

The use of these tools in tandem has allowed the books to rise near the top of Amazon search results and sometimes garner Amazon endorsements such as “#1 Travel Guide on Alaska.”

A recent Amazon search for the phrase “Paris Travel Guide 2023,” for example, yielded dozens of guides with that exact title. One, whose author is listed as Stuart Hartley, boasts, ungrammatically, that it is “Everything you Need to Know Before Plan a Trip to Paris.” The book itself has no further information about the author or publisher. It also has no photographs or maps, though many of its competitors have art and photography easily traceable to stock-photo sites.

More than 10 other guidebooks attributed to Stuart Hartley have appeared on Amazon in recent months that rely on the same cookie-cutter design and use similar promotional language. The Times also found similar books on a much broader range of topics, including cooking, programming, gardening, business, crafts, medicine, religion and mathematics, as well as self-help books and novels, among many other categories.

Amazon declined to answer a series of detailed questions about the books. In a statement provided by email, Lindsay Hamilton, a spokeswoman for the company, said that Amazon is constantly evaluating emerging technologies. “All publishers in the store must adhere to our content guidelines,” she wrote. “We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to these guidelines.”

The Times ran 35 passages from the Mike Steves book through an artificial intelligence detector from Originality.ai. The detector works by analyzing millions of records known to be created by A.I. and millions created by humans, and learning to recognize the differences between the two, explained Jonathan Gillham, the company’s founder.

The detector assigns a score between 0 and 100, based on the percentage chance its machine-learning model believes the content was A.I.-generated. All 35 passages scored a perfect 100, meaning they were almost certainly produced by A.I. The company claims that the version of its detector used by The Times catches more than 99 percent of A.I. passages and mistakes human text for A.I. on just under 1.6 percent of tests.

The Time...
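To make the detector's scoring concrete, the logic described above, a per-passage score from 0 to 100 compared against some decision threshold, can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, the threshold, and the interface are assumptions, not Originality.ai's actual API.

```python
# Hypothetical sketch of how a 0-100 A.I.-likelihood score might be
# applied to a batch of passages. The function and threshold are
# illustrative assumptions, not the detector vendor's real interface.

def flag_ai_passages(scores, threshold=50):
    """Return indices of passages whose score meets or exceeds the threshold."""
    return [i for i, score in enumerate(scores) if score >= threshold]

# The Times ran 35 passages through the detector; all scored 100.
scores = [100] * 35
flagged = flag_ai_passages(scores)
print(f"{len(flagged)} of {len(scores)} passages flagged as likely A.I.")
# → 35 of 35 passages flagged as likely A.I.
```

Under this reading, the reported error rates mean a passage scoring 100 is flagged with near certainty, while human-written text would be wrongly flagged on just under 1.6 percent of tests.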
