Congress Really Wants to Regulate A.I., But No One Seems to Know How - The New Yorker



In February, 2019, OpenAI, a little-known artificial-intelligence company, announced that its large-language-model text generator, GPT-2, would not be released to the public “due to our concerns about malicious applications of the technology.” Among the dangers, the company stated, was the potential for misleading news articles, online impersonation, and the automated production of abusive or faked social-media content, spam, and phishing content. As a consequence, OpenAI proposed that “governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.” This week, four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met to discuss “Oversight of A.I.: Rules for Artificial Intelligence.” As has been the case with other tech hearings on the Hill, this one came after a new technology with the capacity to fundamentally alter our social and political lives was already in circulation. Like many Americans, the lawmakers became concerned about the pitfalls of large-language-model artificial intelligence in March, when OpenAI released GPT-4, the latest and most polished iteration of its text generator. At the same time, the company added it to a chatbot it had launched in November that used GPT to answer questions in a conversational way, with a confidence that is not always warranted, because GPT has a tendency to make things up. Despite this unreliability, within two months ChatGPT became the fastest-growing consumer application in history, reaching a hundred million monthly users by the beginning of this year. It has more than a billion monthly page visits. OpenAI has also released DALL-E, an image generator that creates original pictures from a descriptive verbal prompt.
Like GPT, DALL-E and other text-to-image tools have the potential to blur the line between reality and invention, a prospect that heightens our susceptibility to deception. Recently, the Republican Party released the first fully A.I.-generated attack ad; it shows what appear to be actual dystopian images from a Biden Administration’s second term. The Senate hearing featured three experts: Sam Altman, the C.E.O. of OpenAI; Christina Montgomery, the chief privacy-and-trust officer at I.B.M.; and Gary Marcus, a professor emeritus at New York University and an A.I. entrepreneur. But it was Altman who garnered the most attention. Here was the head of the company with the hottest product in tech—one that has the potential to upend how business is conducted, how students learn, how art is made, and how humans and machines interact—and what he told the senators was that “OpenAI believes that regulation of A.I. is essential.” He is eager, he wrote in his prepared testimony, “to help policymakers as they determine how to facilitate regulation that balances incentivizing safety, while ensuring that people are able to access the technology’s benefits.” Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall having executives come before lawmakers and “plead” with them to regulate their products—but this was not, in fact, the first time that a tech C.E.O. had sat in a congressional hearing room and called for more regulation. Most notably, in 2018, in the wake of the Cambridge Analytica scandal—when Facebook gave the Trump-aligned political-consultancy firm access to the personal information of nearly ninety million users, without their knowledge—the C.E.O.
of Facebook, Mark Zuckerberg, told some of the same senators that he was open to more government oversight, a position he reiterated the next year, writing in the Washington Post, “I believe we need a more active role for governments and regulators.” (At the same time, Facebook was paying lobbyists millions of dollars a year to stave off government regulation.) Like Zuckerberg, Altman prefaced his appeal for more regulation with an explanation of the guardrails that his company already employs, such as training its models to reject certain kinds of “anti-social” queries—like one I posed to ChatGPT recently, when I asked it to write the code to 3-D-print a Glock. (It did, however...
