
Top AI researcher dismisses AI 'extinction' fears, challenges 'hero scientist' narrative - VentureBeat

June 1, 2023 10:02 AM

Kyunghyun Cho, a prominent AI researcher and an associate professor at New York University, has expressed frustration with the current discourse around AI risk. While luminaries like Geoffrey Hinton and Yoshua Bengio have recently warned of potential existential threats from the future development of artificial general intelligence (AGI) and called for regulation or a moratorium on research, Cho believes these “doomer” narratives are distracting from the real issues, both positive and negative, posed by today’s AI.

In a recent interview with VentureBeat, Cho — who is highly regarded for his foundational work on neural machine translation, which helped lead to the development of the Transformer architecture that ChatGPT is based on — expressed disappointment about the lack of concrete proposals at the recent Senate hearings for regulating AI’s current harms, as well as the lack of discussion on how to boost its beneficial uses.

Though he respects researchers like Hinton and his former supervisor Bengio, Cho also warned against glorifying “hero scientists” or taking any one person’s warnings as gospel, and shared his concerns about the Effective Altruism movement that funds many AGI efforts.

(Editor’s note: This interview has been edited for length and clarity.)

VentureBeat: You recently expressed disappointment on Twitter about the recent AI Senate hearings. Could you elaborate on that and share your thoughts on the “Statement of AI Risk” signed by Geoffrey Hinton, Yoshua Bengio and others?

Kyunghyun Cho: First of all, I think there are just too many letters. Generally, I’ve never signed any of these petitions. I always tend to be a bit more careful when I put my name on something. I don’t know why people are signing their names so lightly.

As far as the Senate hearings, I read the entire transcript and felt a bit sad. It’s very clear that nuclear weapons, climate change and potential rogue AI can be dangerous. But there are many other harms being done by AI right now, as well as immediate benefits we see from it, yet there was not a single concrete proposal or discussion about what we can do about either.

For example, Lindsey Graham pointed out the military use of AI. That is actually happening now, but Sam Altman couldn’t give a single proposal on how the immediate military use of AI should be regulated. At the same time, AI has the potential to optimize healthcare so that we can implement a better, more equitable healthcare system, but none of that was discussed.

I’m disappointed by a lot of this discussion about existential risk; now they even call it literal “extinction.” It’s sucking the air out of the room.

VB: Why do you think that is? Why is the “existential risk” discussion sucking the air out of the room, to the detriment of more immediate harms and benefits?

Kyunghyun Cho: In a sense, it is a great story: that this AGI system we create turns out to be as good as we are, or better than us. That is precisely the fascination humanity has had from the very beginning.
The Trojan horse [that appears harmless but is malicious] — that’s a similar story, right? It’s about exaggerating things that are different from us but are smart like us.

In my view, it’s good that the general public is fascinated and excited by the scientific advances we’re making. The unfortunate thing is that the scientists as well as the policymakers — the people who are making decisions or creating these advances — are only being either positively or negatively excited by such advances, rather than being critical about them. Our job as scientists, and also as policymakers, is to be critical about many of these apparent advances that may have both positive and negative impacts on society. But at the moment, AGI is kind of a magic wand th...
