Is the AI Takeover Imminent?


Last week, a U.S. Air Force colonel addressed a military technology summit in the U.K. and said that during a training exercise, an AI drone turned on its operator and “killed” him in the simulation. When the remote operator told the drone not to attack a designated target, it disobeyed and actually attempted to attack the control tower.
Again, this was not even a real operation – just a simulation. But the comments caused quite a stir about the potential dangers of AI-operated weapons and military tech, and now the Air Force is trying to play cleanup.
As reported by Fox News:
“The U.S. Air Force on Friday is pushing back on comments an official made last week in which he claimed that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned against and attacked its human user, saying the remarks ‘were taken out of context and were meant to be anecdotal.’
U.S. Air Force Colonel Tucker ‘Cinco’ Hamilton made the comments during the Future Combat Air & Space Capabilities Summit in London hosted by the Royal Aeronautical Society, which brought together about 70 speakers and more than 200 delegates from around the world representing the media and those who specialize in the armed services industry and academia.
‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Air Force Spokesperson Ann Stefanek told Fox News. ‘It appears the colonel’s comments were taken out of context and were meant to be anecdotal.’”
So let’s see: scientists create artificial intelligence, the military seeks to weaponize it, and it turns on its operators, threatening their safety. There’s something very familiar about all this. Oh, right – we’ve seen it in about a thousand science fiction movies, from The Terminator to 2001: A Space Odyssey to WarGames, and it rarely ends well for the humans.
Now is that a big leap from this simulation to Arnold Schwarzenegger’s Judgment Day? Probably. Hopefully. But it does raise the question of whether we can trust the AI defenders, including the Air Force, when they try to downplay what happened. Imagine if it had been a live operation with a fully armed drone that decided to tell its operator, “You know what? I don’t have to do what you say. And, in fact, I’m going to silence you.”
This situation further illustrates that there are still many unanswered questions and concerns when it comes to AI, and there needs to be some form of regulatory oversight before we give computers full control of deadly weapons.
Even experts like Elon Musk – who stand to profit from the proliferation of AI – are telling the world we all need to pump the brakes and put some rules in place.
And take it out of the military sphere for a moment. There is a real risk of AI taking jobs away from humans. Corporations love the idea, because AI doesn’t need a salary. It doesn’t take sick days or have children to pick up. While AI can be an extremely beneficial aid in a thousand different industries, if it makes it impossible for people to feed their families, that is a bad thing, and it must be regulated.
This Air Force episode is a clear indicator that we have a long way to go before we can truly relax about AI – if we ever can at all.
Today’s full Sekulow broadcast includes more analysis of this alarming story. We’re also joined by ACLJ Senior Advisor and former U.S. Acting Director of National Intelligence Ric Grenell.
