Artificial Intelligence Meets Cancel Culture - A Look Into Our Potential Future


In short, don't give the kill decision to an autonomous system; above all, don't let the programming of that system be left in the hands of those who have no regard for life. Such a system, once set into motion, could spell the end of humanity as we know it.

I'll say that again:

Once people turn over control of what the military calls "the kill decision" to an automated weapons platform, it will no longer be controlled by people. These arrogant people even call it SKYNET!

Please watch, share, embed, and tell people that our "smartest" people are making what may be the most idiotic decision of all, one that could bring our world to an end. It's hard to believe these people have actually thought through the consequences of what they want to launch.

The technology to do this is already here, and there are those who want this operational now.

Geneva — At this week’s meeting of the U.N. Convention on Certain Conventional Weapons (CCW), Dr. Larry Lewis, director of the Center for Autonomy and Artificial Intelligence at CNA, warned that human intervention at the “trigger pull” will not eliminate the risks of autonomous weapons. The convention’s purpose is to “ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” In recent years, the subject of lethal autonomous weapons systems (LAWS) has been a major focus of the CCW.

Lewis led an event at the convention titled “The Human-Machine Relationship: Lessons From Military Doctrine and Operations.” The event was organized by CNA and the University of Amsterdam and was attended by officials and diplomats, including the ambassador of the Netherlands. Lewis was joined by Merel Ekelhof, a Ph.D. researcher at VU University Amsterdam, and U.S. Air Force Lt. Col. Matt King.

Over the past few years, there has been a growing consensus within the CCW that human control over the targeting process is a solution to the risks posed by LAWS. Lewis argued that this approach is too narrow because humans are fallible.

To illustrate this, he walked his audience through a military incident in which humans made mistakes in the targeting process, resulting in civilian casualties. He discussed a 2010 incident in Uruzgan, Afghanistan. Military helicopters were ordered to strike a group of SUVs approaching a U.S. position because a Predator drone crew believed they were an imminent threat. The drone’s crew failed to observe children in the vehicles, and the attack resulted in 23 civilian casualties.

While some groups have also discussed banning LAWS entirely, Lewis believes this would be a mistake. He suggests that those concerned about civilian casualties should modify their idea of “evil killer robots.” In fact, Lewis said, “You can actually create better humanitarian outcomes with AI.”

One area where AI could help with the targeting process is in limiting unforeseen consequences of military action. For example, in 2017 a Coalition airstrike targeted a bridge crossing the Euphrates River. The strike caused no civilian casualties, but the bridge carried a main water pipeline, so destroying it cut off Raqqa’s water supply. Machine learning can improve and expedite the pattern-of-life analysis that could prevent such unforeseen consequences.
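To make that idea concrete, here is a minimal, purely illustrative sketch of what an automated pattern-of-life check might look like: it aggregates observed civilian activity around a candidate target and flags the target for human review when sustained use is detected. The function names, data shape, and threshold are assumptions for illustration only, not anything described by Lewis or CNA.

```python
# Illustrative sketch only: a toy "pattern-of-life" check. All names,
# data shapes, and thresholds are hypothetical, not from the article.
import numpy as np


def civilian_usage_score(hourly_observations: np.ndarray) -> float:
    """Estimate how heavily civilians rely on a piece of infrastructure.

    hourly_observations: array of shape (days, 24) with observed civilian
    activity counts (e.g., foot or vehicle traffic) near the target.
    Returns mean daily activity normalized by the busiest day (0.0-1.0).
    """
    daily_totals = hourly_observations.sum(axis=1)
    peak = daily_totals.max()
    return float(daily_totals.mean() / peak) if peak > 0 else 0.0


def flag_for_review(hourly_observations: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag a prospective target for human review when sustained civilian
    use is detected -- the kind of second-order effect (such as a water
    pipeline carried by a bridge) a purely visual check might miss."""
    return civilian_usage_score(hourly_observations) >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 30 days of hourly activity counts around a bridge.
    observations = rng.poisson(lam=5.0, size=(30, 24))
    print("Flag for review:", flag_for_review(observations))
```

In practice, the point is not the specific model but the workflow: routinely surfacing evidence of civilian dependence on a target before a strike, faster and more consistently than a human analyst could do alone.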

Lewis, who has spent more than 20 years at CNA providing analysis to the military on such issues as civilian casualties and fratricide, recently published a report titled “Redefining Human Control: Lessons from the Battlefield for Autonomous Weapons.” Both the report and the press release are available on CNA’s website.
