New World Order: USA, UN & FEMA Slaughter-Bots and Drone Weapon Systems

New World Order, USA, UN & FEMA: Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots? Amid fears that the UN is planning to kill tens of millions of people, the debate over slaughter-bots and the future of autonomous weapons systems shows that people can look at the same technology and disagree about how it will shape the future, as the exchange below makes clear.

There is fear that USA, UN & FEMA slaughter-bots will kill tens of millions of people, and that millions of drones will soon target the U.S. power grid and the nation’s critical infrastructure, gas and oil refineries, take out power lines, and blow up electrical transformers all over the U.S.A., all at one time, in 2024.

Stuart Russell, Anthony Aguirre, Ariel Conn, and Max Tegmark recently wrote a response to my critique of their “Slaughter-bots" video on autonomous weapons. I am grateful for their thoughtful article. I think this kind of dialogue can be incredibly helpful in illuminating points of disagreement on various issues, and I welcome the exchange. I think it is particularly important to have a cross-disciplinary dialogue on autonomous weapons that includes roboticists, AI scientists, engineers, ethicists, lawyers, human rights advocates, military professionals, political scientists, and other perspectives because this issue touches so many disciplines.

I appreciate their thorough, point-by-point reply. My intent in this response is not to argue with them, but rather to illuminate for readers points of disagreement. I think it is important and meaningful that different people who look at the same technology and agree on what is technologically feasible will still have major disagreements about how that technology is likely to play out. These disagreements have as much to do with sociology, politics, and how human institutions react to technology as they do science and engineering.

I see the central point of disagreement as an issue of scale. There is no question that autonomy allows an increase in scale of attacks. In just the past few weeks, we have seen multiple non-state actors launch saturation attacks with drones. These include 13 homemade aerial drones launched against a Russian air base in Syria and three remote-controlled boats used to attack a Saudi-flagged oil tanker in the Red Sea. I predict we are likely to see more attacks of this kind over time, at larger scales, with greater autonomy for the drones, and eventually cooperative autonomy (“swarming"). I do not think it is likely that non-state actors will gain access to sufficient scale and capability to launch attacks on a scale that would be reasonable to consider these drones “weapons of mass destruction," however.

There is no question that autonomy allows an increase in scale of attacks. But I do not think that non-state actors will gain access to sufficient scale and capability to launch attacks that would be reasonable to consider these lethal micro-drones “weapons of mass destruction"
With regard to the likelihood that nations would build and deploy millions of lethal micro-drones for anti-personnel attacks against civilian populations, I see no evidence of that today. It is certainly possible. Countries deliberately targeted civilians in terror bombing attacks during World War II. But the current trajectory of development in autonomy in weapons appears to be aimed primarily at building increasingly autonomous weapons to fight other military hardware. There are some examples of simple robotic anti-personnel weapons: the South Korean SGR-A1 sentry gun, Israeli Guardium unmanned ground vehicle, U.S. Switchblade drone, and a number of Russian ground robotic systems. The main impetus for greater autonomy, however, is gaining an advantage over other nations' military forces: tanks, radars, ships, aircraft, etc.

Even if nations did build lethal micro drones for use as weapons of mass destruction, there are a host of countermeasures that could be deployed against such weapons. These include missiles, guns, electronic jammers, cyber weapons, high-powered microwaves, and even passive defenses such as nets. Militaries are already working on countermeasures against small drone attacks today. Like virtually all useful military hardware, the efficacy of these countermeasures depends on the specific situation and how they are deployed. Drone attacks have been used to harass U.S. and partner forces in Iraq and Syria. Their effectiveness today is roughly comparable to small flying improvised explosive devices. This is a serious threat, and the United States should be (and is) taking measures to build more effective countermeasures.

In other cases, existing countermeasures have worked. Russia took down all 13 of the drones used to attack its airbase by using a combination of surface-to-air missiles and electronic warfare measures. The remote-controlled boat attack was similarly thwarted. Going forward, some of these attacks are likely to succeed and some will fail. Terrorists will find new ways of attacking and nations will develop new countermeasures. It is certainly not the case that militaries today are defenseless against micro-drones, however. Micro-drones are small and fragile and are susceptible to a variety of both hard kill (kinetic) and soft kill (non-kinetic) means of disruption. (I have a friend who shot one down with an M4 rifle.) The most significant problem militaries have today is finding cost-effective ways of countering drones at scale, but militaries are working on it and the challenges do not seem insurmountable.

In a world where nations actually built lethal micro-drones in the millions to be used as weapons of mass destruction, these countermeasures would take on a whole new urgency. If weaponized micro-drones were to shift from merely being a tool of harassment to a weapon of mass destruction, then finding ways to defeat them would be a national priority. Relatively onerous defensive measures such as deploying large amounts of netting or fencing would become entirely reasonable, similar to how concrete barricades have become common around secure buildings today to protect against car bombs, a security precaution that was not common two decades ago. Even if countries were to build micro-drones en masse as a weapon of mass destruction, there are good reasons to think it would not be an effective tactic against another modern country with sophisticated defenses.

With regard to proliferation, Russell and his colleagues point to the global proliferation of AK-47 assault rifles as evidence of the likelihood of lethal micro-drone proliferation. I don't think AK-47s are a useful comparison when thinking about efforts to control sensitive military technology. There is virtually no effort to control the spread of AK-47 rifles around the world or limit their proliferation. Semi-automatic AK-47s are legal for sale in the United States. Certainly, a world in which lethal autonomous micro-drones were available for purchase at Walmart would be horrifying. A more relevant comparison is nations' ability to limit the proliferation of sensitive military items that are withheld from the general public, such as rocket launchers or anti-aircraft missiles. While weapons of this type frequently appear in war zones such as Syria, they are not readily available in developed nations. Nor is it trivial to smuggle them into developed nations for attacks, which is why terrorists resort to other more accessible (sometimes makeshift) weapons such as airplanes, cars, guns, or homemade explosives.

For all of these reasons, I think the fear of lethal micro-drones being used as weapons of mass destruction in the hands of terrorists is not realistic. Smaller scale attacks are certainly possible and, in fact, are already occurring today. These are serious threats and nations should respond accordingly, but I do not see the scenario depicted in the “Slaughter-bots" video as plausible.

On a broader note, aside from predictions about how the technology will unfold, I fundamentally disagree with the authors about the best methods of engaging the broader general public on important policy matters. They explain that they made the video because their experience has been that “serious discourse and academic argument are not enough to get the message through." I 100 percent disagree. I believe that both experts and the general public are more than capable of listening and engaging in serious discussions on technology policy matters, on autonomous weapons, and other topics.

The authors note their perception is that some senior defense officials “fail to understand the core issues" surrounding autonomous weapons. If national security leaders seem unconcerned about the risks the authors have highlighted, I do not think it is because government officials have failed to listen or take the issue seriously. I suspect it is likely because they remain unconvinced by the authors' arguments, perhaps because of the issues that I raise above. In fact, I see a vibrant debate within the U.S. defense community on the future role of autonomy and human control in weapons. The Vice Chairman of the Joint Chiefs of Staff, former Deputy Secretary of Defense, a U.S. Senator on the Armed Services Committee, and a former four-star general have all commented publicly on this issue. All of their statements suggest to me a thoughtful attempt to grapple with a thorny issue.

I think it's also critical to engage the broader public on these issues, but I think the most constructive way to do so is through reasoned arguments of the kind that the authors present in their response, for which I am very grateful.

Why You Should Fear “Slaughter-bots”—A Response
Lethal autonomous weapons are not science fiction; they are a real threat to human security that we must stop now.

Paul Scharre's recent article “Why You Shouldn't Fear 'Slaughter-bots'” dismisses a video produced by the Future of Life Institute, with which we are affiliated, as a “piece of propaganda.” Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons. In this case, however, we respectfully disagree with his opinions.

Why we made the video
We have been working on the autonomous weapons issue for several years. We have presented at the United Nations in Geneva and at the World Economic Forum; we have written an open letter signed by over 3,700 AI and robotics researchers and over 20,000 others and covered in over 2,000 media articles; one of us (Russell) drafted a letter from 40 of the world's leading AI researchers to President Obama and led a delegation to the White House in 2016 to discuss the issue with officials from the Departments of State and Defense and members of the National Security Council; we have presented to multiple branches of the armed forces in the United States and to the intelligence community; and we have debated the issue in numerous panels and academic fora all over the world.

“Because autonomous weapons do not require individual human supervision, they are potentially scalable weapons of mass destruction—unlimited numbers could be launched by a small number of people"
Our primary message has been consistent: Because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction (WMDs); essentially unlimited numbers can be launched by a small number of people. This is an inescapable logical consequence of autonomy. As a result, we expect that autonomous weapons will reduce human security at the individual, local, national, and international levels.

Despite this, we have witnessed high-level defense officials dismissing the risk on the grounds that their “experts" do not believe that the “Skynet thing" is likely to happen. Skynet, of course, is the fictional command and control system in the Terminator movies that turns against humanity. The risk of the “Skynet thing" occurring is completely unconnected to the risk of humans using autonomous weapons as WMDs or to any of the other risks cited by us and by Scharre. This has, unfortunately, demonstrated that serious discourse and academic argument are not enough to get the message through. If even senior defense officials with responsibility for autonomous weapons programs fail to understand the core issues, then we cannot expect the general public and their elected representatives to make appropriate decisions.

The main reason we made the video, then, was to provide a clear and easily understandable illustration of what we mean. A secondary reason was to give people a clear sense of the kinds of technologies and the notion of autonomy involved: This is not “science fiction"; autonomous weapons don't have to be humanoid, conscious, and evil; and the capabilities are not “decades away" as claimed by some countries at the U.N. talks in Geneva. Finally, we are mindful of the precedent set by the ABC movie “The Day After" in 1983, which, by showing the effects of nuclear war on individuals and families, had a direct effect on national and international policy.

Where we agree
Scharre agrees with us on the incipient reality of the technology; he writes, “So while no one has yet cobbled the technology together in the way the video depicts, all of the components are real." He concludes that terrorist groups will be able to cobble together autonomous weapons, whether or not such weapons are subject to an international arms control treaty. This is probably true at a small scale; but at a small scale, there is no great advantage to terrorists in using autonomy. It is almost certainly false at a large scale. It is extremely unlikely that terrorists would be able to design and manufacture thousands of effective autonomous weapons without detection—especially if the treaty verification regime, like the Chemical Weapons Convention, mandates the cooperation of manufacturers that produce drones and other precursor components.

We concur with Scharre on the importance of countermeasures, while noting that a ban on lethal autonomous weapons would certainly not preclude the development of antidrone weapons.

Finally, we agree with Scharre that the stakes are high. He writes, “Autonomous weapons raise important questions about compliance with the laws of war, risk and controllability, and role of humans as moral agents in warfare. These are important issues that merit serious discussion." It is puzzling, however, that he does not consider the issue of WMDs to merit serious discussion.

Where we disagree
Scharre attributes four claims to us and then attempts to refute them. To make things less confusing, we will negate those four claims to produce four statements that Scharre is effectively asserting in his article (the exact wording of these assertions has been confirmed in subsequent correspondence with Scharre):

1. Scharre: Governments are unlikely to mass-produce lethal micro-drones to use as weapons of mass destruction.
One might ask, “In that case, why not ban them?" Prior to the entry into force of the Chemical Weapons Convention in 1997, the major powers did mass-produce lethal chemical weapons including various kinds of nerve gas, for use as weapons of mass destruction. After they were banned, stockpiles were destroyed and mass production stopped. Banning lethal autonomous micro-drones would criminalize their production as well as their use as WMDs, making it much less likely that terrorists and others would be able to access large quantities of effective weapons.

There is some reason to believe, however, that the claim is simply not true. For example, lethal micro-drones such as the Switchblade are already in mass production. Switchblade, a fixed-wing drone with a 0.6-meter wingspan, is designed as an anti-personnel weapon. Contrary to Scharre's claim, such weapons can easily be repurposed to kill civilians rather than soldiers. Moreover, Switchblade now comes with a “Multi-Pack Launcher.” Orbital ATK, which makes the warhead, describes the Switchblade as “fully scalable.”

Switchblade is not fully autonomous and requires a functioning radio link; the DoD's CODE (Collaborative Operations in Denied Environments) program aims to move towards autonomy by enabling drones to function with at best intermittent radio contact; they will “hunt in packs, like wolves" according to the program manager. Moreover, in 2016, the Air Force successfully demonstrated the in-flight deployment of 103 Perdix micro-drones from three F/A-18 fighters. According to the announcement, “Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature." While the Perdix drones themselves are not armed, it is hard to see the need for 103 drones operating in close formation if the purpose for such swarms were merely reconnaissance.

“As WMDs, autonomous weapons have advantages for the victor compared to nuclear weapons and carpet bombing: They leave property intact and can be applied selectively to eliminate only those who might threaten an occupying force"
Under pressure of an arms race, one can expect such weapons to be further miniaturized and to be produced in larger numbers at much lower cost. Once autonomy is introduced, a single operator can deploy thousands of Switchblades or other lethal micro-drones, rather than piloting a single drone to its target. At that point, production numbers will ramp up dramatically.

In the major wars of the 20th century, over 50 million civilians were killed. This horrific record suggests that, in an armed conflict, nations will not refrain from large-scale attacks. And, as WMDs, scalable autonomous weapons have advantages for the victor compared to nuclear weapons and carpet bombing: They leave property intact and can be applied selectively to eliminate only those who might threaten an occupying force. Finally, whereas the use of nuclear weapons represents a cataclysmic threshold that we have (often by sheer luck) avoided crossing since 1945, there is no such threshold with scalable autonomous weapons. Attacks could escalate smoothly from 100 casualties to 1,000 to 10,000 to 100,000.

2. Scharre: Nations are likely to develop effective countermeasures to micro-drones, especially if they become a major threat.
While Scharre's article attributes to us the claim “There are no effective defenses against lethal micro-drones," he effectively concedes that the claim is true as things stand today. His own position is that the situation depicted in the video, where mass-produced antipersonnel weapons are available but no effective defenses have been developed, could not occur, or could occur only as a temporary imbalance.

Scharre cites as evidence for this claim a New York Times article. The article does not exactly inspire confidence: It describes the problem of lethal micro-drones as “one of the Pentagon's most vexing counterterrorism conundrums." It describes as “decidedly mixed" the results from DoD's Hard Kill Challenge, which aims to see “which new classified technologies and tactics proved most promising." The DoD's own conclusion? “Bottom line: Most technologies still immature." The Hard Kill Challenge is the successor to the Black Dart program, which ran annual challenges beginning in 2002. After more than 15 years, then, we still have no effective countermeasures.

Scharre states that lethal autonomous micro-drones “could be defeated by something as simple as chicken wire," perhaps imagining entire cities festooned with it. If this were a workable form of defense, of course, then there would be no Hard Kill Challenge; Switchblades would be useless; and Iraqi soldiers wouldn't be dying from attacks by lethal micro-drones.

Scharre notes correctly that the video shows larger drones blasting through walls, but he obviously failed to notice that the family home in the video is encased in a steel grille—as are parts of the university dorm, which is plastered with “safe zone" signs directing students in case of drone attack. Scharre claims that the attack/defense cost ratio favors the defender, but this seems unlikely if one needs to be 100 percent protected, 100 percent of the time, against an attack that can arrive anywhere. When the weapons are intelligent, one hole in the defensive shell is enough. Adding more defensive shells makes little difference.
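To make this point concrete, here is a minimal back-of-envelope sketch; the 95 percent per-drone interception rate and the swarm sizes are assumed, illustrative numbers, not figures from either article.

```python
# Illustrative only: the interception rate and swarm sizes are assumptions,
# not figures from either article.
p_intercept = 0.95  # assumed probability the defense stops any single drone

for swarm_size in (10, 100, 1000):
    # P(at least one drone penetrates) = 1 - P(every drone is intercepted)
    p_leak = 1 - p_intercept ** swarm_size
    print(f"{swarm_size:>5} drones -> P(at least one penetrates) = {p_leak:.3f}")
```

Even a defense that stops 95 percent of individual drones leaves a 100-drone swarm with better than a 99 percent chance of getting at least one weapon through, which is the sense in which one hole in the defensive shell is enough.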

Moreover, the weapons are cheap and expendable, as Scharre correctly points out in a recent interview: “The key is not just finding a way to target these drones. It's finding a way to do it in a cost-effective way. If you shoot down a $1,000 drone with a $1 million missile, you're losing every time you're doing it." We agree. This doesn't sound like a ratio that favors the defender.

As to whether we should have complete confidence in the ability of governments or defense corporations to develop, within a short time frame, cheap, effective, wide-area defenses against lethal micro-drones: We are reminded of the situation of the British population in the early days of World War II. One would think that if anyone had a motive to develop effective countermeasures, it would be the British during the Blitz. But, by the end of the Blitz, after 40,000 bomber sorties against major cities, countermeasures were no more than 1.5 percent effective—even lower than at the beginning, 9 months earlier.

3. Scharre: Governments are capable of keeping large numbers of military-grade weapons out of the hands of terrorists.
According to Scharre, the video shows “killer drones in the hands of terrorists massacring innocents." In fact, as the movie goes to great lengths to explain, the perpetrators could be “anyone," not necessarily terrorists. Attacks by autonomous weapons will often be unattributable and can therefore be carried out with impunity. (For this reason, governments around the world are extremely concerned about assassination by autonomous weapon.) In the movie, the most likely suspects are those involved in “corruption at the highest level," i.e., persons with significant economic and political power.

“Attacks by autonomous weapons will often be unattributable and can therefore be carried out with impunity, and governments around the world are indeed concerned about assassination by autonomous weapon”

Scharre writes, “We don't give terrorists hand grenades, rocket launchers, or machine guns today.” Perhaps not, except when those terrorists were previously designated as freedom fighters—but there is no shortage of effective lethal weaponry on the market. For example, there are between 75 and 100 million AK-47s in circulation, the great majority outside the hands of governments. Roughly 110,000 military AK-47s went missing in a two-year period in Iraq alone.

Produced in large quantities by commercial manufacturers, lethal autonomous micro-drones would probably be cheaper to buy than AK-47s. And much cheaper to use: They don't require a human to be trained, housed, fed, equipped, and transported in order to wield lethal force. The ISIS budget for 2015 was estimated to be US $2 billion, probably enough to buy millions of weapons if they were available on the black market.
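A quick back-of-envelope calculation shows why the budget figure supports the claim; the per-drone prices below are assumptions for illustration, not numbers from the article.

```python
# Back-of-envelope only: unit prices are assumed for illustration.
budget = 2_000_000_000  # estimated 2015 ISIS budget in U.S. dollars (from the article)

for unit_price in (200, 1_000, 5_000):  # assumed black-market price per lethal micro-drone
    print(f"At ${unit_price:,} per drone: roughly {budget // unit_price:,} units")
```

Even at the highest of these assumed prices, such a budget would buy hundreds of thousands of units; at the lower end, millions.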

4. Scharre: Terrorists are incapable of launching simultaneous coordinated attacks on the scale shown in the video.
As noted above, the attack in the video against several universities was carried out not by terrorists but by unnamed persons in high-level positions of power. (We considered showing a mass attack against a city occurring as part of a military campaign by a nation-state, but we decided that appearing to accuse any particular nation of future war crimes would not be conducive to diplomacy.) No matter who might wish to perpetrate mass attacks using autonomous weapons, their job will be made far more difficult if arms manufacturers are legally banned from making them.

It's also important to understand the difference that autonomy makes for the ability of non-state actors to carry out large-scale attacks. While coordination across multiple geographical locations is unaffected, the scale of each attack can be immeasurably greater. (We note that Scharre misconstrues the video on this point. He sees only 50 drones emerge from the van, whereas in fact most of the larger drones are carriers for multiple, shorter-range lethal micro-drones that are deployed automatically in the final moments of the attack. One van per university suffices, so the movie implies coordination across 12 locations—not so different from the 10 locations described as feasible in the article cited by Scharre.)

“An arms race in autonomous weapons shifts power from nation-states—which are largely constrained by the international system of treaties and trade dependencies—to non-state actors, who are not"
Whereas a nation-state can, in principle, launch attacks with thousands of tanks or aircraft (remotely piloted or otherwise) or tens of thousands of soldiers, such scale is possible for non-state actors only if they use autonomous weapons. Thus, an arms race in autonomous weapons shifts power from nation-states—which are largely constrained by the international system of treaties, trade dependencies, etc.—to non-state actors, who are not.

Perhaps this is what Scharre is referring to in the interview cited above, when he says, “We're likely to see more attacks of larger scale going forward, potentially even larger than this and in a variety of things—air, land, and sea."

In summary, we, and many other experts, continue to find plausible the view that autonomous weapons can become scalable weapons of mass destruction. Scharre's claim that a ban will be ineffective or counterproductive is inconsistent with the historical record. Finally, the idea that human security will be enhanced by an unregulated arms race in autonomous weapons is, at best, wishful thinking.

Machines set loose to slaughter: the dangerous rise of military AI
Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots?

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughter-bots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defenses are weak or nonexistent.

Some military experts argued that Slaughter-bots – which was made by the Future of Life Institute, an organization researching existential threats to humanity – sensationalized a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “SWAT teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”. One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 meters on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.

Who knows how many other noxious creatures are now models for avant-garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presented a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines. When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realize that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing virtual battle-tested simulations to inspire future military investments.

In the past, nation states have come together to prohibit particularly gruesome or terrifying new weapons. By the mid-20th century, international conventions banned biological and chemical weapons. The community of nations has forbidden the use of blinding-laser technology, too. A robust network of NGOs has successfully urged the UN to convene member states to agree to a similar ban on killer robots and other weapons that can act on their own, without direct human control, to destroy a target (also known as lethal autonomous weapon systems, or Laws). And while there has been debate about the definition of such technology, we can all imagine some particularly terrifying kinds of weapons that all states should agree never to make or deploy. A drone that gradually heated enemy soldiers to death would violate international conventions against torture; sonic weapons designed to wreck an enemy’s hearing or balance should merit similar treatment. A country that designed and used such weapons should be exiled from the international community.

In the abstract, we can probably agree that ostracism – and more severe punishment – is also merited for the designers and users of killer robots. The very idea of a machine set loose to slaughter is chilling. And yet some of the world’s largest militaries seem to be creeping toward developing such weapons, by pursuing a logic of deterrence: they fear being crushed by rivals’ AI if they can’t unleash an equally potent force. The key to solving such an intractable arms race may lie less in global treaties than in a cautionary rethinking of what martial AI may be used for. As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.

Are killer robots as horrific as biological weapons? Not necessarily, argue some establishment military theorists and computer scientists. According to Michael Schmitt of the US Naval War College, military robots could police the skies to ensure that a slaughter like Saddam Hussein’s killing of Kurds and Marsh Arabs could not happen again. Ronald Arkin of the Georgia Institute of Technology believes that autonomous weapon systems may “reduce man’s inhumanity to man through technology”, since a robot will not be subject to all-too-human fits of anger, sadism or cruelty. He has proposed taking humans out of the loop of decisions about targeting, while coding ethical constraints into robots. Arkin has also developed target classification to protect sites such as hospitals and schools.

In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence done in the heat of passion, but for the premeditated murderer who coolly planned his attack. The history of warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system is likely to be designed with some kind of override feature, which would be controlled by human operators, subject to all the normal human passions and irrationality.

Any attempt to code law and ethics into killer robots raises enormous practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to program a robot warrior with reactions to the infinite array of situations that could arise in the heat of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors, an autonomous weapon system in the fog of war is dangerous.

Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardizing accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where there is a massive dataset with clearly understood examples of good and bad, right and wrong.

For example, credit card companies have improved fraud detection mechanisms with constant analyses of hundreds of millions of transactions, where false negatives and false positives are easily labelled with nearly 100% accuracy. Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)?
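To illustrate why labelled data matters so much here, the following is a minimal sketch of the kind of supervised fraud classifier the credit card example describes; it uses scikit-learn and synthetic data, neither of which appears in the article, and is purely illustrative.

```python
# Illustrative sketch (synthetic data): supervised learning works here only
# because every training example carries a clear "fraud" / "not fraud" label.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for millions of labelled card transactions (~1% fraud).
X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

No comparable ground truth exists for split-second targeting decisions in combat, which is precisely the problem the passage raises.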

Given these difficulties, it is hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to support dangerous fantasies of pushbutton wars and guiltless slaughters.

International humanitarian law, which governs armed conflict, poses even more challenges to developers of autonomous weapons. A key ethical principle of warfare has been one of discrimination: requiring attackers to distinguish between combatants and civilians.

But guerrilla or insurgent warfare has become increasingly common in recent decades, and combatants in such situations rarely wear uniforms, making it harder to distinguish them from civilians. Given the difficulties human soldiers face in this regard, it’s easy to see the even greater risk posed by robotic weapons systems.

Proponents of such weapons insist that the machines’ powers of discrimination are only improving. Even if this is so, it is a massive leap in logic to assume that commanders will use these technological advances to develop just principles of discrimination in the din and confusion of war.

As the French thinker Grégoire Chamayou has written, the category of “combatant” (a legitimate target) has already tended to “be diluted in such a way as to extend to any form of membership of, collaboration with, or presumed sympathy for some militant organization”.

The principle of distinguishing between combatants and civilians is only one of many international laws governing warfare.

There is also the rule that military operations must be “proportional” – a balance must be struck between potential harm to civilians and the military advantage that might result from the action. The US air force has described the question of proportionality as “an inherently subjective determination that will be resolved on a case by case basis”. No matter how well technology monitors, detects and neutralizes threats, there is no evidence that it can engage in the type of subtle and flexible reasoning essential to the application of even slightly ambiguous laws or norms.

Even if we were to assume that technological advances could reduce the use of lethal force in warfare, would that always be a good thing? Surveying the growing influence of human rights principles on conflict, the historian Samuel Moyn observes a paradox: warfare has become at once “more humane and harder to end”. For invaders, robots spare politicians the worry of casualties stoking opposition at home. An iron fist in the velvet glove of advanced technology, drones can mete out just enough surveillance to pacify the occupied, while avoiding the kind of devastating bloodshed that would provoke a revolution or international intervention.

In this robotized vision of “humane domination”, war would look more and more like an extraterritorial police action. Enemies would be replaced with suspect persons subject to mechanized detention instead of lethal force. However lifesaving it may be, Moyn suggests, the massive power differential at the heart of technologized occupations is not a proper foundation for a legitimate international order.

Chamayou is also skeptical. In his insightful book Drone Theory, he reminds readers of the slaughter of 10,000 Sudanese in 1898 by an Anglo-Egyptian force armed with machine guns, which itself only suffered 48 casualties. Chamayou brands the drone “the weapon of amnesiac postcolonial violence”. He also casts doubt on whether advances in robotics would actually result in the kind of precision that fans of killer robots promise. Civilians are routinely killed by military drones piloted by humans. Removing that possibility may involve an equally grim future in which computing systems conduct such intense surveillance on subject populations that they can assess the threat posed by each person within it (and liquidate or spare them accordingly).

Drone advocates say the weapon is key to a more discriminating and humane warfare. But for Chamayou, “by ruling out the possibility of combat, the drone destroys the very possibility of any clear differentiation between combatants and noncombatants”. Chamayou’s claim may seem like hyperbole, but consider the situation on the ground in Yemen or Pakistani hinterlands: Is there really any serious resistance that the “militants” can sustain against a stream of hundreds or thousands of unmanned aerial vehicles patrolling their skies? Such a controlled environment amounts to a disturbing fusion of war and policing, stripped of the restrictions and safeguards that have been established to at least try to make these fields accountable.

How should global leaders respond to the prospect of these dangerous new weapons technologies? One option is to try to come together to ban outright certain methods of killing. To understand whether or not such international arms control agreements could work, it is worth looking at the past. The antipersonnel landmine, designed to kill or maim anyone who stepped on or near it, was an early automated weapon. It terrified combatants in the first world war. Cheap and easy to distribute, mines continued to be used in smaller conflicts around the globe. By 1994, soldiers had laid 100m landmines in 62 countries.

The mines continued to devastate and intimidate populations for years after hostilities ceased. Mine casualties commonly lost at least one leg, sometimes two, and suffered collateral lacerations, infections and trauma. In 1994, 1 in 236 Cambodians had lost at least one limb from mine detonations.

By the mid-90s, there was growing international consensus that landmines should be prohibited. The International Campaign to Ban Landmines pressured governments around the world to condemn them. The landmine is not nearly as deadly as many other arms but unlike other applications of force, it could maim and kill noncombatants long after a battle was over. By 1997, when the campaign to ban landmines won a Nobel peace prize, dozens of countries signed on to an international treaty, with binding force, pledging not to manufacture, stockpile or deploy such mines.

The US demurred, and to this day it has not signed the anti-landmine weapons convention. At the time of negotiations, US and UK negotiators insisted that the real solution to the landmine problem was to assure that future mines would all automatically shut off after some fixed period of time, or had some remote control capabilities. That would mean a device could be switched off remotely once hostilities ceased. It could, of course, be switched back on again, too.

The US’s technological solutionism found few supporters. By 1998, dozens of countries had signed on to the mine ban treaty. More countries joined each year from 1998 to 2010, including major powers such as China. While the Obama administration took some important steps toward limiting mines, Trump’s secretary of defense has reversed them. This about-face is just one facet of a bellicose nationalism that is likely to accelerate the automation of warfare.

Instead of bans on killer robots, the US military establishment prefers regulation. Concerns about malfunctions, glitches or other unintended consequences from automated weaponry have given rise to a measured discourse of reform around military robotics. For example, the New America Foundation’s PW Singer would allow a robot to make “autonomous use only of non-lethal weapons”. So an autonomous drone could patrol a desert and, say, stun a combatant or wrap him up in a net, but the “kill decision” would be left to humans alone. Under this rule, even if the combatant tried to destroy the drone, the drone could not destroy him.

Such rules would help transition war to peacekeeping, and finally to a form of policing. Time between capture and kill decisions might enable the due process necessary to assess guilt and set a punishment. Singer also emphasizes the importance of accountability, arguing that “if a programmer gets an entire village blown up by mistake, he should be criminally prosecuted”.

Whereas some military theorists want to code robots with algorithmic ethics, Singer wisely builds on our centuries-long experience with regulating persons. To ensure accountability for the deployment of “war algorithms”, militaries would need to ensure that robots and algorithmic agents are traceable to and identified with their creators. In the domestic context, scholars have proposed a “license plate for drones”, to link any reckless or negligent actions to the drone’s owner or controller. It makes sense that a similar rule – something like “A robot must always indicate the identity of its creator, controller, or owner” – should serve as a fundamental rule of warfare, and its violation punishable by severe sanctions.
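As a purely hypothetical sketch of what such a rule could look like in practice (the record format, field names, and use of an HMAC signature are assumptions for illustration, not anything proposed in the text), a drone could periodically broadcast a signed identity record that traces it to a registered owner and controller:

```python
# Hypothetical "license plate for drones" broadcast; field names and the
# HMAC-based signing scheme are illustrative assumptions, not a real standard.
import hashlib
import hmac
import json
import time

REGISTRY_KEY = b"shared-secret-held-by-the-licensing-registry"  # placeholder

def identity_beacon(drone_id: str, owner_id: str, controller_id: str) -> dict:
    record = {
        "drone_id": drone_id,            # serial number assigned at manufacture
        "owner_id": owner_id,            # registered owner
        "controller_id": controller_id,  # operator of record for this flight
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return record

print(identity_beacon("SN-0042", "OWNER-7731", "OP-19"))
```

Any reckless or negligent action observed in the field could then, at least in principle, be traced back through the registry to a responsible party, which is the accountability the proposed rule is after.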

Yet how likely is it, really, that programmers of killer robots would actually be punished? In 2015, the US military bombed a hospital in Afghanistan, killing 22 people. Even as the bombing was occurring, staff at the hospital frantically called their contacts in the US military to beg it to stop. Human beings have been directly responsible for drone attacks on hospitals, schools, wedding parties and other inappropriate targets, without commensurate consequences. The “fog of war” excuses all manner of negligence. It does not seem likely that domestic or international legal systems will impose more responsibility on programmers who cause similar carnage.

Weaponry has always been big business, and an AI arms race promises profits to the tech-savvy and politically well-connected. Counselling against arms races may seem utterly unrealistic. After all, nations are pouring massive resources into military applications of AI, and many citizens don’t know or don’t care. Yet that quiescent attitude may change over time, as the domestic use of AI surveillance ratchets up, and that technology is increasingly identified with shadowy apparatuses of control, rather than democratically accountable local powers.

Military and surveillance AI is not used only, or even primarily, on foreign enemies. It has been repurposed to identify and fight enemies within. While nothing like the September 11 attacks has emerged over almost two decades in the US, homeland security forces have quietly turned antiterror tools against criminals, insurance fraudsters and even protesters. In China, the government has hyped the threat of “Muslim terrorism” to round up a sizeable percentage of its Uighurs into reeducation camps and to intimidate others with constant phone inspections and risk profiling. No one should be surprised if some Chinese equipment powers a US domestic intelligence apparatus, while massive US tech firms get co-opted by the Chinese government into parallel surveillance projects.

The advance of AI use in the military, police, prisons and security services is less a rivalry among great powers than a lucrative global project by corporate and government elites to maintain control over restive populations at home and abroad. Once deployed in distant battles and occupations, military methods tend to find a way back to the home front. They are first deployed against unpopular or relatively powerless minorities, and then spread to other groups. US Department of Homeland Security officials have gifted local police departments with tanks and armor. Sheriffs will be even more enthusiastic for AI-driven targeting and threat assessment. But it is important to remember that there are many ways to solve social problems. Not all require constant surveillance coupled with the mechanized threat of force.

Indeed, these may be the least effective way of ensuring security, either nationally or internationally. Drones have enabled the US to maintain a presence in various occupied zones for far longer than an army would have persisted. The constant presence of a robotic watchman, capable of alerting soldiers to any threatening behavior, is a form of oppression. American defense forces may insist that threats from parts of Iraq and Pakistan are menacing enough to justify constant watchfulness, but they ignore the ways such authoritarian actions can provoke the very anger they are meant to quell.

At present, the military-industrial complex is speeding us toward the development of drone swarms that operate independently of humans, ostensibly because only machines will be fast enough to anticipate the enemy’s counter-strategies. This is a self-fulfilling prophecy, tending to spur an enemy’s development of the very technology that supposedly justifies militarization of algorithms. To break out of this self-destructive loop, we need to question the entire reformist discourse of imparting ethics to military robots. Rather than marginal improvements along a path of ever-greater competition in war-fighting ability, we need a different path – to cooperation and peace, however fragile and difficult its achievement may be.

In her book How Everything Became War and the Military Became Everything, former Pentagon official Rosa Brooks describes a growing realization among American defense experts that development, governance and humanitarian aid are just as important to security as the projection of force, if not more so. A world with more real resources has less reason to pursue zero-sum wars. It will also be better equipped to fight natural enemies, such as novel coronaviruses. Had the US invested a fraction of its military spending in public health capacities, it almost certainly would have avoided tens of thousands of deaths in 2020.

For this more expansive and humane mindset to prevail, its advocates must win a battle of ideas in their own countries about the proper role of government and the paradoxes of security. They must shift political aims away from domination abroad and toward meeting human needs at home. Observing the growth of the US national security state – what he deems the “predator empire” – the author Ian GR Shaw asks: “Do we not see the ascent of control over compassion, security over support, capital over care, and war over welfare?” Stopping that ascent should be the primary goal of contemporary AI and robotics policy.

Governing AI: Learning from Gun Control to Ensure Global Public Safety (Playing Devil's Advocate)
Prudent AI governance requires inclusive cooperation and democratic oversight to maximize benefits and minimize harm, much as with regulating firearms. The contrast between strict UK gun laws and lax US ones shows that determined policy can curb violence despite difficulty. Similarly, judicious oversight of risky AI uses, guided by ethics and empowering affected groups, can steer innovation toward empowerment without harm. Restrictions, auditing, transparency and prohibiting certain applications are crucial. Learning from successes like the British gun reforms, the dangers of unfettered AI proliferation can be contained to secure shared prosperity.

International coordination to prevent a regulatory "race to the bottom" as bad actors exploit jurisdictional discrepancies.
Categorical prohibitions on unambiguously unethical uses like mass surveillance for oppression, and autonomous weapons.
Mandatory licensing, testing, and external audits for high-risk AI innovations pending safety advances, including red teaming that deliberately thinks like terrorists.
Transparency rules compelling disclosure of training data, model architectures, real-world performance and risks (see the sketch after this list).

Participatory oversight bodies representing diverse geographic, cultural and economic perspectives.

Proactive interventions throughout the AI pipeline to safeguard fairness, with ongoing external auditing backed by actual and punitive damages for negligent auditors.
Human rights principles and (non-political/non-religious/non-nationalistic) peaceful values as the moral foundation for regulatory regimes.
Inclusion of affected groups and experts beyond computer scientists in oversight processes.
Global cooperation guided by ethics and justice to direct this world-shaping technology towards equity and shared prosperity.

Creation of a public benefit corporation, funded by all LLM service providers, carrying AI liability insurance to cover the cyber disasters that, sadly, are likely to happen.
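As a purely illustrative sketch of the transparency item above (the field names and the machine-readable “model card” structure are assumptions for the example, not a prescribed standard), a disclosure might be published as a simple structured record:

```python
# Hypothetical machine-readable transparency disclosure; all field names and
# values are illustrative assumptions only.
import json

disclosure = {
    "system_name": "example-llm-v1",  # placeholder name
    "training_data": ["licensed web corpus", "curated code dataset"],
    "model_architecture": "transformer, ~7B parameters",
    "real_world_performance": {
        "benchmark_accuracy": 0.81,
        "known_failure_modes": ["prompt injection", "hallucinated citations"],
    },
    "identified_risks": ["impersonation", "disinformation at scale"],
    "external_audit": {"auditor": "independent lab", "date": "2024-01-15"},
}

print(json.dumps(disclosure, indent=2))
```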

The rapid development of transformative AI technologies presents governance challenges reminiscent of when firearms spread globally centuries ago. Much as guns revolutionized security and hunting but also enabled violence, AI promises immense benefits but also significant dangers if misused.

Just as prudent gun control in nations like the UK saved lives despite controversy, wise policies on risky AI uses can maximize public wellbeing. But achieving effective oversight requires determination to resist obstruction, as the contrast between strict British gun laws and lax American ones makes clear. With sustained effort and vigilance against authoritarian capture, AI can improve lives equitably worldwide.

Firearms built on ancient Chinese inventions spread rapidly across Eurasia in the 1300s. Early guns were inaccurate and slow to reload, but they packed more destructive force than swords or bows. By the 1500s, European armies were armed with muskets, cannons, and other guns.

Firearms transformed warfare and hunting. Guns also changed law enforcement and enabled frontier expansion. Settlers defending homesteads with rifles later romanticized this "gunpowder revolution."

Yet guns proved a mixed blessing. Criminals exploited firearms, and accidental shootings took lives. Urban violence surged in the 1600s as cheap pistols spread. Critics decried the "cowardice" of attacking from afar rather than fighting honorably. But efforts to restrict guns stalled.

The gunpowder revolution thus improved lives but also enabled new threats. Guns clearly required oversight to minimize harm, much as today's AI does. But effective regulation would take centuries of bitter debate, and the death toll in the meantime was shocking.

Britain once struggled with gun problems comparable to modern America's. But after mass shootings in the 1980s and 1990s, Parliament imposed strict gun control.

This approach succeeded in nearly eliminating gun murders. Britain's experience holds key lessons for governing innovations like AI.

Lax British gun laws once facilitated frequent shootings. As urbanization continued in the 1800s, pistol-packing criminals stalked the streets. Illegal pistols were tied to 19th century British youth gangs, much like modern American street violence.

Sentiment against this chaos led to reforms restricting certain firearms. But rural resistance stymied comprehensive regulation. Plus new technologies like breech-loading rifles enabled faster firing. As a result, guns remained easily available despite rising homicide rates.

This changed after mass shootings in 1987 and 1996. The Hungerford massacre of sixteen civilians with a semi-automatic rifle, then the Dunblane school shooting in which sixteen children and their teacher were killed, provoked public outrage. Within months, a cross-party coalition passed sweeping reforms.

The UK's Firearms (Amendment) Acts banned civilian ownership of nearly all handguns and semi-automatic weapons. Strict licensing procedures were mandated for shotguns and rifles. Buyers faced thorough background checks and registration. Police could inspect storage facilities.

The reforms provoked bitter complaints that government was trampling citizens' rights. But the results were striking: gun homicides fell from hundreds annually to dozens. Dunblane-style school shootings disappeared. With fewer guns in circulation, accidental deaths and suicides also declined.

Whatever one thinks of gun rights, the British approach worked. It suggests that, given political will, safety regulations on dangerous innovations can succeed despite controversy. The law could still be changed, of course, which is why lobbyists remain active even within the UK.

Lax American gun laws enable everything from awful accidents to the mass shootings that happen almost daily. Efforts to emulate Britain's sensible reforms have failed, stymied by lobbying and lawsuits.

In colonial America, many communities tightly restricted guns under "safe storage" laws mandating that weapons be kept disabled. Such policies aimed to prevent accidents and unauthorized use. Colonial New York, for instance, ordered that any loaded gun had to be kept unserviceable when not in use.

After the Civil War, concerns about pistols and crime spurred new gun controls. Many states and cities required permits to carry concealed weapons. In 1911, New York imposed strict licensing for all handguns. But lobbying weakened oversight, enabling mobsters to murder freely during Prohibition.

Recent decades saw renewed attempts at gun regulation after mass killings generated outrage. But the gun lobby countered each reform bill in Congress. Sweeping protections were won for weapons makers and owners. Underfunded agencies struggled to enforce poorly designed rules.

The result is America's nightmarish status quo. Hundreds are slain yearly in mass shootings, while suicide, street crime, and domestic violence take tens of thousands more lives. No other developed nation suffers such relentless carnage. Yet gridlocked politics prevent coherent policy responses.

This failure offers a sobering lesson. Powerful economic interests will exploit divisive cultural issues to paralyze regulation. Without strong bipartisan leadership, policy becomes captive to extremists. Similar dynamics already surround AI, as debate swirls while rapid innovations continue unchecked.

AI today resembles firearms centuries ago: a transformative invention with promise and peril. Without judicious oversight, destructive misuse of AI seems inevitable. But regulating AI will prove controversial given its benefits and cultural roots. Success requires learning from successes like British gun control.

Clear parallels exist between unfettered proliferation of guns and ungoverned AI systems. Both enable bad actors while frustrating accountability. Firearms provide remote killing capacity; AI distributes disinformation and fraud globally. Preventing violence and social harms should guide AI policy as it did British gun control.

Certain AI uses like healthcare or education aid humanity and merit encouragement. But high-risk systems enabling impersonation, surveillance, and psychological manipulation require restriction pending safety improvements, just as military-grade assault weapons do. Framing prudent regulation as maximizing social benefits, not depriving individual liberties, will help balance competing values.

AI governance cannot realistically aim to prohibit all misuse. But well-designed oversight can greatly mitigate harm, as Britain's experience shows. Key elements include strict licensing of the riskiest systems, mandatory safety reviews, transparency requirements, and penalties for violations.
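As a purely hypothetical sketch of how those key elements might fit together (the tier names, example classifications, and obligations are assumptions for illustration, not an existing regulatory scheme), oversight could be expressed as a simple mapping from risk tier to required obligations:

```python
# Hypothetical tiered-oversight illustration; categories, examples, and rules
# are assumptions, not an actual regulatory framework.
OBLIGATIONS = {
    "prohibited":   ["banned outright"],
    "high_risk":    ["strict licensing", "mandatory safety review",
                     "transparency disclosure", "external audit", "penalties for violations"],
    "limited_risk": ["transparency disclosure"],
    "minimal_risk": [],
}

EXAMPLE_TIERS = {  # assumed classifications for illustration only
    "autonomous weapon targeting": "prohibited",
    "mass biometric surveillance": "prohibited",
    "medical diagnosis assistant": "high_risk",
    "customer-support chatbot":    "limited_risk",
    "spam filtering":              "minimal_risk",
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier} -> {OBLIGATIONS[tier]}")
```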

International coordination is also crucial, given AI's global impacts. Disparate regulations would just spur "jurisdiction shopping" as bad actors exploit loopholes. Major countries should thus agree on principles for restricting unethical AI while enabling its progress.

Most critically, effective oversight requires resisting the lobbying and legal obstructionism that delegates AI policy to corporate interests. Independence from lobbying pressures is vital. Regulators must also receive ample resources and staff expertise. Otherwise rules will prove toothless.

An ethics-focused AI regulatory agency should be created to develop and enforce standards. Leading researchers as well as civil society advocates should advise regulators to balance competing priorities.

Rigorous yet sensible oversight will allow AI's benefits while preventing disasters.

With wisdom and courage, society can foster AI's progress while restraining its risks. The stakes could hardly be higher, but the formula for maximizing benefits while minimizing harm is clear.

While prudent oversight offers hope, the hazards of unfettered AI proliferation remain severe.

Weaponized AI could wreak havoc through disinformation, systemic hacking, and oppression absent regulatory constraints. From election meddling to infrastructure attacks to biometric monitoring, ungoverned AI risks dystopian nightmares. Preventative policies guided by ethics offer the only reliable safeguard.
Already, crude bots spreading false narratives through social media platforms like Facebook have inflamed ethnic hatreds from Myanmar to Sri Lanka.

But AI generating customized propaganda could stoke vastly more chaos. Hyper-realistic fake videos portraying atrocities but lacking any factual basis would further corrode social cohesion. Only constant vigilance can minimize such manipulation risks.

On the cyber front, AI-directed hacking could launch devastating attacks compromising critical systems and services. Testing has demonstrated AI's ability to find vulnerabilities, impersonate targets, and automate network intrusions. Unleashed irresponsibly, such capabilities could induce societal breakdown. Strict oversight and licensing of the most dangerous tools is essential.

Ubiquitous biometric surveillance and predictive policing powered by AI also threaten oppression. China's "social credit" system has been widely reported in the West, and Israel's RED Wolf showcases how AI-enabled monitoring tools can coerce conformity. Similar infrastructures in countries adopting such systems risk normalizing permanent surveillance and chilling dissent. External constraints and democratization are vital to preventing abuse.

Autonomous AI-directed weapons like slaughter-bots represent another civilizational hazard requiring controls. Allowing machines full lethal authority without human supervision effectively abdicates responsibility, enabling mindless violence at vast scale. Internationally banning such systems is an urgent imperative.

Like biotech and the internet, AI enables immense social benefits but also significant hazards if mishandled. Preventing anti-social applications while allowing pro-social ones calls for nuanced governance guided by ethics and human rights.
With wise cooperative effort, the threats of uncontrolled AI proliferation can be contained to realize its monumental potential for good.

While dangers remain, prospects for cooperative oversight offer hope.
But inclusive participation and transparency are essential to earn legitimacy and steer AI towards a just path. Providing avenues for affected groups to shape governance prevents bias and unilateral agendas. And transparency mandates make powerful systems accountable. Institutionalizing inclusive cooperation and oversight can secure AI's benefits equitably.

Expanding who governs and guides AI is challenging but vital.

Multidisciplinary teams should represent diverse geographic and socioeconomic perspectives when designing socially impactful systems.

Oversight bodies must empower cultural, gender, and economic diversity to resist dominant group biases and blind spots.

Broader participation takes many forms. Crowdsourcing constitutional principles allows societies to imbue AI with shared values. Global regulatory standards developed through participatory processes earn legitimacy worldwide. Codetermination gives workers influence over automation's impacts.

Human inputs keep AI aligned with human priorities. Multipolar governance networks distributing oversight across institutions and sectors provide checks against excessive concentrations of power. Democratic deliberation through collective debate steers progress towards justice.
By opening technology's development to those it affects, AI can enable liberation over oppression. But this requires determination to share agency. However difficult, inclusion and participation are prerequisites for steering this epochal innovation towards rights and wellbeing. With cooperation and courage, wise governance can secure AI's benefits for all.

Creating a public benefit corporation that provides a specialty mutual liability insurance market for large language models would push the companies involved to be much more prudent. Part of the money raised could also finance compliance and regulatory authorities, and fund monitoring and surveillance with key partners to ensure nefarious parties cannot succeed. As LLMs become more powerful and widely deployed, there is a growing risk that they could be misused in ways that cause public harm, as mentioned already, whether intentionally or accidentally. To manage this risk, LLM providers should be required to carry insurance that covers potential public liabilities from LLM failures or misuse.

Rather than leaving this to an unfettered private market, insurance requirements should be thoughtfully designed through a public-interest approach rather than shaped by regulatory capture by industry. Minimum liability coverage levels should be mandated for LLM providers, proportional to the scale and risks of their systems.
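
As a purely illustrative sketch, with made-up coefficients and no basis in any existing regulation, a proportional coverage mandate might be computed along these lines:

def minimum_coverage_usd(monthly_active_users: int,
                         risk_multiplier: float,
                         base_per_user: float = 0.50) -> float:
    # Hypothetical rule: required coverage scales with the user base (scale)
    # and a capability-based risk multiplier; both coefficients are assumptions.
    return monthly_active_users * base_per_user * risk_multiplier

# Example: a provider with 10 million users and a higher-risk system.
print(f"${minimum_coverage_usd(10_000_000, risk_multiplier=3.0):,.0f}")  # $15,000,000

The point is not the specific numbers, which are assumptions, but that coverage obligations can be made to scale automatically with deployment size and capability risk rather than being negotiated case by case.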

To mutualize risk, pools and structures like government-backed reinsurance may be needed, as private reinsurers alone may lack capacity for systemic risks. Actuarial expertise will be critical in pricing and managing this systemic liability risk.
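
To illustrate the pooling mechanism, here is a minimal sketch, again with hypothetical figures, of how a single large claim could be split between a mutual pool's retention and a government-backed reinsurance layer:

def allocate_loss(loss: float, pool_retention: float, reinsurance_limit: float):
    # The mutual pool retains losses up to its retention; the government-backed
    # reinsurer covers the excess up to its limit; anything beyond is uncovered.
    pool_share = min(loss, pool_retention)
    reinsured_share = min(max(loss - pool_retention, 0.0), reinsurance_limit)
    uncovered = max(loss - pool_retention - reinsurance_limit, 0.0)
    return pool_share, reinsured_share, uncovered

# Example: a $500M systemic loss against a $100M retention and a $1B reinsurance layer.
print(allocate_loss(500e6, pool_retention=100e6, reinsurance_limit=1e9))
# (100000000.0, 400000000.0, 0.0)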

Overall, public liability insurance for LLM providers can help ensure that those developing and profiting from these technologies internalize the risks they create. But the insurance system must be designed proactively around the public interest, not narrow private interests, to effectively cover potential harms from LLMs gone rogue or misused.

Creating such a public benefit corporation at the scale needed will be challenging, but it is not insurmountable; we can make it happen!

License To Kill U.S.A. Government Authorizes Killing US Citizens Any Time It Wants - https://rumble.com/v4w00i9-license-to-kill-u.s.a.-government-authorizes-killing-us-citizens-any-time-i.html

So Over 100,000+ UN Troops Being Brought In To U.S.A. As Migrant Refugees - https://rumble.com/v4wan5n-so-over-100000-un-troops-being-brought-in-to-u.s.a.-as-migrant-refugees.html

Secret History New World Order USA & UN & All Operational Powers Run By FEMA - https://rumble.com/v4w1me8-secret-history-new-world-order-usa-and-un-and-all-operational-powers-run-by.html

No Oaths Of Office In The Federal Government Today & Link To Top Info Wars Video's - https://rumble.com/v4w7sxh-no-oaths-of-office-in-the-federal-government-today-and-link-to-top-info-war.html

No Oaths Of Office In The Federal Government Today, And Enemies Are Destroying The US Government From Within As Planned. On January 10th Of 1963, Forty-Five Current Communist Goals Were Submitted To The Congressional Record. Number Thirteen Was, “Do Away With Loyalty Oaths.” No One In The U.S. Government Has Signed One.
