Saturday, May 02, 2015

The Rise of the Machines





I am extremely pleased to announce my first guest blogger for the site. The following is an essay by C2C Robert Graves Jr., written for his philosophy class at the Air Force Academy. To say that I'm proud of the boy would be an understatement.



C2C Robert Graves
The Rise of the Machines

Intro

                Noel Sharkey's "Saying 'No!' to Lethal Autonomous Targeting" argues that the move from 'man-in-the-loop' to 'man-on-the-loop' is a dangerous one, and that it will raise a growing number of moral issues. Sharkey claims that the current usage of remotely piloted planes and drones indicates that future robotic platforms could be misused by extending the range of legally questionable, targeted killings by security and intelligence forces. I propose that lethal autonomous unmanned systems could eventually perform more ethically on the battlefield than human soldiers, and that their development should not be stopped. If these unmanned systems ever achieve better-than-human performance, the result may be a decrease in civilian casualties, which makes them worth pursuing. In this paper, I will first summarize Sharkey's argument against the advancement of Lethal Autonomous Systems. Secondly, I will present three objections to Sharkey's article. Finally, I will present two practical implications of my objections.

Summary of article

                Sharkey builds his argument against the advancement of Lethal Autonomous Systems throughout the article. He does this by first explaining the United States Military's trend toward Lethal Autonomous Systems. Next, Sharkey observes that 'man-in-the-loop' systems have proven to be a step toward 'man-on-the-loop' systems, and eventually toward Lethal Autonomous Systems. By first examining the ethical dilemmas of 'Man-in-the-Loop' and 'Man-on-the-Loop' systems, Sharkey anticipates future ethical debates about Lethal Autonomous Systems and argues that they should not be pursued as viable military assets. Although Sharkey acknowledges the obvious military advantages of implementing Lethal Autonomous Systems, he argues that these should not be exploited, because of the ethical concerns already apparent in 'Man-in-the-Loop' and 'Man-on-the-Loop' systems. The following reconstruction reproduces the argument that the use of Lethal Autonomous Systems should not be allowed in warfare.

                1) The United States Military has been developing ‘Man-on-the-Loop’ Systems (p)

                2) ‘Man-on-the-Loop’ Systems are impractical without the development of Autonomous Systems (p)

                3) With the United States' current goals, Autonomous Systems are inevitable (1, 2)

                4) ‘Man-in-the-Loop’ Systems remove two obstacles of war that previously prevented killing without considering the full consequences (p)

                5) Problems presented by ‘Man-in-the-Loop’ Systems will be exacerbated by ‘Man-on-the-Loop’ Systems. (p)

                6) The alleged moral disengagement by remote pilots will only be exacerbated by the use of autonomous robots (4, 5)

                7) Autonomous Systems cannot implement the principle of discrimination (p)

                8) Autonomous Systems cannot implement the principle of proportionality (p)

                9) The international community needs to address the difficult legal and moral issues now, before the current mass proliferation of development reaches fruition (3, 6, 7, 8)

                Since their creation, weapons have evolved to enable killing from ever greater distances. This is apparent in the evolution from rocks to the spear, to the bow and arrow, to cannons, all the way to the long-range bombers of WWII. Today, militaries are separating their personnel from the battlefield through the use of robotics and Unmanned Aerial Vehicles, or UAVs. There has been an obvious push for more robotics and UAVs in the past ten years: thousands of robots were used in the Iraq and Afghanistan conflicts, compared to only 150 in 2004 (Sharkey). The undisputed success of UAVs for gathering intelligence has created an insatiable military demand for them, a demand that has spread to over 40 countries that either produce their own systems or buy them from others. These systems are still in their infancy, and their true capability, and what they will evolve into, is not yet known. However, it can be predicted that militaries will want to use robotics and UAVs as force multipliers, allowing one individual to control multiple systems, or even reaching the point where systems make decisions for themselves. It is at this point that a system is considered autonomous. These robots will not be something out of Terminator; instead, they will gather data from their sensors and then make decisions, based on an algorithm, to deliver deadly force. There is an important distinction between how 'autonomous' is used in philosophy and politics and how it is used in this sense.
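
                To make this narrow sense of 'autonomous' concrete, consider the following minimal sketch of a sense-decide-act loop in which no human is consulted before lethal action. Every name, class, and threshold here is a hypothetical illustration of the concept, not a description of any real system.

```python
# A minimal sketch of "autonomous" in the narrow sense used above:
# a sense-decide-act loop in which no human is consulted before acting.
# All names and thresholds are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Track:
    """A candidate target as reported by the platform's sensors."""
    classifier_confidence: float   # 0.0-1.0, from onboard perception
    inside_engagement_zone: bool   # within pre-approved geographic bounds

def decide(track: Track, threshold: float = 0.95) -> str:
    """Return an action based only on sensor data and a fixed rule.

    The philosophical point: every judgment a human would make is
    compressed in advance into numbers and thresholds.
    """
    if track.inside_engagement_zone and track.classifier_confidence >= threshold:
        return "engage"
    return "hold"

# In a man-IN-the-loop system, "engage" would instead be a request routed
# to a human operator; in a man-ON-the-loop system, a human merely monitors
# and may veto; in an autonomous system, the machine acts on its own.
```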

                There are four reasons why a military would prefer an autonomous system over one controlled by a human. Sharkey states these as "(i) remote operated systems are more expensive to manufacture and require many support personnel to run them; (ii) it is possible to jam either the satellite or radio link or take control of the system. (iii) one of the military goals is to use robots as force multipliers so that one human can be a nexus for initiating a large-scale robot attack from the ground and the air; (iv) the delay time in remote piloting a craft via satellite (approximately 1.5 seconds) means that it could not be used for interactive combat with another aircraft." (Sharkey) These limitations of 'Man-in-the-Loop' and 'Man-on-the-Loop' systems make the likelihood of autonomous systems entering the battlefield ever higher.
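
                To see why reason (iv) matters, consider a back-of-the-envelope calculation of how far a fight moves during that delay. Only the 1.5-second figure comes from Sharkey; the airspeeds below are my own assumptions for illustration.

```python
# Back-of-the-envelope illustration of reason (iv): how far the fight
# moves during the ~1.5 s satellite control delay Sharkey cites.
# The airspeeds are assumptions for illustration, not from the text.

delay_s = 1.5             # round-trip control latency via satellite (Sharkey)
uav_speed_ms = 120        # assumed UAV cruise speed, roughly Predator-class
fighter_speed_ms = 300    # assumed opposing fighter speed, high subsonic

closing_speed = uav_speed_ms + fighter_speed_ms   # head-on worst case
gap_closed_m = closing_speed * delay_s

print(f"Head-on, the aircraft close {gap_closed_m:.0f} m "
      f"before the operator's command even arrives.")
# -> Head-on, the aircraft close 630 m before the operator's command even arrives.
```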

Man-in-the-loop: problems

                A 'Man-in-the-Loop' system, such as a UAV, is one in which a human is consulted on every action. The United States has led the field with its armed drones. The MQ-1 Predator, equipped with two Hellfire missiles, and its brother the MQ-9 Reaper, which can be equipped with up to 14 Hellfire missiles, are controlled by the 432nd Air Expeditionary Wing out of Creech Air Force Base in the Nevada desert. Since the first Predator took flight in 2001, there has been a sharp increase in the number of pilots who operate these systems; in 2009, the number of remote pilot operators trained outnumbered the number of conventional pilots trained (Sharkey).

                Sharkey argues that the use of these UAVs has definitely "alleviated one of the two fundamental obstacles that war fighters must face – fear of being killed" and possibly a second. With the fear of being killed taken away, these airmen have no reason to retreat, which makes them much more dangerous, especially because a military force is most vulnerable when retreating. The second element that Sharkey argues is removed by UAV use is resistance to killing. It was discovered after WWII that most men are not ready to kill: analysis of both hit rates and interviews with soldiers after major battles showed that, on the ground, soldiers found killing to be a difficult task. Sharkey points out, however, that operating UAVs encourages a 'PlayStation' mentality. These operators rarely see the faces of those they have killed; they view them through a computer screen, much as in a video game. Many airmen flying these drones have found it unexpectedly easy to kill someone with a UAV. By separating the operator from the battle, Sharkey points out, the operator does not consider the morality of each kill. Instead, it becomes a simple job that the operator goes to every day, after which he or she returns home for dinner with the family. In conclusion, Sharkey claims that "developing technologies of this sort also have the potential to provide for the creation of moral buffers that allow humans to act without adequately considering the consequences." This applies to autonomous systems as well as to those controlled by humans: by removing the elements that make it difficult to kill another human being, we create the possibility of acting without fully understanding the results.

Man-on-the-loop

                "The most recent United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 opens the strategy for a staged move from current remote piloted systems to fully autonomous systems." (Sharkey) There will be a transition from drones that are controlled by operators at all times to drones that handle landing, take-off, and refueling themselves. As more advancements are made, humans will no longer be "in the loop," consulted for every move; instead they will be "on the loop." "On the loop" means that humans will monitor the execution of certain decisions while the drone's AI carries out those decisions, within legal and policy constraints, without human input. Sharkey raises one strong issue: a human will not be able to make every decision to kill. As pointed out before, commanding these drones remotely is slow and does not work well in a combat situation. Eventually, for these drones to be effective in combat, they will need to decide for themselves whether to take action, essentially becoming autonomous.

How they relate

                Sharkey has shown that there is an obvious trend towards Lethal Autonomous Systems. He has also shown that UAVs, while still controlled by humans, remove two of the fundamental obstacles to war fighting: fear of death and resistance to killing. With these two elements removed, airmen are less averse to killing the enemy without considering the morality of each kill. Finally, Sharkey has shown that the transition from 'Man-on-the-Loop' to Autonomous Systems is inevitable, because 'Man-on-the-Loop' is not practical without that transition. In his conclusion, Sharkey explains that this is bad because we cannot trust Autonomous Systems with the decision to kill another human.

                First, Sharkey relies on the principle of discrimination, arguing that no Autonomous System has the capability to distinguish between a civilian and an insurgent. This issue becomes even more important when we are fighting a force that does not wear uniforms and hides among the civilian population. Sharkey acknowledges the extensive array of sensors, cameras, and facial-recognition programs that a drone can utilize, but these can be rendered useless by a simple ski mask or hooded jacket. Sharkey argues, "In a war with non-uniformed combatants, knowing who to kill would have to be based on situational awareness and on having human understanding of other people's intentions and their likely behavior. In other words, human inference is required. Humans understand one another in a way that machines cannot. Cues can be very subtle and there are an infinite number of circumstances where lethal force is inappropriate. Just think of children being forced to carry empty rifles or of insurgents burying their dead."

Second, Sharkey explains the Principle of Proportionality, arguing that an Autonomous System cannot perform the subjective human balancing act required to make proportionality decisions. There is no way to assign an insurgent a numerical value that could be weighed against a number of civilian casualties. When a commander makes a decision, he must first weigh all of the options and then decide on the best course of action. Such decisions cannot be reduced to an algorithm and therefore cannot be left to a computer. While humans do make errors, Sharkey claims that humans, unlike machines, can be held accountable: it would be impossible to blame a drone for an action performed unethically.
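
                To make explicit what Sharkey says cannot be mechanized, here is a deliberately naive sketch of what an algorithmic proportionality rule would have to look like. Every name and constant is a hypothetical stand-in; the point is that the exchange rate the rule hard-codes has no principled source.

```python
# A deliberately naive sketch of an algorithmic proportionality
# "calculation." Every constant is a hypothetical stand-in; Sharkey's
# point is precisely that these numbers have no principled source.

def proportionality_ok(military_value: float,
                       expected_civilian_casualties: int,
                       casualty_weight: float = 10.0) -> bool:
    """Approve a strike iff military value 'outweighs' expected harm.

    The contested step is casualty_weight: it asserts a fixed exchange
    rate between military advantage and civilian life, which is exactly
    the subjective balancing act Sharkey says cannot be mechanized.
    """
    return military_value > casualty_weight * expected_civilian_casualties
```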

Objections

                Sharkey's argument that the moral issues of Autonomous Systems need to be addressed is sound. However, I would like to raise objections to three of his points. First, I will address the claim that the moral disengagement of remote pilots will only be exacerbated by the use of autonomous robots. Secondly, I will oppose Sharkey's point that 'Man-on-the-Loop' Systems are impractical without the development of Autonomous Systems. And finally, I will object to his claim that Autonomous Systems cannot implement the principle of discrimination. After objecting to these three claims, I will present two practical implications of my objections to the article.

Sharkey claims that by removing two of the obstacles common to all warfighters, the use of UAVs prevents the implications of each kill from being fully considered. It is true that, for the first time in history, the fighter does not have to fear death, and that the resistance to killing, while perhaps not completely gone, has diminished. But this does not mean that humans will act "without adequately considering the consequences." (Sharkey) On the contrary, now that these obstacles have been removed, specifically the fear of death, the warfighter can focus more on the implications of an act without being distracted by instinct. These UAVs do not need to have self-preservation as their foremost drive, as humans do. They are able to act in a self-sacrificing manner without reservation, carrying out the commander's intent without distraction. The fear of death does not instill in the warfighter a sense of what is right or of what the consequences of an action will be. By removing this obstacle, the 'Man-on-the-Loop' or even Autonomous System will be able to adhere to the rules of engagement with greater precision. With these obstacles gone, there will no longer be a need for a 'shoot first, ask questions later' approach.

                My second objection is to Sharkey's claim that the development of 'Man-on-the-Loop' Systems will inevitably lead to Autonomous Systems. Sharkey supports this claim by explaining that the time delay between operator and system is too long for practical employment of these UAVs, especially when fighting other aircraft. However, there is currently no case in which these aircraft fight other aircraft. UAVs are used for reconnaissance and the destruction of ground targets, and while ground targets may be in moving cars, the time delay does not impede the UAV's mission to destroy them. UAVs may not be able to destroy other aircraft in a traditional dogfight, but the likelihood of a UAV getting into a dogfight is slim. The creation of 'Man-on-the-Loop' Systems therefore does not imply that there will also be Autonomous Systems. An Autonomous System that kills without consulting a human would not be an improvement over 'Man-on-the-Loop' Systems, because the problems it would solve are not critical factors. The implementation of an Autonomous System is simply not needed in today's world or in the near future.

                Finally, I object to Sharkey's claim that an Autonomous System would not be able to implement the principle of discrimination. Sharkey makes this argument by saying that an Autonomous System could not differentiate between "children being forced to carry empty rifles" (Sharkey) and insurgents. But these systems can gather far more information than any human possibly could. "This data can arise from multiple remote sensors and intelligence (including human) sources, as part of the US Army's network-centric warfare concept and the concurrent development of the Global Information Grid." (Arkin) While it is true that no intelligence is perfect and that mistakes will be made, these machines will be able to weigh all of the available intelligence and form conclusions that would be impossible for humans. A simple ski mask or hooded sweatshirt would deter a human just as much as it would an Autonomous System. Furthermore, an Autonomous System carries none of the preconceived biases that a human is subject to; the common mistake of profiling a subject based on race would not affect these systems.
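
                As a rough illustration of this fusion idea, the sketch below combines several independent sensor confidences into a single estimate using naive Bayesian (log-odds) pooling. The assumption of sensor independence and all the probabilities are my own simplifications, not anything from Sharkey or Arkin.

```python
# A minimal sketch of multi-source fusion: combining independent sensor
# reports into one estimate that a combatant is present. Assumes
# (unrealistically) independent sensors; all probabilities are hypothetical.

import math

def fuse_log_odds(sensor_probs: list[float], prior: float = 0.5) -> float:
    """Naive Bayes fusion: combine per-sensor probabilities in log-odds."""
    logit = lambda p: math.log(p / (1 - p))
    total = logit(prior) + sum(logit(p) - logit(prior) for p in sensor_probs)
    return 1 / (1 + math.exp(-total))

# Three sensors that are each only 70% sure are jointly ~93% sure,
# which is the sense in which a machine can weigh more evidence at
# once than a single human observer.
print(fuse_log_odds([0.7, 0.7, 0.7]))  # ~0.927
```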

Implications

                While Sharkey's argument has some flaws in its premises, his underlying assumption that UAVs will continue to change is correct. These systems, whether or not they evolve into fully Lethal Autonomous Systems, will change the way our military operates. They have removed the warfighter from the battlefield and, hopefully, lowered casualties, both civilian and military. If these systems are allowed to progress, their effectiveness will only improve. If they achieve the desired outcome of localizing the destruction of war to only those who attempt to destroy peace, they will effectively increase the overall happiness both of the civilian population affected by the war and of the military forces attempting to restore peace. As John Stuart Mill explains in Utilitarianism, "The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure." (Mill) These systems should be allowed to progress, not without scrutiny, but progress all the same, because they are an attempt to limit the destruction caused by war.

                While the hope is that these systems will promote net happiness by reducing the pain caused by war, that can only be achieved if our military leaders employ them correctly and morally. As this tool is perfected, more responsibility will be placed on our leaders to use it according to a moral code that benefits the mission in particular and the country at large. No longer will our leaders be able to provide only a mission goal and leave the implementation to the interpretation of their subordinates; what is morally correct will now be directly determined by the leaders in control of these systems. Following established decision matrices, such as those described by Dr. Jensen in "Hard Moral Choices in the Military," will help guide these decisions. It is imperative, however, that our leaders be taught the application of ethics to help them when facing these decisions.



Citations
Arkin, Ronald. "The Case for Ethical Autonomy in Unmanned Systems." Journal of Military Ethics 9.4 (2010): 332-41. Taylor and Francis Online. Web. 1 May 2015. <http://www.tandfonline.com/doi/full/10.1080/15027570.2010.536402>.

Jensen, Mark N. "Hard Moral Choices in the Military." Journal of Military Ethics 12.4 (2013): 341-56. DOI:10.1080/15027570.2013.869897.

Mill, John Stuart. Utilitarianism. Kindle Edition, 2012. p. 11.

Sharkey, Noel. "Saying 'No!' to Lethal Autonomous Targeting." Journal of Military Ethics 9.4 (2010): 369-83. Taylor and Francis Online. Web. 1 May 2015. <http://www.tandfonline.com/doi/full/10.1080/15027570.2010.537903>.


               




