The Instigator
Pro (for)
0 Points
The Contender
Con (against)
4 Points

Autonomous robots should be given the ability to kill.

Post Voting Period
The voting period for this debate has ended.
After 1 vote, the winner is Con.
Voting Style: Open Point System: 7 Point
Started: 3/27/2015 Category: Science
Updated: 1 year ago Status: Post Voting Period
Viewed: 605 times Debate No: 72455
Debate Rounds (4)
Comments (4)
Votes (1)




As the creator of this debate, I take it upon myself to supply the rules of the debate, its format, and definitions of the terms in the resolution.

~The use of Wikipedia as a source for any argument is not allowed.
~The Pro must have a case for the Con to argue against.

~In Round 1, the Pro may simply state that they accept the debate.
~In Round 2, both the Pro and Con will only present their case.
~In Round 3, the Pro and Con will 1) Attack their opponent's case 2) Defend their own case.
~In Round 4, the Pro and Con will give reasons proving why they should win the debate.

~Autonomous robots - "...intelligent machines capable of performing tasks in the world by themselves, without explicit human control." (
~Should - "used in auxiliary function to express obligation, propriety, or expediency" (
~Ability - "power or capacity to do or act physically, mentally, legally, morally, financially, etc." (
~Kill - "to deprive of life in any manner; cause the death of; slay." (

If you decide to accept this debate, you're a wonderful person! Cannot wait to debate.


Acceptance: I accept the Resolution "Autonomous robots should be given the ability to kill" with no exact limit placed on the level of autonomy or the types of killing.

Debate Round No. 1


Before I state my case, a big thank you to the Con for accepting! Good luck to you.


I affirm: Autonomous robots should be given the ability to kill.

I abide by the current definitions given.

Contention 1: In warfare, the use of autonomous robots is more practical than the use of human soldiers.

Ronald C. Arkin of the Georgia Institute of Technology published a paper titled "The Case for Ethical Autonomy in Unmanned Systems" (1). In his paper, he argues the following for the use of autonomous killer robots: "1. ...Autonomous armed robotic vehicles do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate without reservation by a commanding officer. There is no need for a 'shoot first, ask-questions later' approach. 2. The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess. 3. Unmanned robotic systems can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events... 4. Avoidance of the human psychological problem of 'scenario fulfillment' is possible... This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that only fit their pre-existing belief patterns, a form of premature cognitive closure. Robots need not be vulnerable to such patterns of behavior. 5. They can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real-time... 6. When working in a team of combined human soldiers and autonomous systems as an organic asset, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting infractions that might be observed. This presence alone might possibly lead to a reduction in human ethical infractions."

Contention 2: Autonomous robots will be programmed with an agreeable set of morals.

In 2012, The Economist published an article detailing proposed "rules" governing a robot's capabilities (2). The article states "...autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary... it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules. Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help. Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers... Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts."
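
For illustration only (this sketch is mine, not from The Economist article or either debater), here is roughly what a rule-obeying, log-keeping decision gate of the kind described above could look like in Python; the rule names and the propose_action interface are hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical, predefined rules; each returns True when it vetoes the proposed action.
RULES = [
    ("never_target_noncombatant", lambda act: not act.get("target_is_combatant", False)),
    ("force_must_be_proportionate", lambda act: act.get("force_level", 0) > act.get("threat_level", 0)),
]

decision_log = []  # detailed log so every decision can be explained after the fact

def propose_action(action: dict) -> bool:
    """Approve an action only if no predefined rule vetoes it, and log the reasoning."""
    violated = [name for name, vetoes in RULES if vetoes(action)]
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "violated_rules": violated,
        "approved": not violated,
    }
    decision_log.append(entry)
    return entry["approved"]

# Example: the action is refused, and the log records exactly which rules vetoed it.
print(propose_action({"target_is_combatant": False, "force_level": 3, "threat_level": 1}))
print(decision_log[-1]["violated_rules"])
```

The point of the log is that, after the fact, anyone can see exactly which predefined rule approved or vetoed a given action, which is what the quoted article asks of such systems.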

Contention 3: Autonomous robots will not experience the limitations that human soldiers do on the battlefield.

Daniel P. Sukman of the National Defense University recently reviewed a new book by M. Shane Riza titled "Killing Without Heart" (3). Sukman notes "First and foremost is the absence of the empathy that will always reside in human beings. Robots lack that sense much as psychopaths do. They do not feel guilt or sympathy or any other emotion when taking a life... [Additionally, robots] do not return home after a deployment with post-traumatic stress disorder and beat their wives and children; they do not visit the payday loan shops and gentlemen's clubs often found outside military installations. Financially, robots do not need TRICARE benefits, nor do they receive a pension after 20 years of service."

1 -
2 -
3 -


Preface: Given the nature of the debate my opponent has asked for, I shall reserve Counter-Cases until the Third Round.

Synopsis: The Resolution is "Autonomous robots should be given the ability to kill." Autonomy is a broad spectrum; it does not imply sentience, simply independence. This fits well enough into Pro's definition. "Kill" also covers a massive range, which includes murder, and Pro has not suggested limitations to killing. The use of the word "should" turns this debate into an absolute which extends over all autonomous robots. I note that neither the resolution nor the opening statement describes which type.

Proposition I: BOP and Requirements to Win

There are three theories as to who carries the BOP (burden of proof): whoever is Pro; whoever argues against the status quo; and the instigator. Pro fits all three. In order to win, Pro must show that we should allow, without restriction, even slightly autonomous robots the capacity to kill. Because Pro provides no limits on the types of robots in either the Resolution or the Opening Statement, all robots must be considered part of his case. If even one class of robot is determined to need a "no kill" program, then Pro fails to affirm the resolution. The word "should" shows this to be an absolute (Pro's definition supports this by presenting it as an "obligation"). I need only show that giving all robots the free ability to kill may not be the best option; I need not show that ALL robots need the "no kill" programming.

Case I: The Ability to Hack

Thesis: In this Case I shall show that the ability to hack an autonomous machine (either directly or over a network) in order to alter its aggression can lead to dangerous effects.

Rationalization: A hacker is a person who illegally gains access to and tampers with software. {1} These people can hack either directly or indirectly, through networks. {2} Robots, even autonomous ones, will always be vulnerable to such attacks, especially if they are "cheap" mass-produced variants. Simply put, someone who can hack into a machine to change its programming could easily cause the machine to become violent. Now, Pro may try to say that a hacker can do that regardless of programming preventing killing. Ultimately, however, it would take only the very best to alter hard-coded programming. Hard-coded means that the programming is built into the very OS and source code, {3} so any alteration to it is incredibly difficult, and even the slightest mistake could crash the OS. A blue-screened machine is better than a killing machine. If someone could hack a machine to make it violent (not a far-fetched situation), it could prove catastrophic. No matter what other morals are built into the machine, so long as "do not kill" is not among them, the potential to kill is always an option.
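
As a purely illustrative sketch (my own, not from Con's sources) of what "hard-coded" can mean in practice, the safety check below lives in the lowest-level actuator code rather than in the reprogrammable planner; the class names and force limit are invented for the example:

```python
class Actuator:
    """Lowest-level motor/weapon interface. The safety check lives here,
    beneath any configurable behaviour, so ordinary reprogramming cannot remove it."""

    MAX_SAFE_FORCE = 50  # hypothetical hard limit, fixed at build time

    def apply_force(self, newtons: float) -> None:
        # Hard-coded guard: refuse any command above the safe limit,
        # no matter what the higher-level (hackable) planner requested.
        if newtons > self.MAX_SAFE_FORCE:
            raise PermissionError("lethal-level force refused by hard-coded guard")
        print(f"applying {newtons} N")


class Planner:
    """Higher-level, network-updatable logic (the part a hacker might alter)."""

    def __init__(self, actuator: Actuator) -> None:
        self.actuator = actuator

    def act(self, requested_force: float) -> None:
        try:
            self.actuator.apply_force(requested_force)
        except PermissionError as err:
            print(f"command rejected: {err}")


Planner(Actuator()).act(200)  # even a compromised planner cannot exceed the guard
```

Under this assumption, a hacker who rewrites the planner still cannot push a command past the guard without replacing the low-level code itself, which is the difficult, crash-prone alteration described above.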

Conclusion: By not hard-coding the inability to hurt or kill, we risk even mildly skilled hackers turning otherwise safe machines into dangerous weapons. That is incredibly dangerous, and should be prevented in any way possible.

Case II: We Don’t Even Want Humans to Kill

Thesis: In this Case I will demonstrate that we strive to prevent each other from being able to kill. If we don't want humans to be able to kill freely, we shouldn't give machines the ability to do it.

Rationalization: This is the strongest point in my case. We strive to prevent killing by other humans. We cannot code humans not to kill, as we could machines, but we do everything we can. {4/5/6} We are so against killing each other that we build it into our very religions. {7/8} We put down animals who have so much as harmed a human (never mind killed one). {9/10/11} We work so hard to prevent the deaths of humans that we have built vast social constructs to prevent it, and many people believe we aren't doing enough, that we should ban weapons to limit our ability to kill. {12/13/14} Why then would we put ourselves in danger by presenting a machine with the ability to do what we strive to prevent ourselves from doing? It is irresponsible, to say the least. Now sure, we allow certain exceptions for ourselves: we can kill in self-defense (although we prefer less lethal means). But the fact is that we rarely extend such benefits to non-humans. Why give that benefit to a machine which, even at its most advanced, we aren't even certain is truly sentient? {15/16/17} And again, the resolution covers not just sentient machines, but non-sentient autonomy as well.

Conclusion: I have put a major dent in Pro's position by showing how counter-productive it is to give machines the ability to kill when we put our heart and soul into preventing ourselves from killing.

Case III: Dangers of Glitching

Thesis: In this Case, I'll show that machines are susceptible to glitching, and that without deeply implanted, hard-coded rules against killing, major problems can arise.

Rationalization: Let's be clear: all machines glitch. Glitches are malfunctions which cause a machine to act in a way not originally intended. {17} A glitch can easily cause massive damage, {18} and can be very dangerous when it strikes at the worst times and places. {19} A glitch can be caused by any of multiple things, and can affect almost any form of programming. What happens when an autonomous robot glitches? Things can go wrong very fast, especially when it has no true programming against killing. A mild glitch could easily make otherwise harmless things appear dangerous to the machine, causing it to act in a way it normally would not. Now, one might mention that a glitch could also knock out the "No Kill" switch itself, but why increase the danger? And if the rule is truly hard-coded, anything which damages that protocol (especially a random glitch) will likely compromise the OS in general, causing a system failure. We should strive to protect ourselves from the dangers of a malfunctioning robot, even if it only slightly increases our safety.
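
To illustrate the "fail closed" idea behind such a hard-coded safeguard (this sketch is mine, not from Con's sources; the policy text and checksum approach are hypothetical), a system can verify its safety policy on every control step and halt if a glitch or tampering has corrupted it:

```python
import hashlib

# Hypothetical safety policy, with a checksum recorded when the firmware is built.
SAFETY_POLICY = b"DO NOT APPLY LETHAL FORCE"
POLICY_DIGEST = hashlib.sha256(SAFETY_POLICY).hexdigest()

def policy_intact(current_policy: bytes) -> bool:
    """Return True only if the in-memory policy still matches the build-time checksum."""
    return hashlib.sha256(current_policy).hexdigest() == POLICY_DIGEST

def control_step(current_policy: bytes) -> str:
    # Fail closed: if a glitch or tampering has corrupted the policy,
    # refuse to act at all rather than act without the safeguard.
    if not policy_intact(current_policy):
        return "HALT: safety policy corrupted, shutting down"
    return "OK: proceeding under safety policy"

print(control_step(SAFETY_POLICY))                   # OK
print(control_step(b"DO WHATEVER THE GLITCH SAYS"))  # HALT
```

The design choice here matches the paragraph above: a corrupted safeguard produces a shutdown (a "blue-screened machine") rather than a machine that keeps acting without its "no kill" rule.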

Conclusion: Having reminded everyone of the chance of problems forming in a machine, I argue that we only put ourselves at risk by hesitating to implement proper safety protocols, of which a "Do Not Kill" program would be the most effective.

Case IV: Some Does Not Mean All

Thesis: In this Case, I shall quickly point out that giving most machines programming against killing does not mean having to give that programming to all of them.

Rationalization: Pro never mentions any specific exceptions to his resolution. Therefore he must show that all autonomous machines should be given the ability to kill. I, however, need not prove that all of them require the "do not kill" option: any program against any form of killing conflicts with his resolution. Pro also does not state in the Opening Round any exceptions as to what kind of killing (thus murder is included); in fact his definitions state killing in "any manner". No matter what kind of machine we are talking about, we should, on principle, include programs against most forms of killing. Even killing machines (such as those built for war) must have programs against friendly fire. I'll point out that his source for "autonomous robots" mentions Roombas being included. A "no kill" program is probably best to include if even that qualifies.

Conclusion: I show here (in a quick iteration) that even small restrictions against killing, in any substantial share of autonomous machines, would negate Pro's resolution. Therefore he must somehow justify it.

Closing Statement: In these four Cases, I have shown multiple flaws in Pro's position, including how broad a spectrum his definitions reach, and the issues with allowing a machine with few qualifications for sentience to be able to kill, for any reason. I leave Pro the momentous task of somehow countering these matters. As per Pro's instructions, my Counter-Cases will be made next round. Best of luck to Pro. Your turn.



Debate Round No. 2


Hey Con, I'm giving the floor over to you. Please post what you have to say.


Preface: In accordance with the rules set by Pro, I will post only counters, not new Cases. In turn, I remind voters that, given the structure, I can only counter his Cases, not his rebuttals (as he has no chance to counter mine). Brief summaries against his rebuttals may be permitted in the final round.

Counter-Case I: “In warfare, the use of autonomous robots...” And “Autonomous robots will not experience limitations”

Summarization: Pro spends these Cases arguing that autonomous robots make better soldiers on the battlefield, mentioning their willingness to self-sacrifice and their lack of human limitations. I will deal with both Contentions 1 and 3 here, as they are functionally the same matter.

Counter: Pro's entire Case is a quote. This makes the entire display an appeal to authority, {1} and in truth plagiarism. {2} Pro isn't arguing anything; Ronald C. Arkin is. But pushing that aside, Pro has made the fatal mistake of talking only about autonomous war machines, whereas the Resolution and Opening Statement (ROS) never mention that this is only for military purposes. His ROS simply states autonomous robots, which includes civilian models. But again, moving on, Pro is discussing the value of autonomous machines on the battlefield.

I will, however, counter all of these through the use of an alternative. Instead of purely autonomous machines, I suggest "semi-autonomous" remote machines, similar to UAVs. {3} These machines could still maintain many of the benefits Pro cites while still requiring user control. An example is the Foster-Miller TALON. {4} These machines bear almost every benefit of a machine (no tiredness and such), while still maintaining the moral capacity of a human.

Now, Pro actually tries to present the lack of empathy and such as a good thing in war. However, he ended his first Case by saying that these machines could lead to a decrease in "ethical infractions". This presents a problem: machines which have no ethics are the most likely to commit ethical infractions, because they may view them as the "smarter" option. This could lead to the killing of many innocents simply because the machine considers them a waste of time. Unless, of course, the machines are programmed with morals, which in turn contradicts Pro's case about the lack of empathy.

Now I'll end this by pointing out what I've stated in my Cases: autonomous machines can be hacked or suffer glitches. I'll quickly note that complex machines are more susceptible to glitches than simpler ones, and Pro's autonomous war machines would require some of the most complex OSs in the world. Certainly we could "perfect" them, but in the meantime many people could die due to glitches. But the true problem is hacking. {5/6/7} Now, some say they have developed "hack-proof" UAVs, but this is simply an attempt to garner favour. Any machine bound to a network can be hacked, and if it isn't attached to one, then it is completely free from our control. A machine which is hacked could easily give the enemy information about troop locations (which it would store), technology (used in the machine), and could be turned against us. It wouldn't be far-fetched for a single gap in the machines' defenses to allow entire units to be hacked at once, depending on whether they operate on a unit basis (which is the smartest system). And before anyone argues that firewalls and such will be advanced enough, I have already given sources showing UAVs being hacked.

I lied; one more concept. I hesitated to use it, but I figure a rusty nail can still kill. There is the problem that a machine with enough autonomy may be able to shrug off human control. {8/9} The only true way to ensure something like this does not happen is to not make it truly autonomous, but rather (as I've stated) remote-controlled. I'll point out that remote control still leaves room for auto-aiming, such as with tanks, while requiring the "remote pilot" to actually "pull the trigger", just like those horrible people who use auto-aiming in multiplayer games. Also, Pro's last source is a book review.
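
A minimal sketch (my own illustration, not from any cited source) of the human-in-the-loop arrangement described above: the machine may compute the aim, but the release of force is gated on the remote pilot's explicit confirmation. All names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TargetSolution:
    bearing_deg: float
    range_m: float

def auto_aim(sensor_bearing: float, sensor_range: float) -> TargetSolution:
    """The machine may compute the firing solution on its own (the 'auto-aim' part)."""
    return TargetSolution(bearing_deg=sensor_bearing, range_m=sensor_range)

def fire(solution: TargetSolution, operator_confirmed: bool) -> str:
    # The trigger itself is gated on an explicit human decision;
    # without it the weapon never releases, however good the solution is.
    if not operator_confirmed:
        return "holding fire: awaiting remote pilot confirmation"
    return f"firing on bearing {solution.bearing_deg}, range {solution.range_m} m"

solution = auto_aim(sensor_bearing=112.5, sensor_range=850.0)
print(fire(solution, operator_confirmed=False))  # the machine alone cannot fire
print(fire(solution, operator_confirmed=True))   # only the human "pulls the trigger"
```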

Conclusion: Simply enough, these Counters show that the use of RC machines in warfare carries multiple benefits over highly autonomous machines. I also pointed out that Pro's ROS did not limit the resolution to war machines: even if Pro wins on warfare, he still needs to show the same for civilian models.










9] Navy-Funded Research:

Counter-Case II: Programmed with an Agreeable Set of Morals

Summarization: Pro attempts to explain that we can simply program them with morals.

Counter: Again, Pro uses a quote to make his entire case. Moving on, Pro tries to say we can program morals. This, however, disaffirms the resolution. If they are programmed with morals against killing, then they are programmed not to kill (as Pro's definition states, "to deprive of life in any manner", and Pro accepted my pointing that out). This means that, in order to maintain his stance, he must mean programming them with morals other than not killing, and hoping those morals will somehow suffice to prevent death. If the morals are placed too tightly, it counts as being programmed not to kill; not tightly enough, and they may lead to the death of others. If they receive tightly controlled morals, then we risk rigidity in morality hampering them. If they are programmed with morals that can shift based on events around them, then we risk them adjusting their morals in a way that allows for killing. Of course, denying them a morality against killing would require an incredibly strong justification, and ensuring that Pro affirms the resolution is not enough of a reason.

I'll reiterate here that, as I've stated before, there is the question of whether any number of machines is worth a human life. Should we give them the right to lethal self-defense? As I've stated, autonomous does not mean sentient, and a machine can be programmed for non-lethal defense; in fact this is easier for a machine than it is for a human. In every non-military situation where violence may be needed, a machine can suffice with a non-lethal response. For what reason would we ever risk our friends' and families' lives just so that we can give machines the right to kill?
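
As a small illustration of a non-lethal-only response policy (my own sketch, not from any source in this debate; the response list and threat scale are invented), the set of available options can simply exclude lethal force altogether:

```python
# Hypothetical escalation ladder for a civilian machine: the list of available
# responses simply contains no lethal option, so killing can never be selected.
RESPONSES = ["verbal warning", "retreat", "alert human operator", "physical restraint"]

def choose_response(threat_level: int) -> str:
    """Pick the mildest response matching the threat; never escalates past restraint."""
    index = min(max(threat_level, 0), len(RESPONSES) - 1)
    return RESPONSES[index]

print(choose_response(0))   # verbal warning
print(choose_response(99))  # physical restraint: the ceiling, never lethal force
```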

Conclusion: I've shown that moral programming will either disaffirm the resolution, or it will need to exclude a "no killing" rule so as to allow killing, with morals loose enough to ensure killing remains an option. Either Pro has disaffirmed the resolution, or he is asking for the irresponsible action of programming morals that don't include a moral against taking a life.

Epilogue: Simply put, all of Pro's cases are quotes. Also simply put, Pro has mostly worked to make the case for military machines, and I've shown reasons why there are better alternatives, as well as reminding Pro that this debate is not just about military machines, but also civilian machines. Pro tries to bring up morals, but instead disaffirms his own resolution. I also remind Pro that in every non-military situation, killing is better substituted with non-lethal action. Ultimately, Pro fails to show why we would give all machines the ability to kill, and so fails to affirm his resolution for all autonomous machines. Good luck to Pro in the Final Round.


Debate Round No. 3


gavinjames10 forfeited this round.


Epilogue: My opponent has effectively forfeited this debate. I have refuted all his Cases and shown that they do not cover all machines. My Cases have been dropped, and in the end my opponent has failed to offer any explanation for why he should win.

Vote Con.


Debate Round No. 4
4 comments have been posted on this debate. Showing 1 through 4 records.
Posted by Unitomic 1 year ago
Pro, you cannot use just quotes to build your cases. If you need to, paraphrase it. But Plagiarism is frowned upon.
Posted by Unitomic 1 year ago
Given the structure of this debate, I have everything I need to write my next round. Needless to say, I'll be ready to post as soon as he does his round.
Posted by Arcanas 1 year ago
Why not change it to they should be able to kill and put yourself as pro? What's the point in having a double negative?
Posted by Ragnar 1 year ago
Was hoping you were pro. Deathbots rule!
1 vote has been placed for this debate.
Vote Placed by kingkd 1 year ago
Agreed with before the debate: Tie (0 points)
Agreed with after the debate: Tie (0 points)
Who had better conduct: Con (1 point)
Had better spelling and grammar: Tie (1 point)
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Tie (2 points)
Total points awarded: Pro 0, Con 4
Reasons for voting decision: FF