The Instigator
Pro (for)
3 Points
The Contender
Con (against)
9 Points

Robots sent to destroy human life on Earth would be more moral than humans are capable of being.

Voting Style: Open
Point System: 7 Point
Started: 5/29/2011
Category: Philosophy
Updated: 5 years ago
Status: Voting Period
Viewed: 1,375 times
Debate No: 16767
Debate Rounds (4)
Comments (7)
Votes (2)




Burden of Proof

The burden of proof will be shared between my opponent and me. In order to win, one must not only refute the other's arguments but also present one's own case in favor of one's respective claims. I will argue the affirmative of the resolution and my opponent will argue the negative.


Robots: metallic beings programmed to rape, murder, torture, and pillage the human species until it is completely eradicated. These robots would also especially target those who are unable to defend themselves, such as babies, children, cripples, and elderly people.

Destroy: any means used to eradicate or hurt the human species, including but not limited to murdering, torturing, raping, pillaging, stabbing, purple nurpling, and other varieties of hurtful deeds.

Human life: any member of the species Homo sapiens, including but not limited to children, toddlers, newborns, fetuses, the elderly, librarians, gays, straights, Asians, libertarians, or any other being who could be considered part of the human species.

More moral: performing moral actions more often than that to which one is being compared.

Morality: conforming to the rules of right conduct.

Other clarifications

In the scenario in which we will be arguing, no divine creator or deity will exist; one reason being that such a being would be unable to help the humans at the time of attack.

My definitions and other clarifications will not be up for debate. By accepting this debate, one necessarily accepts my definitions and other clarifications.


What the heck. I've no idea where Pro is going with this, but I hope it'll be a good debate!

My argument will be simple:

According to the vast majority of accepted accounts of 'right conduct' (I hope Pro will elaborate on this next round, but I reserve the right to challenge his definitions since 'right conduct' was not defined in the first round), 'murdering, torturing, raping, pillaging, stabbing, purple nurpling, and other varieties of hurtful deeds' are considered immoral.

We have three choices:

1. The robots are not moral agents and are thus amoral. Humans have the capability to be moral and refrain from immorality. Humans therefore are capable of being more moral than robots are capable of being.

2. The robots are performing immoral actions, and humans at the very least have the capability to refrain from performing those acts to the same extent or at all, so humans are capable of being more moral than robots.

3. Morality does not exist. I'm not sure about this because R1 seems to assume the existence of morality, but if this is true then neither robots nor humans are any more moral than each other and the resolution is negated by virtue of that fact.
Debate Round No. 1


For my argument I will argue two positions. The first is that the robots I described are acting morally when they rape, pillage, murder, etc. The second is that human beings are incapable of acting morally.

AP 1: Morality is conformity to the right rules of conduct.

This is simply the agreed upon definition of morality. If my opponent wishes to argue over the interpretation of this definition, that would be fine.

AP 2: Being programmed for a specific purpose, robots become moral agents.

To validate this premise, we must look at what a moral agent is. A moral agent is generally defined as a being capable of acting in a right or wrong way. Since robots have a set purpose, they are capable of acting in a right way (killing, raping) or in a wrong way (not killing or raping). Whether they actually have free will over the matter is not important here, because the criterion of moral agency is simply being capable of acting in a right or wrong way. A robot can act in a wrong way simply by malfunctioning or breaking. Moreover, the fact that humans are generally accepted to be capable of acting in a right or wrong way, while the debate over whether they have free will is still ongoing, is testament to free will not being necessary for moral agency.

AP 3: Robots are fulfilling their purpose by killing, raping, etc., and so are acting in accordance with right conduct.

If their purpose is to kill, rape, etc., and they kill, rape, etc., then they are logically fulfilling their purpose.

AC 1: By fulfilling their initial purpose, robots are conforming to the right rules of conduct and so are acting morally. --- From AP 1, 2, and 3.

BP 1: Humans have no objective purpose to follow in terms of right or wrong conduct.

To challenge this premise, my opponent simply needs to provide a normative ethical claim that is able to bypass the is-ought problem, meaning that it is able to go from an 'is' (how the world is) to an 'ought' (how one should act).

BC 1: Humans are incapable of acting in either a right or wrong way.--- From AP 1 and BP 1.

This is the obvious conclusion one would draw when a non-subjective purpose is needed in order to act morally, and there is no non-subjective purpose. Therefore humans are incapable of acting morally. I will go into further detail as the debate progresses and when I see the specific objections Con brings. Vote Pro!



I thank Pro for his interesting round, and will now offer my counterpoints. I will be taking two routes in this debate, both of which I think are sufficient to defeat Pro's argument.

1: Rejection of AP 2.

Oddly, Pro seems to have a massive contradiction in his round. He argues that humans have no objective purpose because of Hume's so-called 'is-ought' gap. Now, I will demonstrate shortly that this gap can be (fairly trivially) overcome, but let me first point out that this argument applies in exactly the same way to Pro's robots! The descriptive ('is') fact that the robots were designed for killing and murdering does not endow them with objective moral standards, any more than the fact that humans happen to have evolved to act in certain ways endows them with objective standards.

Pro has not only failed to provide grounds for thinking his hypothetical robots have objective moral purpose, but in doing so has explicitly contradicted his argument against humans having moral standards. Pro seems to have a very odd understanding of moral objectivity: a being has objective moral facts if it was programmed in some way to perform certain actions. This is clearly false - a printer does not have an objective moral obligation to print just because it was programmed to print.

BP 1: Overcoming the is-ought gap.

Say I define morality as J.S. Mill, the classic political and ethical philosopher, did: a form of utilitarian hedonism, where one ought to act to maximise universal happiness, often with respect to certain rules established to promote general happiness. So we have a stipulative definition for morality. Now here's the trick: how does one establish that one 'ought' to be moral, given that definition? It takes the form of a 'hypothetical imperative', an idea elucidated by Kant [1], which looks like this:

If I want to bring about X, then I ought to do Y.

For example: if I want to keep my car in good condition, I ought to take it to the local mechanic once in a while.

The 'ought' is conditional on us willing a certain end, and once this end is willed it becomes an empirical matter how one ought to bring it about. Therefore, once we have any given account of 'good conduct' in mind, we can overcome Hume's gap like this: if I want to act according to the moral principles stipulated, I ought to work to maximise happiness (or whatever). Now, as to why one would want to do that: that's a different debate, and one which would require us to be careful about what kind of moral system we choose. However, all that Pro has required I do to overcome this point is overcome Hume's 'is-ought' gap and provide an objective moral standard (hedonistic utilitarianism) according to which humans *could* in principle be moral.

To sum up, we have two strong reasons to reject the Pro side of the debate: 1. Pro has provided no grounds for thinking robots have objective moral purpose, and has contradicted his reason for rejecting human centered morality in doing so and 2. He has not provided good reason to believe that humans cannot have objective moral standards.

Thank you for reading. ^^

Debate Round No. 2


Rejection of AP 2

My opponent believes that the is-ought problem is being applied inconsistently: to humans but not to robots. However, the difference between humans and robots in this scenario is that humans were not made for a purpose while the robots were. It is therefore possible for the robots to deviate from what they 'ought' to do, and therefore they can be moral or immoral. As to my opponent's example of a printer, I agree that it has no moral obligation to do what it was programmed to do; however, not doing the thing that was the point of its programming is wrong according to an objective code of what it 'ought' and 'ought not' to do.

BP 1: Overcoming the is-ought gap

My opponent then believes that humans are able to get around the is-ought problem. However, his argument leaves much to be desired, in that its main premise relies on the idea that "If I want to bring about X, then I ought to do Y". His critique of robots being capable of morality was that there was nothing objective about it, but then he builds a human response to the is-ought problem entirely out of subjective feelings? If 'ought' truly is conditional (which it cannot be if we are speaking of 'objective' purpose), then morality is simply a clash of wills. It is an inconsistent philosophy that cannot be applied universally, thus contradicting itself, seeing as it draws heavily on Kant.

Summing up

At the end of his case, Con points out two reasons which he believes will lead voters to reject my argument. They are as follows.

1. Pro has provided no grounds for thinking robots have objective moral purpose, and has contradicted his reason for rejecting human centered morality in doing so

My grounds for objective moral purpose are exactly what separates human and robot morality. Robots are designed to fulfill a certain purpose, and in going against it are doing what they 'ought not' to do. This is why humans are incapable of 'objective' morality. To be objective, something must exist without regard to opinion. My opponent's morality, though, is derived completely from subjective values.

2. He has not provided good reason to believe that humans cannot have objective moral standards.

As I explained above, objective means true without regard to opinion. Humans came about with no regard to an objective purpose, and so any such values one derives are completely subjective.

3 minutes to spare!


I thank my opponent for his response. I will offer my responses to his counterpoints thusly:

On AP 2

Pro points out that the robots were created for a purpose whereas the humans were not. However, this is completely irrelevant; the fact that robots were designed for a particular purpose is a descriptive fact, not a normative one. Pro's argument against humans having objective moral codes was based entirely on Hume's is - ought gap: the observation that people often illicitly move from an 'is' (slavery causes suffering) to an 'ought' (slavery is wrong). Pro falls into his own trap: he moves from an 'is' (robots were designed for killing) to an 'ought' (robots ought to kill). As far as this goes, I agree that this is a valid argument. One cannot move solely from the descriptive fact that robots are programmed to kill to the normative fact that robots ought to kill.

Pro makes a confused distinction between what the printer 'ought' to do and what it has an 'obligation' to do - the very nature of an obligation is that it is something that one ought to do, so this point fails as well. Besides, it can be easily substituted - according to Pro's definition, a printer 'ought' to do what it is programmed to do. This is an obviously false consequence that reveals the failure of Pro's argument.

Overcoming the is - ought gap.

I will respond to Pro's misplaced response shortly - however, it is worth pointing out that even if my account of the is - ought gap fails, Pro still loses this point. For the definition of morality provided was 'conforming to the rules of right conduct'. Once one has an account of right conduct in place, it does not matter whether the is - ought gap can be crossed. For humans would still be 'moral' by conforming to these rules of right conduct and thus would be capable of being more moral than Pro's robots.

However, does his critique of hypothetical imperatives succeed in any case? Not at all, because he misunderstands the nature of my claim. It is not the hypothetical imperative that is supposed to be objective in this case - the hypothetical imperative simply crosses the gap to the objective standard of morality, which was supposed to be Mill's utilitarianism.

Put it this way - facts about which actions produce the most happiness are objective. There are actual, objective answers to questions of the sort 'will this action produce more happiness than suffering?' that do not depend on subjective opinion. Taking this account of right conduct, humans can hypothetically perform actions which produce more happiness than suffering and thus be moral: more moral than the robots.

Now Pro's objection was that we cannot cross the gap from the statement 'X action will produce more happiness than harm' to the statement 'I ought to do X'. Now, this has nothing to do with whether or not humans can be moral - but whether there is any reason they ought to be moral. My claim was that this can be achieved with a hypothetical imperative - which is itself subjective (how could motivation be anything else?) but is used to provide a reason to conform to the objective standard of morality, thus crossing the is - ought gap.
Debate Round No. 3


On AP2

Con, in his refutation of AP 2, does not seem to understand what morality is and thus who is capable of it. Morality is conforming to the right rules of conduct. The robots in this scenario were built for a specific purpose and thus, if they fulfill it correctly, they are conforming to their objective code of conduct and are thus acting morally. The fact that the robots were created for a purpose is both a descriptive and a normative fact in this case. The descriptive fact that they were designed for a specific purpose entails a normative fact (that they ought to kill humans). If they do not kill, then they are not conforming to the right rules of conduct.

Overcoming the is - ought gap

Con argues that while the basis of any hypothetical imperative is subjective, conforming to it could be measured objectively. However, when the basis of a moral code is itself subjective, conforming to it would in any case be grounded in subjectivity. The difference between a hypothetical imperative for humans and a hypothetical imperative for robots is that humans make their own hypothetical imperatives based on their subjective preferences, while the hypothetical imperative set forth for robots is set by those who designed them and so is on a different plane entirely. A printer does not have an 'obligation' to print because it is completely inanimate. The robots described have at least a degree of sentience.


I thank my opponent for his last round, and will now offer my final counterpoints. Pro has not given much to respond to, but I will answer as best I can. The two sections seem to have merged somewhat, so I won't be splitting them up.

On AP2 & Overcoming the is - ought gap

Let me note that Pro has not responded to the point that crossing the is - ought gap is irrelevant - what matters is whether humans can conform to a standard of right conduct, which Pro has tacitly conceded by not responding. This point simply wins me the debate, because it shows humans are *capable* of being more moral than robots, even if I cannot show that they *ought* to be. With that in mind, let's look at what Pro did respond to.

My point has been that Pro has inconsistently applied Hume's is - ought gap to humans but not to the robots. If Pro can show that there is a disparity between the two cases that results in robots crossing the gap but humans not, then he has succeeded. Has he? Not even slightly; in fact, Pro's attempt to do so is laughable, boiling down to bare assertion. He argues that whereas the hypothetical imperative crosses the is - ought gap in the case of robots, it does not in the case of humans. Why does it succeed in the case of robots? He says the creators 'gave' the robots a hypothetical imperative. This is to misunderstand the nature of a hypothetical imperative: but let's first note the issue of whether the people who 'gave' the robots their hypothetical imperative were themselves beings with subjective preferences; I can't think of how they could be anything else, so again Pro falls into his own trap - by his own argument, the standards the robots conform to are subjective.

But does crossing the is - ought gap using a hypothetical imperative render the actual standards of right conduct subjective? No, because the question of why you ought to conform to a standard has no bearing on what the standard actually is. Claiming that the way you come to conform to a standard changes the standard itself is as fallacious as claiming that the path you take to the local grocery store changes the local grocery store. It's a complete non - sequitur.

Finally, the printer and the robots. Pro claims out of the blue that the robots are sentient - well, firstly, whether or not robots even can be sentient is a contentious issue - see the Chinese Room [1] thought experiment. Secondly, Pro has just invented this now - nothing previously entailed that his robots were sentient, and it's usually against the rules of conduct to bring up new arguments in the final round. Besides, he still doesn't address the broader point: that it's absurd to claim that programming endows the thing being programmed with an objective moral standard.

To sum:

1. Pro did not show that crossing the is - ought gap is necessary. Even if I can't show that people ought to conform to rules of right conduct, this does not entail that they are not capable of doing so. Pro dropped this point.

2. Pro has not shown that robots are capable of crossing the is - ought gap, or that humans can't while robots can.

3. Pro has not addressed the point that programming clearly does not entail objective moral obligations.

Vote CON.

Debate Round No. 4
7 comments have been posted on this debate. Showing 1 through 7 records.
Posted by Cliff.Stamp 5 years ago
A Japanese guy married a sim, I don't see why you can't marry that salamander.

The later rounds will be interesting.
Posted by Merda 5 years ago
That'll be the day.
Posted by Kinesis 5 years ago
I go for cartoon animals with devilish grins; that's my fetish, don't you judge me. One day, it will be legal for us to marry when the bigoted religious anti-pokemon conservatives are forced to progress socially.
Posted by Merda 5 years ago
First, how do you get sexy from what looks like a baked out lizard? Second, I know you didn't just try to diss Pac!
Posted by Kinesis 5 years ago
Snivy is way more sexy than the dude who's your avatar Merda.
Posted by headphonegut 5 years ago
I can't accept this :(
Posted by Kinesis 5 years ago
I hate that definition of morality. It tells you nothing. :/
2 votes have been placed for this debate. Showing 1 through 2 records.
Vote Placed by Ore_Ele 5 years ago
Agreed with before the debate: Con (0 points)
Agreed with after the debate: Con (0 points)
Who had better conduct: Tied (1 point)
Had better spelling and grammar: Tied (1 point)
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Con (2 points)
Total points awarded: Pro 0, Con 5
Reasons for voting decision: Con successfully showed how Pro's argument fails both in showing that the robots are moral and in showing that humans are incapable of being moral. Though I think there were different arguments Con could have made, he did a great job taking this apart.
Vote Placed by Cliff.Stamp 5 years ago
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Pro (1 point)
Had better spelling and grammar: Con (1 point)
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Pro (2 points)
Total points awarded: Pro 3, Con 4
Reasons for voting decision: Tim had an interesting take on morality as defined by right conduct, contrasting a designed being with one assembled by random chance. However, Kinesis both threw up defeaters to Tim and argued that humans making choices, i.e., not being a printer, is essential to moral decisions. This was close until the last round, where Tim essentially repeated his opening and made a one-line blurb about sentience. 4:3 for Kinesis.