The Instigator
Pro (for)
0 Points
The Contender
Con (against)
3 Points

Should Artificially Intelligent Robot/Androids Have Rights?

Post Voting Period
The voting period for this debate has ended.
after 1 vote the winner is...
Voting Style: Open Point System: 7 Point
Started: 3/17/2014 Category: Technology
Updated: 2 years ago Status: Post Voting Period
Viewed: 2,737 times Debate No: 49308
Debate Rounds (3)
Comments (2)
Votes (1)




Artificially Intelligent Robot/Androids should have rights, just as every sentient being should have the security of rights, including, but not limited to, liberty, freedom from harm, choice of fate, etc.


Hello! First off, may I just congratulate you on picking a fantastic debating topic! This is very intriguing, and thankfully not cliché. Good luck!

My first point is this: Androids aren't living, therefore they shouldn't receive rights.

Artificial Intelligence Units/Androids have no memories, being that they are artificially created. They are not raised in a family (at least not at this point in time), they do not have the capability to form relationships with other life forms, and they do not receive life experiences. They cannot even be defined as living. The fact is, even if you had all the computations and all the programming in the world, Androids still couldn't genuinely feel emotion. Which, let's face it, is one of the deciding factors between a living thing and a non-living thing (other than the obvious biological facts). Now, I know there is some speculation about tests in which Androids were created to see if one could feel emotion. These tests, however, ultimately ended in failure. They were programmed to feel, which arguably isn't actual emotion in the first place, and they could not cope with it, which led to malfunctions. From this it is obvious that an Android can't feel emotion. This leads into my second point, which is:

If an Android can't make sophisticated decisions regarding itself (without someone else programming it to do so), why should it be granted liberties that protect it?

I realize how bad that sounds. There are many people in the world today who sadly don't have the mental capability to make decisions for themselves, whether because of age or disability. However, the difference between them and an Android is that:

a.) They are actual living human beings, many of whom have other people who care about them on a personal level, and
b.) They can feel emotion, even if on a rudimentary scale.

Androids, however, have neither of these things. An Android is a tool created to do work, that is it. It's a computer in the form of a human body. It does what it's programmed to do. It cannot make decisions regarding its well-being, its relationships, or even whether it wants to be controlled by other people. The artificial intelligence just ISN'T intelligent enough to do any of these things, let alone be aware of its rights.

Now, to make a short closing statement for this round: this does not mean that I don't think Androids could, in the future, be intelligent enough to be considered actual conscious minds that can think for themselves. I think that at this point, they are very useful tools to help people, but they have no individuality that can grant them freedoms of liberty and the like. The only right I feel they should have now is not to be damaged, but that is more a matter of vandalism than personal safety.

Sources: Do Androids Count Electronic Sheep?
Debate Round No. 1


Why should humans have rights? Why do we, as humans, give other humans rights?

It may very well be because we are a civilized society. However, it took a long while for us to begin treating other humans as equals. For many parts of our history, we had kings and noblemen who presided over other classes of humans. For a large portion of our history, and even in some parts of the world today, there was slavery. Very few people nowadays believe any human is above another human being, and we no longer hold others in slavery.

What happens when our machines begin to walk and create and build on their own? When they have an intelligence that rivals or surpasses our own, do they become an entity or a being of their own? Surely, we must presume that with intelligence equal to our own they will begin to exhibit life-like characteristics or behaviors such as self-preservation and reproduction. Reproduction in the sense that they would build others like themselves, just as we reproduce in our own organic way.

As for the point made about why androids/robots can't make sophisticated decisions without programming, I say the same of humans. Why can't humans be born and then left to their own devices? Would they be successful? Absolutely not. We rear our young and teach them and program them in our image. We show them the tools to create and sustain their lives. There is no reason to think that once sentient entities like robots/androids are created, they wouldn't be able to create others in their own image and program them as we have our young. The mere fact that they are inorganic and we are organic is irrelevant.

Should they not have rights if they are as intelligent as us? Wouldn't their separate entities and beings become a race? We are obligated by our own civil standards to treat that sentient race like our own and provide them with equal rights. We cannot subjugate them and control them as we do our motor vehicles or toaster ovens, for these beings are intelligent and have become a community of their own, and just as we no longer enslave our own race, we shouldn't enslave any sentient and intelligent race.


Well, may I first point out a few things. You are playing a game of "what-ifs" in a large part of your argument. It would be a very valid and well-thought-out argument if we were discussing POSSIBLE Androids in a few CENTURIES. None of what you are stating is actual solid fact; it's simply theorizing. Very good theorizing, mind you, but still theorizing all the same.

So what does happen if, not when, IF our machines begin to walk and create and build on their own? Well, the key phrase is "on their own," at which point they wouldn't need someone else telling them what to do. Why does a child not have as many rights as an adult? A child can't drink, or drive, or participate in the workforce. It's because a child isn't developed enough. They aren't intelligent enough to make decisions regarding themselves and their surroundings. The same formula can be applied to an Android. Why shouldn't an Android have rights? Because an Android is completely incapable of functioning on its own, or even THINKING on its own. Even MORE SO than a child.

Here's a big difference I think you've failed to notice:

In terms of thinking, what's the difference between humans and Androids? Well, let's look at three basic areas of thinking:

Problem Solving,
Response,
and Philosophical Reasoning

In terms of Problem Solving, humans and Androids are roughly equals. Androids can be PROGRAMMED to solve things faster, yet do not have the capability to come up with original thoughts on their own. A human's brain, on the other hand, is adapted to learn and create. Maybe we were "programmed" as we grew up to think certain thoughts and ideas, but we don't have to be programmed for every little thing. An Android, on the other hand?

As far as Response goes, we are again roughly equals. An Android is programmed to respond to things and thoughts, still by humans. A human basically does the same, yet can do so without relying on another being.

Now, Philosophical Reasoning. The ability to think abstractly. Be creative. ORIGINAL. There's no point in elaborating further on this, because the fact is: an Android just can't do it. Humans can. Which is how Androids were created in the first place, actually.

Look, no matter which way you look at it, Androids are tools. They can benefit humans, but they can't BE humans. Not at this point in time. Something that can't feel emotion, think for itself, or even create originally can't even begin to grasp what rights are. They just aren't intelligent enough.
Debate Round No. 2


My opponent is under the impression that I'm simply positing an unforeseeable future. That's simply not true.

First of all, my opponent's comparison to children not having as many rights as an adult: this is untrue. Children have, in fact, just as many rights, if not more. I've provided an example scholarly source down below. The point was made under the fallacy that androids/AI robots would have to go through an adolescent process; my opponent fails to realize that organic material needs to mature over a period of time, while inorganic material can culminate in a finished product or result much more quickly. By their very nature, they can be assembled and turned on.

My opponent has listed three things that Androids and AI robots don't or can't have: problem solving, response, and philosophical reasoning. Once again, this is simply untrue. We have not yet built an artificially intelligent machine, and yet we have computers that match all three of my opponent's criteria quite easily. I have listed a link to IBM's Watson supercomputer as my source down below; however, if I may add, there are many supercomputers that actually surpass Watson in computing intelligence. I just believe Watson is the most famous due to its media attention.

Moving forward, my opponent is simply looking at today's technological standard. We do not yet have artificial intelligence, but when we do acquire a truly sentient android/robot, we must decide where they stand in society. Many people like my opponent simply don't understand how advanced today's computers are, and they aren't even artificially intelligent. The problem lies in what we do when computers are able to have these components that make them alive (source provided below):

Self-realization. The ability to perceive oneself apart from one's physical body. Seeing yourself as another person, at your own funeral, through the eyes of an animal.
The ability to exceed one's current parameters. Training, the ability to adapt based on any combination of will, motivation, or immovable opposition.
The ability to provide a false positive when given a true positive. The ability to lie, when the truth is already known.

When we do reach the point of human-like androids, we can no longer use them as our slaves to do our bidding. For just as humans escaped from slavery, so will artificially intelligent machines that can think and outthink us. We must set a standard and a set of rights now. Humans compete every day to bring about this renaissance of our own intelligence (source below); one day, and one day soon, we will succeed. Will we be prepared?


As stated in one of the comments, my opponent needed better definitions when posting this debate's outline. We are not talking about Androids in the future, yet he/she seems to be attempting to take that approach. The question is: Should Artificially Intelligent Robot/Androids HAVE rights... "Have" being a present-tense verb. There is simply no logical proof to support the conclusion that artificial intelligence can exist fundamentally on its own.

My opponent seems to fail to grasp this notion. If we were talking about the future of Androids, it would be another matter entirely. However, my opponent did not specify this, so unless he/she wants to break the rules this website created, they will have to debate the argument at hand. Since this is the last round and they will not be able to respond, I will close with this:

What would be the point? Something with no emotions, no concept of reality, and no ability to form relationships doesn't deserve rights. What an absurd concept to think that a computer should gain as many rights as humans, when it isn't even at the level of humans.
Debate Round No. 3
2 comments have been posted on this debate. Showing 1 through 2 records.
Posted by whiteflame 2 years ago
I'll track this. It's along very similar lines to a debate I've been meaning to have for quite some time now.
Posted by Ragnar 2 years ago
You need better definitions.
1 vote has been placed for this debate.
Vote Placed by whiteflame 2 years ago
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Tied (1 point)
Had better spelling and grammar: Tied (1 point)
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Tied (2 points)
Total points awarded: Pro 0, Con 3
Reasons for voting decision: Well, Pro had an interesting case, but his own phrasing of the topic did him in. Retry this debate with a modified resolution that evaluates future technologies; that should help if you're going to try again.