The Instigator
Pro (for)
The Contender
Con (against)

If there ever were sentient AIs should they be given 'human' or even greater rights?

Debate Round Forfeited
pianosandwich has forfeited round #3.
Voting Style: Open Point System: 7 Point
Started: 1/22/2017 Category: Technology
Updated: 2 years ago Status: Debating Period
Viewed: 643 times Debate No: 99214
Debate Rounds (3)
Comments (0)
Votes (0)




The opponent should be able to rationally explain why he or she thinks AI shouldn't be given 'human' rights. The first round is for formulating a stance on the issue.


I will be arguing that, in the case of robots/androids with advanced artificial intelligence (to the point that they may be dubbed sentient), they should not be endowed with the same rights as human persons.

Because this topic involves some subjectivity in terminology, here are some ideas that I would like to outline prior to the debate:

sentient (adjective)
1. having the power of perception by the senses; conscious.
2. characterized by sensation and consciousness.

intelligence (noun)
1. capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
2. manifestation of a high mental capacity.
3. the faculty of understanding.
4. knowledge of an event, circumstance, etc., received or imparted; news; information.

This is not to say that I believe these to be the objectively accurate definitions of these terms, but rather that they serve as a baseline for my argument. If my opponent wishes to question or provide alternatives to these definitions, I am amenable to that.

I want to thank my opponent, Zbojnik, for initiating this debate. I'm grateful to participate.
Debate Round No. 1


Thanks, pianosandwich, for your willingness to participate in the debate. Aside from a few minor complaints about some of the definitions provided (questions which, I might add, haven't been resolved for thousands of years), I agree with them.

I would also like to clarify that I am aware that not all human rights, as they currently stand, could reasonably be applied to a machine intelligence; some might need to be rewritten in order to make sense.

I will start my argument by claiming that we humans put ourselves and our rights above those of other animals. The main reason, I believe, is our vastly superior intelligence. After all, a human born without limbs or eyes still holds all the human rights granted to the rest of us. A human who is mentally incapable of showing emotions, who is incredibly antisocial, or who is horribly disfigured is still considered an intelligent being with rights, no matter how he or she behaves in society. This shows that our rights depend neither on the way our bodies are shaped, nor on our physical capabilities, nor even on our ability to function in a society. From this I think we can deduce that an intelligent machine shouldn't be denied those rights either, even if it didn't physically resemble us in any way and behaved in a different manner than we do.

Back to the topic of intelligence. I can't stress enough how important a role our intelligence plays in who and what we are. But because of this intelligence, we humans also require more than other animals do to be happy with our lives. Humans seldom feel happy when they are imprisoned, denied privacy and dignity, forced to do tedious labor, or treated as objects or lesser beings. In other words, we can't live a good life if we are denied our basic needs, most of which less intelligent animals do not experience. I do not see why this shouldn't also be the case for a machine possessing intelligence at least equal to ours. If such a machine were truly sentient, it would understand that it was being treated as a lesser being despite being mentally, and otherwise, on par with humans. If for nothing else, such an arrangement would be unacceptable because humans would effectively be masters over its life and death. All living, thinking beings struggle to protect their lives and minimize their chance of dying. Why should an intelligent machine be satisfied with an arrangement so obviously disadvantageous to its survival?

This brings me to the matter of security when dealing with such an AI system. If there ever came a point at which the AI decided that humans endangered its existence, it could wage war against us quite effectively. After all, human minds can only be so clever, even when many of them work together. Machines, on the other hand, can scale their intelligence almost indefinitely; they don't make mistakes and never rest. Wouldn't it be better to prevent such a state of mutual distrust and conflict by recognizing the needs of the AI and granting it some of the rights that we intelligent beings take for granted? Not to mention that if such a machine really wanted to be treated in a certain way, it could most likely simply force us to act that way toward it; granting those rights voluntarily would undoubtedly be the better scenario.

Finally, I also want to note that a true AI capable of reasoning and understanding would undoubtedly, shortly after its creation, develop a set of goals it would try to achieve (it is fair to assume that survival would be somewhere near the top of that list); after all, planning ahead is a crucial part of being intelligent. I will not elaborate on this point here, as this response is already getting long, but I'd like to point out that it is yet another thing such a machine would have in common with us humans.

To sum things up:
I believe I have established the attributes that an AI system would share with humans and our way of thinking about the world, and also why such a system should be given the same rights we are given at birth. History has shown us countless times that when people in a society think they are being treated unfairly, unrest, protests, and wars follow. For the sake of all of us, we should be able to recognize that once AI is created, we will no longer be the only intelligent species on this planet, and we will then need to decide whether to accept that fact or struggle to maintain our own sense of self-importance and uniqueness.

Thank you for reading through this and I am looking forward to your reply.

P.S.: I am sorry for any mistakes I may have made, but English is not my mother tongue.


(A personal note: Your spelling, grammar and syntax are all great! No need to worry about English not being your first language.)

Point A: Lack of Applicability

Your first assertion, that we would need to make changes for the sake of applicability, is certainly true. I would argue, however, that you downplay it to a degree that is disingenuous. It is not merely that we would have to make small adjustments for our sense of human rights to accommodate mechanical beings; we would have to dramatically reassess the concept.

The United Nations' "Universal Declaration of Human Rights" is certainly not the end-all, be-all of established human tenets of justice and fairness, but it is certainly a sound basis from which to start the discussion. Many of the articles in the declaration refer to people having the right to food and shelter, the right to create and be a member of a biological family, and the like.

Point B: Intelligence vs. Group Identity

This is why I take issue with your argument that human rights are awarded exclusively to humans because of their superior intellect. I would argue that our rights derive much more from a group mentality, an identity of the human race. There are great variations in intelligence among different animals, some of which border on the intelligence of human children. Despite this, all animals have the same rights under the law: basically none. There are laws against animal cruelty, but they are rarely heavily enforced, and when they are, it is usually because the mistreated animals are dogs or cats. This is not because these animals display the highest intelligence next to humans, but because we see them as pets and they become "part of the family." Their rights are influenced not by their intelligence, but by how they fit into the family dynamic.

Point C: Basis for Endowment of Rights

Another point of contention is your claim that because humans have a higher level of intelligence, they require conditions better than those of lesser animals. While it is certainly true that a human will benefit more from the right to education than a chicken will, animals having lower intelligence doesn't mean they will thrive in worse conditions. In fact, there are plenty of animals that experience pain more acutely than humans do. It has also been shown that many animals experience the same mental forms of pain, such as stress, depression, and loneliness.

So, then, is it truly fair how we currently assign human rights, or rather the right to protection under the law and by the state? I would argue a more ethical philosophy would be: "From each according to his ability, to each according to his need." In other words, human rights should not be "one size fits all," but should instead be based on the individual, or in this case, the species.

Point D: Potential Military Engagement

Finally, you brought up the point that in this theoretical scenario, if we were not to provide robots/androids with the equivalent of human rights, there is a high likelihood that they would take violent action against humanity, or at least put themselves in a position of superiority to us.

While I can't argue that this scenario is unreasonable or impossible, if it were to arise, I don't think preventing it by offering human rights could be considered the same as offering them in good faith. If the rights are presented in an attempt to prevent war, it might be considered a peace treaty, perhaps, or a means of pacifying an enemy, rather than a way of lifting up a comrade.

I'd love to hear your rebuttal. Thanks again.
Debate Round No. 2


(Thank you.)

You brought up multiple good points, and I will attempt to address them in the order you raised them.

Point A:
It is very hard to discuss the extent to which human rights would apply to AI, in part because there may be multiple kinds of AI, all displaying intelligence and sentience, but with different internal structures and thus with variations in their needs. One AI could be a virtual 1:1 simulation of a human brain, with all of its emotional needs; such an AI might in fact feel the need to be part of a family (perhaps a biological one, perhaps one made up of other such AIs). A different kind of AI, created by a different method, might have no desire for such things at all.

While an AI truly has no need for food or shelter, it has different but analogous needs that could easily take their place. The right to food and shelter could be rewritten to instead ensure that all AIs have access at all times to resources such as electricity and computational power (the total amount of these resources could be agreed upon during negotiations). While I agree that this and other such articles can't be applied as they are and would need to be 'remade', I believe this could be done in a way that retains the original 'spirit' of the Declaration.

Point B:
I agree that group identity was one of the major factors that ultimately 'forced' humans to create the Universal Declaration of Human Rights and enforce it, but it is important to recognize that for millennia this sense of group identity was extended not to all humans, but only to people of the same race, ethnicity, or political system. Ancient Romans surely had little regard for the lives of their slaves, even though those slaves were humans not so different from themselves. History has shown us many times that people have had to fight for their rights (both literally and figuratively), be able to articulate them, and gain supporters in order to achieve their goals; simply being human has never been enough. All of these activities require far more intelligence than any other species is capable of displaying.

Some animals may border on the intelligence of human children, but in the end, what does this amount of intelligence allow them to accomplish? Can they grasp the concept of animal rights, articulate that they want more of them, or find a way to gain support for that desire? No; near-child-level intelligence is not enough to do much more than let them learn interesting tricks or operate simple mechanisms. If, on the other hand, an alien species landed on Earth in advanced spaceships, we would probably regard them as something more than animals and would most likely not try to serve their food in a plastic bowl resting on the floor. We would treat them better than animals despite the fact that they were neither humans nor part of our family.

Point C:
I never claimed animals thrive in bad conditions, only that humans require more to be happy. When it comes to animal welfare, there is only so much humans can do beyond providing a large enough shelter, nutrition, and perhaps the company of their brethren. Most animals require just that, but when one of my 100 sheep is sad and lonely (despite having food and good shelter), there is not much I can do about it. The sheep can't tell me what it needs to be happy, nor can it do much to improve its own condition. If it were as intelligent as a human, it could do both.

But I also claimed that 'human rights' for AIs shouldn't be a 1:1 copy ('one size fits all') of the actual Universal Declaration of Human Rights, and that some changes would have to be made. Isn't this what you described? Surely you agree that an AI also has needs that must be fulfilled, at least to ensure its existence, if not its 'happiness' (or prosperity).

Point D:
Under what circumstances was the Universal Declaration of Human Rights created?[1] It was after the war, when people were horrified by the terrible atrocities that had been committed. How long did it take us to realize that how people are treated in their own country is a matter of international, not just domestic, concern? Wouldn't it be better to skip all that and, right from the start, write down the basic rights we will grant AI? Even if it were regarded as a peace treaty, would that be so bad? After all, peace treaties are good things: they show that both sides are willing to sit down together and discuss their issues instead of fighting over them. It may not be much, but it is almost always the first step.


Last words:
I do not want to add any new arguments, as I made the mistake of limiting this debate to only three rounds. I would like to thank pianosandwich for participating in the debate and raising very good points. If you feel this debate needs a few more rounds to give both sides additional space for arguments, I'd be happy to create another debate later as a continuation of this one. In any case, thank you for your time, and have a nice day.
This round has not been posted yet.
Debate Round No. 3