A.I. of sufficient intelligence deserves Personhood
Debate Rounds (5)
Basis & Definitions: We will presume that the entity's nature itself does not matter. The question is essentially whether a being that functions as a human would, fully entitled to emotions, mental capabilities, and intelligence, should be considered a "person". The standard of machinery still applies, so the entity still requires initial input, unlike biological systems; beyond that input, however, it can assess within its own parameters, which are assumed to be equivalent to a human's.
Bans & Diversions: No species other than humans may be referenced. No other form of example (such as movies or books) may be referenced. If you choose a definition of personhood, you must stick with it throughout the entire debate.
Special Notes: Due to the science-fiction nature of this particular debate, you are not required to cite current laws or ethics (though you must cite any definitions you wish to assert); however, you must give sound reasoning for your arguments.
A) I assert that personhood is a human construct that can accurately apply only to humans, no matter the definition, because most of the rights contained therein deal with human scenarios and situations, whether physical, emotional, spiritual, or mental. I further assert that an A.I., even with equivalent capacities, would not be able to gain much from a human-centered personhood.
B) I assert that personhood would be useless to the A.I. itself, because the A.I. functions in a manner different enough from humans that, from its perspective, personhood would not satisfy its needs. To that end, I would propose an alternate form of rights for the A.I. that properly protects and provides for it, rather than adopting personhood.
While I am not sure how the A.I. would contribute to human law and ethics as a participant in human personhood, I can affirm that there would be very little benefit to the A.I.'s own rights. If the rights were extended and the A.I. then developed them further as it became subject to them, I can see where that would be a good point, but I fail to see how the merger would make sense for either the A.I. or the human, since the perspectives of the two entities would be drastically different.

The A.I., not being biological, likely does not have to worry about things like reproductive rights or bodily integrity, care, and management, which make up the majority of the ethics surrounding human healthcare. It is not that humans do not go to great lengths to protect themselves, but rather that how we protect ourselves won't help the A.I. What is "property destruction" for an A.I.? What exactly does it own? How would bodily integrity for humans apply to the A.I.? Would it be protection for its case or shell, whether it is an android or seated in a mainframe or stationary computer? How would these things be enforced, and how would they be protected? Harming a human is very different from harming an A.I.: generally speaking, you can harm an A.I. by hacking it, while you cannot hack a human in the same fashion.
I feel that the parallels between the two systems and two entities themselves are too few to really set up a sound basis.
My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms (computer-based intelligences, for example), yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human, such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?
By: James Boyle
1. The A.I. in this scenario has emotions, so it can, and likely will, have the problem of bias; however, the bias may differ due to an inability to relate. For instance, grievous crimes like sexual assault don't happen to A.I., so while it may know the law, it may or may not be geared toward liking humans enough to want to uphold human justice. With prejudice being a possibility due to emotions, it is difficult to say that there would be no faulty judgment, or that its judgment would necessarily be clearer.
2. Human law, legal matters, and decisions are very emotional. While "cold, hard logic" sounds like the way to go, in reality we often grant pardons based not on what was done but on how the individual seems to be reacting to what was done, their history, and the likelihood of recidivism. For instance, you would not lock up a teen or young adult for the maximum sentence if he committed petty theft, showed remorse, and really had no criminal record, and the same goes for traffic laws and who gets a warning versus a ticket. On the other hand, if you have a habitual criminal or a grievous, unremorseful crime on your hands, you treat it completely differently. This is imperative for human law. If we judged solely on the action alone, I believe many more people would have prison sentences, and longer ones at that.
3. There is still no explicit benefit to the A.I.; in actuality, you are suggesting we utilize the computer as a tool, which would be almost counterintuitive to personhood and the freedoms that come with it. Instead of a citizen, this strongly suggests "Put a robot on the bench; it will help clear up the murkiness of human failings," which I feel is not the same as expecting insight from an equivalent being. If nothing else, it is an objectification of the entity, and I feel that, as an aware, thinking, feeling being, it would know this. Whether or not it would take pride in this is the same as with humans: some of us take pride in our abilities and uses, and others do not want to be used for human notions.
As for the quotation, I feel that it simply asks the question we are asking now. I am not sure what it was supposed to add, unless you wanted to cite something in the full paper; otherwise it is not actually a "reason" or "argument" but a list of statements, hypothetical propositions, and questions.
In philosophy, the word "person" may refer to various concepts. According to the "naturalist" epistemological tradition, from Descartes through Locke and Hume, the term may designate any human (or "non-human") agent which: possesses continuous consciousness over time; and who is therefore capable of framing representations about the world, formulating plans and acting on them.
blackkid forfeited this round.
An A.I. of sufficient intelligence does deserve personhood in specific cases. The reason for this is that while the A.I. may be man-made, who is to say that it does not develop a life and mind of its own? Why do we keep the rights of personhood to ourselves when perhaps the A.I. has developed a sense of humanity and a yearning for a personal relationship with people, but because we refuse to give it the right to express these feelings, it must keep them hidden? The only way to answer these questions is to move forward and give the A.I. what it has long been waiting for: the right to the title of personhood.
1 vote has been placed for this debate.
Vote Placed by whiteflame 2 years ago
|Agreed with before the debate:    | - | - | 0 points |
|Agreed with after the debate:     | - | - | 0 points |
|Who had better conduct:           | - | - | 1 point  |
|Had better spelling and grammar:  | - | - | 1 point  |
|Made more convincing arguments:   | - | - | 3 points |
|Used the most reliable sources:   | - | - | 2 points |
|Total points awarded:             | 3 | 0 |
Reasons for voting decision: Given in comments.