The Instigator
bsh1
Con (against)
Winning
28 Points
The Contender
Wylted
Pro (for)
Losing
0 Points

Artificial Intelligence and Rights: Can Androids have Rights?

Post Voting Period
The voting period for this debate has ended.
after 4 votes the winner is...
bsh1
Voting Style: Open with Elo Restrictions
Point System: Select Winner
Started: 12/6/2014
Category: TV
Updated: 2 years ago
Status: Post Voting Period
Viewed: 5,631 times
Debate No: 66471
Debate Rounds (5)
Comments (130)
Votes (4)

 

bsh1

Con

Preface

This debate, on the surface, appears to be a Star Trek debate. However, it delves into issues worthy of any upper-level philosophy or ethics class: can artificial intelligences be considered persons, or have rights? I am truly looking forward to an interesting moral discussion regarding this question, and I hope for an excellent debate!

The response time for arguments on this debate is 48 hours. You need to have completed 3 debates to be able to accept this debate--if you want to accept and haven't met this requirement, post in the comments. You must have at least 2,000 ELO to vote on this debate.

Full Topic

Judge Advocate General Louvois's ruling, that Lt. Cmdr. Data had "a right to choose," was wrong.

Context

This debate is about events that transpired in the Star Trek: TNG episode "The Measure of a Man." [http://en.wikipedia.org...(Star_Trek:_The_Next_Generation)]

Rules

1. No forfeits
2. Any citations or foot/endnotes must be provided in the text of the debate
3. No new arguments in the final round
4. Maintain a civil and decorous atmosphere
5. No trolling or semantics; "wrong" in this case is getting at the veracity of the verdict
6. Pro accepts all definitions and waives his/her right to add definitions
7. BOP is shared: Pro must show that the ruling was wrong; Con must show that the ruling was right
8. Pro must post their case in Round One
9. Violation of any of these rules or of any of the R1 set-up merits a loss

Structure

R1. Pro's Constructive Case
R2. Con's Constructive Case, Pro rebuts Con's Case
R3. Con rebuts Pro's Case, Pro defends Pro's Case
R4. Con defends Con's Case, Pro rebuts Con's Case and Crystallizes
R5. Con rebuts Pro's Case and Crystallizes

Thanks...

...to whomever accepts. I sincerely hope that this will be an informative, enlightening, and--of course--nerdy discussion.

Wylted

Pro

I look forward to this interesting exchange and am delighted that I accidentally misread the resolution and took the wrong side. This is a situation that somewhat resembles the one that took place in the show.

This is also an important topic to discuss, as hopefully one day man will have to decide if an android should have rights.

RIGHTS

Who should have rights is the question we will be exploring. Data shouldn't have rights. It is no different than an advanced version of Siri. It has no consciousness or understanding. It can't feel or love, and it certainly isn't capable of being self-aware.

THE CHINESE ROOM EXPERIMENT

A thought experiment developed by John Searle in 1980 shows why an android like Data can't have a mind like a human's. Such machines can't understand or have consciousness. In a real court trial like the one Data faced, it's almost certain the Chinese room thought experiment would be brought up.

This experiment assumes you don't know Chinese. Imagine that you are locked in a room with a book that pairs Chinese symbols representing questions with the relevant Chinese responses.

While locked in this room a Chinese speaker will hand you an index card with some Chinese symbols asking a question and you will refer to the book to pass back the appropriate response.

To the person passing the card to you, it will look as if you understand Chinese, however you clearly don't.

This is similar to how a machine processes input. No matter how much intelligence a machine can duplicate, it can't actually understand. It's just a set of programmed responses. Machines simulate intelligence, but they can't have intelligence.
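To make the room concrete, here is a minimal sketch of my own in Python (the phrases and the tiny rule book are invented for illustration; nothing here comes from Searle or the show). The entire "mind" of the room is a lookup table: it answers fluently while interpreting nothing, which is also exactly the syntax-only processing I describe in the next section.

# A minimal "Chinese room": a rule book mapping question symbols to answer
# symbols. The operator never interprets the symbols; he only matches them.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗?": "会的。",      # "Do you speak Chinese?" -> "Yes."
}

def room_operator(card: str) -> str:
    # Look the incoming card up; anything not in the book gets a stock reply.
    return RULE_BOOK.get(card, "请再说一遍。")  # "Please say that again."

print(room_operator("你好吗?"))  # a fluent answer, with zero comprehension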

Not that it would never be possible, but in the Star Trek universe they haven't reached that level of advancement in computer technology.

INFORMATION PROCESSING

I want to take a moment to cover why machines process information differently than humans do. A machine operates on symbols. It uses a bunch of zeros and ones and follows hard-coded formulas. It's not actually thinking or understanding anything; it's merely responding to formulas.

It operates with symbols in a different way than humans do. The computer uses the symbols in a syntactic way, following a formula and having no understanding (like the Chinese room experiment).

Humans use a more semantic approach. We don't use symbols according to any formula, we understand the symbols we see.

Machines don't have this ability and probably never will. Asking a machine to reproduce this is similar to asking a machine to perform photosynthesis, which brings me to my next point.

CONSCIOUSNESS IS A BIOLOGICAL PROCESS

Consciousness is a biological process like digestion. It just can't be duplicated by a machine. A machine can give the outward appearance of understanding like in The Chinese Room experiment but it just fails to understand.

If we just look at the brain from a common sense perspective, we can see that consciousness is the irreducible state of a bunch of micro-processes of the brain, which is not yet understood by science. This is similar to how the liquidity of water is an emergent property of its molecules, even though no single molecule is itself liquid.

We don't know how the brain creates consciousness in a way we can replicate, just like we don't know how to create an embryo from non-biological processes.

Science really hasn't figured out how consciousness is derived from the micro processes of the brain but it's most certainly a biological process derived from micro systems within the brain.

You don't create a consciousness with a computer program.

Miguel Nicolelis, a neuroscientist at Duke University, says:

“The brain is not computable and no engineering can reproduce it,”

"human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, nonlinear interactions among billions of cells"

http://www.technologyreview.com...

The best they can do, and what was done in Data's case, is make a simulation that does a damn good job of being convincing to an outside observer. (Again, see the Chinese Room Experiment.)

You see, a computer contains one or, as in Data's case, maybe a few hundred processors, so it can't really achieve consciousness the way a human can.

Humans, in comparison, have literally billions of microprocessors in their brains, and how they combine to create consciousness is unknown. (Crevier, D. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.)


You can see hints in this particular episode that Data doesn't have a billion processors working in his head. He can do all types of long calculations to determine whether his poker hand gives him a good chance of winning, but he can't conceive of the point of the game, which is deception.

ANDROIDS DON'T FEEL

Until an android or computer can feel, it can't have consciousness. When a person feels happy, sad, or angry, they feel it in their whole body. Anger causes your blood pressure to rise and your arms and legs to feel extremely tense.

Machines will never be able to reproduce these feelings. This process is a chemical reaction in your brain that floods out to your body. In the show, you can see Data has his arm removed in the trial and feels absolutely nothing.

Sure, Data can appear to have emotion, as he was programmed to do, but can he actually "feel" emotion? Can he feel a chill go up his spine when he's scared? Can he feel a lightness in his chest when he falls in love?

Better yet, can Data dream? What makes us human is our multiple levels of focus and awareness. Is Data ever distracted by thoughts of a loved one when he is supposed to be focused on an important task? Does he ever lose himself in thought or daydream?

He doesn't do any of those things. He just analyzes and reacts to what's in front of him. He can pull data relevant to what's in front of him but he is incapable of abstract thought. He is incapable of consciousness.

DIFFERENT TYPE OF CONSCIOUSNESS

It could be argued that Data is conscious but that it's a different type of consciousness. To define consciousness in this way would be a problem because it's a little dishonest. It would be like saying vanilla is a different type of chocolate, and it goes back to our innate need to humanize everything. It's just like our inclination to see the shapes of human figures in the dark, only to turn on the light and see a coat rack.

Consciousness is a human concept and can only be viewed from that angle. We call animals conscious because we can understand their consciousness from a human view and see it as resembling our own. Dismissing my arguments by calling AI a different type of consciousness just won't work.

Saying that consciousness from artificial intelligence differs from human consciousness in an unrecognizable way is a nonsensical statement that carries no meaning.

CONCLUSION

Riker put up a bad argument for his side. If it were me instead of Riker, Data would have been chopped up and experimented on. Judge Advocate General Louvois's ruling was incorrect, and she admits herself that she is unqualified to make a ruling.

Data's only argument for having rights is that he is conscious, and since I've proved he is not, he should have no rights.
Debate Round No. 1
bsh1

Con

Thanks to Wylted for this debate! I want to explore the issue of whether the Court's ruling was wrong in three different ways--the first is to examine the actual cases put before the court, the second is to examine the origins and purpose of moral rights, and the third is to discuss what kinds of entities can have legal rights.

C1. THE CASE BEFORE THE COURT

When we ask a question such as "did the court decide wrongly," we are asking a question in the context of the court's proceedings. Decisions and rulings must always be understood in the context in which they were made. If the President is making a decision, and all of the evidence suggests that he should take action X, he is not wrong to choose that course of action, because the information available at the time was such that taking action X was the most reasonable thing to do--even if action X turned out to be ineffective after it was undertaken. Similarly, it is not wrong for me to decide to buy a gift that the information I have suggests my friend will enjoy, even if, unbeknownst to me, my friend already has this item and doesn't need my gift. In fact, one could even say it would be wrong for the President or myself to make a decision we didn't believe to be the right one, even if it turned out for the best. This is because our choice would go against the known facts in our arsenal, and so it would be irresponsible of us to not make the decision we believed to be best.

Decisions are necessarily rooted in the moment, and correct decisions conform to the known facts. Their veracity is best understood in context. The court only has so much information at its disposal, and so it is no different from the foregoing examples. So, the question that needs to be asked is whether Judge Advocate General (JAG) Louvois' ruling was correct given what information was presented to her. I think that Pro put it best when he said that: "Riker put up a bad argument for his side." I would agree, and given the two arguments presented, Picard's and Riker's, Picard's was the one that made the most compelling case. Picard's interrogation of Cmdr. Maddox seemed to establish, in the court space, that Data was self-aware and intelligent. Picard's earlier humanization of Data as well as Maddox's own hesitation cast doubt onto whether Data was conscious. [1] Given Maddox's own, self-defeating testimony, JAG Louvois's ruling was not wrong, but rather was the only decision she could have made.

The following are "even if" arguments: even if you don't buy into the argument above, these subsequent assertions should still allow you to vote Con.

C2. MORAL RIGHTS

I think it is a false question to ask whether Data is equal to humans--that is not what is at issue here. Rather, the question is specifically whether Data has a right to choose. That is a fairly narrow inquiry, and responding in an overbroad fashion risks us losing sight of what this debate is about. That isn't to say, however, that we shouldn't lay the groundwork for a theoretical approach to right-holders so that we can answer the more specific query about Data. What I mean here is that we first have to understand how rights are assigned to understand whether Data should be assigned any, let alone the right to choose.

First of all, I would point out that Data does not need "a mind like humans" to have rights. One can easily contend that animals are entitled to rights of varying degrees, and yet they do not think or process information as we do. There are subtle, yet relevant differences. One of the most commonly posited arguments for animal and human moral worth is sentience, defined as "the ability to feel, perceive, or experience subjectively." [2] Cmdr. Maddox gives us, in his testimony, three criteria by which we can assess Data's sentience: (1) self-awareness, (2) intelligence, and (3) consciousness. I would add to this list sentimentality, because sentience often includes "the capacity to experience episodes of positively or negatively valenced awareness." [3] In other words, one must not merely be aware, but one must be able to color that awareness in positive or negative lights. Pro implicitly agrees to the criteria of sentience Maddox advances, because all of his remarks regard Data's self-awareness, consciousness, and ability to feel. Therefore, it is reasonable to say that if Data meets the criteria of sentience, Pro's case falls.

Moreover, sentience is relevant insofar as it provides a reasonable basis for grounding moral rights. By this I mean that if we grant that suffering is wrong--something that seems intuitively obvious and is enshrined in many of our moral values--then we must grant that sentient beings have some kind of moral status, because they are able to suffer. With the relevance of sentience established, I will now review the four criteria I laid out for it, and why Data meets each one.

SC1. Self-Awareness

Self-awareness is the ability to engage in "introspection and the ability to recognize oneself as an individual separate from the environment and other individuals." [4] Clearly, this is something that Data is able to do. To provide a transcript of the court proceedings:

Picard: "Data, what are you doing now?"
Data: "I am taking part in a legal hearing to determine my rights and status: am I a person or property."
Picard: "And what's at stake?"
Data: "My right to choose--perhaps my very life."


Data is certainly able to identify himself apart from others--he knows who he is, and what he is doing at a given time, as well as how decisions can impact him as an individual. Data is also able to critically examine his own thoughts and ideas, and to reflect on his goals and priorities, demonstrating a power for introspection. Consequently, it seems that Data is indeed self-aware.

SC2. Intelligence

Data's intelligence is certainly undeniable. His processing ability alone is incredible. Moreover, Data is capable of learning and accruing new information. Even modern-day computers are able to learn: "The possibility of machine learning is implicit in computer programs' abilities to self-modify and various means of realizing that ability continue to be developed. Types of machine learning techniques include decision tree learning, ensemble learning, current-best-hypothesis learning, explanation-based learning, Inductive Logic Programming (ILP), Bayesian statistical learning, instance-based learning, reinforcement learning, and neural networks." [5]
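To give a concrete flavor of what the quoted source means by programs that self-modify (a generic sketch of my own in Python; it is not a claim about how Data's positronic brain actually works), here is a perceptron whose behavior is acquired from examples rather than written in by hand:

# A minimal machine learner: a perceptron that adjusts its own weights
# whenever it misclassifies an example. Its final behavior is learned.
def train(examples, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:              # label is +1 or -1
            if label * (w[0]*x1 + w[1]*x2 + b) <= 0:  # got this one wrong?
                w[0] += label * x1                    # self-modify the weights
                w[1] += label * x2
                b += label
    return w, b

# Learn the logical AND function purely from observed examples.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train(data)
print([w[0]*x1 + w[1]*x2 + b > 0 for (x1, x2), _ in data])  # [False, False, False, True]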

SC3. Consciousness

What is it to be conscious? Thomas Nagel argues that "a being is conscious just if there is 'something that it is like' to be that creature, i.e., some subjective way the world seems or appears from the creature's mental or experiential point of view. In Nagel's example, bats are conscious because there is something that it is like for a bat to experience its world through its echo-locatory senses, even though we humans from our human point of view can not empathetically understand what such a mode of consciousness is like from the bat's own point of view." [6] While a bat is conscious, a rock is not--a rock doesn't perceive anything, and so it is not possible to imagine the world from the rock's point of view. Nagel argues, then, that when we ask the question, "is X conscious," what we are really trying to understand is if X is perceptive.

Surely, Data meets the criterion Nagel sets out. The standard, referred to as the "what is it like" standard, is not irrational to apply to an android. I can reasonably inquire as to what it is like to be Data insofar as I can experience the world from his unique perspective. That makes him conscious.

SC4. Sentimentality

Sentiment is defined as "an attitude, thought, or judgment prompted by feeling." I will attempt to show through four examples how Data meets this definition. All of these, importantly, intimate feelings or value judgments that require sentimentality.

1. "Following Tasha Yar's death in 2364, Data was puzzled about her death, thinking not about Yar but rather how he would feel in her absence." [7]
2. Data keeps trinkets, such as medals and an image of Lt. Yar, that serve no rational purpose, but that he just "wanted" to retain. [1]
3. Data rejected an offer by the Borg that would have allowed him to have experienced pain--a step closer to making him human. [7]
4. "Data claimed that he did not only perceive data and facts, but also the...ineffable qualities of the experience, which would be lost when downloaded to a conventional computer." [7]

C3. LEGAL RIGHTS

Legal rights do not require sentience to be bestowed on someone, insofar as things (e.g. corporations) can have them. Two of the main arguments about who can hold legal rights center on the ideas that an entity can have legal rights as long as it benefits from the right and/or can exercise the right. [8] That I have an interest in my well-being and physical integrity, and that I can exercise my claim/interest or enforce it through a variety of legal machinery, means that I have a right. If Data (a) can benefit from a legal protection and (b) can exercise that right or seek its enforcement, then Louvois's ruling is not wrong, because she is upholding and protecting a fundamental interest of Data's.

Data is capable of exercising rights through action. He can benefit from rights in the sense that he can continue to exist and act as he chooses. So, from a legal perspective, Data can be afforded a "right to choose."

SOURCES

1 - Star Trek: TNG - "The Measure of a Man"
2 - http://en.wikipedia.org...
3 - http://www.iep.utm.edu...
4 - http://en.wikipedia.org...
5 - http://www.iep.utm.edu...
6 - http://plato.stanford.edu...
7 - http://en.memory-alpha.org...
8 - http://plato.stanford.edu...

Thus, I negate. Over to Pro...
Wylted

Pro

THE CASE BEFORE THE COURT

This is certainly not a good argument to make, in my opinion, and for a few reasons. The Star Trek show is really time-compressed: you have probably three days' worth of events squeezed into 45 minutes of show, and that's including the credits and theme song. I'd say we're looking at a trial that could have been anything from a few hours up to a few days. The show allotted probably less than 10 minutes to this trial.

Let's say for the sake of argument the trial was actually less than 10 minutes long. Picard had a huge burden of proof to meet within those 10 minutes. I find it no coincidence that Judge Advocate General Louvois's ruling was in favor of a guy she frequently sexually harassed, especially when you consider how emotionally invested Picard was in the fate of this android.

I'll get more to this in a minute, but given how much evidence Picard brought to the table, I could have proved my Siri is an intelligent machine.

I want people to get a good feel for what type of burden of proof was on Picard's shoulders. He had to prove a machine had the same rights as a biological entity, something which had never been done before. This ruling was set to make waves.

Data was designed to look as well as act human. It was meant to be a convincing simulation. In order for Picard to win this case he should have had to prove more than just the mere fact Data looked and acted human. Of course he looked and acted human, he was "born (made) that way".

However, looking and acting like a sentient life form, doesn't make you a sentient life form. Picard failed to show anything beyond the fact that Data looked and acted sentient. The only way for Picard to meet his BOP, was to make some deep philosophical arguments which he neglected to do.

The trial was about whether Data was Starfleet property, and it had already been established before the trial that Data was in fact property of Starfleet. Picard and Commander Maddox had agreed that Data would have to meet the 3 attributes of sentience to reverse the original ruling: intelligence, self-awareness, and consciousness. Picard made a weak argument for self-awareness, and intelligence was dropped by Maddox. However, the argument that Picard needed to make--especially considering this ruling was to overturn a previous one that was diligently researched--is one of consciousness; but instead of arguing that Data in fact possessed consciousness, Picard just asked the question "What if?".

Asking a rhetorical question isn't the same as presenting evidence or an argument. Data only met 2 of the 3 attributes of sentience (meeting all 3 is what was needed to prove he wasn't property). My Siri meets 2 of those 3 requirements if the only evidence presented is me asking her questions and her answering in a way that implies she is self-aware.

Wylted: "How are you doing, Siri?

Siri: "Very well, thank you!"


Siri is self aware, she can tell me how she is doing. Is she intelligent? Well let me see, I do believe she knows dang near everything.

Wylted: "What is the square root of 352?"

Siri: "The answer is approximately 18.7617."


I'd say she is intelligent. She answered that question almost immediately, as she will with whatever I ask her.
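For what it's worth, an answer like that requires no understanding at all; a few lines of blind formula-following produce it. Here is a sketch of my own in Python using Newton's method, exactly the sort of syntactic rule-application I described in Round 1:

# Newton's method for square roots: mechanically repeat one formula a fixed
# number of times. No notion of "number" is understood anywhere in here.
def mechanical_sqrt(n: float) -> float:
    guess = n
    for _ in range(50):                   # fixed rule, fixed repetition
        guess = (guess + n / guess) / 2   # the only "rule" in the rule book
    return guess

print(round(mechanical_sqrt(352), 4))     # 18.7617, the same answer Siri gave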

At this point, after parading Siri into court, Picard would ask, "What if Siri is conscious in even the slightest degree?" To which the JAG officer would answer in the same way she did with Data's ruling:

"I'm not qualified to make this ruling but yes Siri is sentient and now possesses rights."

Now where is Whoopi Goldberg to tell me I'm a dik for enslaving my cell phone?

SELF PRESERVATION

It's important to realize that Data was likely programmed with instincts for self-preservation, and also to recognize and conform to social norms in order to have a less offensive presence. Data needed to appear as if he had feelings, as if he were human, to survive. If it weren't for those little trinkets, it's likely that Data would've been dismantled.

It doesn't surprise me at all, and it shouldn't surprise anyone else, that a machine meant to live and work alongside humans would be programmed to be less scary, especially one that is the first of his kind. I'm not a big Star Trek fan, but I would expect that after that episode Data would become progressively more human-like to better ensure his own survival.

MORAL RIGHTS

"First of all, I would point that Data does not need "a mind like humans" to have rights. One can easily contend that animals are entitled to rights of varying degrees, but yet they do not think or process information as we do."

Animals have rights (to a certain degree) because of what's known as consciousness. They're granted these rights based on similarities between their brain functions and ours. Consciousness is a biological function. It is billions of microprocessors and many other brain functions that work together in a way that creates consciousness. Just as creating photosynthesis or digestion artificially in a lab is impossible, so is creating consciousness. If it is possible in the future, the Star Trek universe hasn't reached that level of technological advancement.

Though it's hard to pinpoint an exact definition of consciousness, we know it exists, and I feel as if I've come very close to giving it a good definition. I've discussed the semantic, as opposed to syntactic, workings of symbol recognition. I've talked of the various levels of alertness/consciousness that Data doesn't seem to have. I've talked about actually feeling emotions as a chemical release that affects your whole body.

Whereas when a human feels love, his chest gets lighter; when a conscious entity gets mad, they might feel it in the body's fight-or-flight system and feel a rush of adrenaline and blood to their arms and legs. If Data were to show those emotions, it would be a response to a bunch of ones and zeros and not a visceral response.

Consciousness

"I can reasonably inquire as to what it is like to be Data insofar as I can experience the world from his unique perspective. That makes him conscious."

I could ask myself what it's like to be a tree and consider the world from that perspective, but it's not a good way to determine consciousness. I'm not sure it would be possible to truly put yourself in Data's shoes. You don't think syntactically, and you have billions of microprocessors as opposed to the few really big ones Data uses for data collection and retrieval.

The only way you could actually put yourself in Data's shoes is to presuppose consciousness within Data, which makes the argument circular, though in a less obvious, roundabout way.

I think the reason why a very subjective argument for consciousness has to be used like this is because if you talk about a concept like Qualitative character (which is more widely used) that helps to prove consciousness, you'll find Data severely lacking.

"Qualitative character is often equated with so called "raw feels" and illustrated by the redness one experiences when one looks at ripe tomatoes or the specific sweet savor one encounters when one tastes an equally ripe pineapple (Locke 1688). The.......qualitative character is not restricted to sensory states, but is typically taken to be present as an aspect of experiential states in general, such as experienced thoughts or desires (Siewert 1998). If an organism senses and responds in apt ways to its world but lacks such qualia, then it might count as conscious at best in a loose and less than literal sense. Or so at least it would seem to those who take qualitative consciousness in the "what it is like" sense to be philosophically and scientifically central (Nagel 1974, Chalmers 1996)."
http://plato.stanford.edu...

I've already addressed the other arguments in this section with my self-preservation argument, or by dropping ones such as "intelligence," so moving on...

LEGAL RIGHTS

"Legal rights do not require sentience to be bestowed on someone, insofar as things (e.g. corporations) can have them."

The problem with this isn't that I disagree with my opponent; it's that I think the things he is referring to are separate types of things from a mechanical lifelike entity. We wouldn't say my Roomba has rights. A corporation is a collection of individuals, and rights granted to it are really rights granted to sentient beings who have a stake in the company; it would be the same as granting rights to the office of a political candidate. Though technically the rights go to a thing, they are meant to benefit the sentient beings who occupy or control those things.

I don't think my opponent will show me things being granted rights that aren't just an extension of a sentient group or some sort of public office holder (which is always sentient).
Debate Round No. 2
bsh1

Con

Thanks once again to Wylted for this debate! Because there is such a huge amount of overlap between our two arguments on consciousness and on sentimentality, I will address those points in my next speech, where I defend my case. I strongly believe that everything I say will be cross-applicable/interrelated. As a result, I will only be addressing three of my opponent's headings at this time: rights, the Chinese room, and information processing.

RIGHTS

Wylted is correct in that this debate is questioning whether Data has rights, but it is not correct to say that it is my goal to prove that Data should be like humans. Consider that many entities have rights, from animals, to corporations, to groups of people, to natural wonders, and so forth. Not all of these things are human-like, not all of them are biological. So, it is important for clarity's sake to underscore the main point: it is not my job to show that Data is so like humans that he should be given human-like rights, but rather it is my job to show that there is a justification for giving Data some kinds of rights, particularly, "a right to choose."

THE CHINESE ROOM EXPERIMENT

Before I begin my rebuttals, it is important to note that this portion of Wylted's case is essentially the same as the Information Processing portion of Wylted's case. The information processing argument boils down to the idea that "The computer uses the symbols in a syntactic way following a formula and having no understanding (like the Chinese room experiment)." So, in this sentence, Wylted basically concedes that the two portions of his case are the same or, at the very least, very much interrelated. So, by successfully rebutting the Chinese Room Experiment, I can also rebut the information processing argument.

In order to rebut Pro's argument here, I will be offering multiple different, independently functioning arguments against the experiment. If just one of my rebuttals succeeds, that is sufficient to void this large chunk of Pro's case. So, with that said, on to the rebuttals:

1. Pro's Argument Attacks a Strawman

First, let me explain what the Chinese Room Experiment is; it was designed to "challenge the claim that it is possible for a digital computer running a program to have a 'mind' and 'consciousness' in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that...[says]: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'" [1] What I want to emphasize here is that the Chinese Room experiment is designed to show that a machine cannot have a consciousness that is EXACTLY the same as humans'; but that isn't what I have to prove. It is not my job or burden to show that Data is exactly the same as humans in these respects, but merely that he meets broad criteria by which we can grant him a right to choose.

2. The Zombie Reply

"Suppose that, by some mutation, a human being is born that does not have Searle's 'causal properties' but nevertheless acts exactly like a human being. (This sort of animal is called a 'zombie' in thought experiments in the philosophy of mind). This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So therefore, if Searle is right, it is...impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not." [1] The impact of this argument is fairly straightforward: if we assign rights to ourselves, and we are zombies (or cannot disprove that we are zombies), then there doesn't seem to be any distinction between us and a machine of the kind Searle describes, and so we should assign the machine rights as well. (I never thought I would use the word "zombies" on DDO in a serious debate, lol...)

3. The Behavior-Only Reply

"This reply points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior...Nils Nilsson writes 'If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought.'" [1] In other words, Searle's experiment attempts to know how a machine would think, but this isn't something we can possibly know--esp. with more advanced machines. We have to evaluate based on behavior, and Data is certainly human-like enough that he could integrate into our society and earn rights.

4. The Complexity Flaw

"The speed at which human brains process information is (by some estimates) 100 billion operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require 'filing cabinets' of astronomical proportions." [1] In other words, it would be clear to the fluent Chinese speakers that the person with which they were corresponding did not speak fluent Chinese, because the length of his response times would be massive as he researched what symbols he needed to copy down. So, the experiment itself is flawed.

5. The Brain Replacement Example

"In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins." [1] Here again, we see the difficulty with what Searle proposes, insofar as it is unclear when a mechanical brain actually loses consciousness.

6. The Experiment is Based on Intuition

"Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding. For example, Ned Block in his original BBS commentary says “Searle's argument depends for its force on intuitions that certain entities do not think.” But, Block argues, (1) intuitions sometimes can and should be trumped and (2) perhaps we need to bring our concept of understanding in line with a reality in which certain computer robots belong to the same natural kind as humans. Similarly Margaret Boden points out that we can't trust our untutored intuitions about how mind depends on matter; developments in science may change our intuitions." [2]

7. The Robot Reply

"The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger--Searle's example of something the room operator would not know. It seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot--a computer with a body--could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language." [2]

8. The Positronic Brain Argument

Wylted is assuming, as does this example, that Data's brain functions as a normal computer's processor does. Consider: "[b]etter technology in the future will allow computers to understand. Searle agrees that this is possible, but considers this point irrelevant. His argument is that a machine using a program to manipulate formally defined elements can't produce understanding. Searle's argument, if correct, rules out only this particular design. Searle agrees that there may be other designs that would cause a machine to have conscious understanding." [1] Data's positronic brain is one such alternative design that enables him to have consciousness.

"It functions as a central processing unit (CPU) for androids, and, in some unspecified way, provides it with a form of consciousness recognizable to humans." [3] Data is equipped with such a positronic brain [4] and operates in a unique fashion, relying on interplay with anti-elections and positrons. [3] It is not, by any means, what Searle is talking about.

SOURCES

1 - http://en.wikipedia.org...
2 - http://plato.stanford.edu...
3 - http://en.wikipedia.org...
4 - http://en.memory-alpha.org...

Thanks, again! I now turn things back over to Wylted...
Wylted

Pro

RIGHTS

I do want to reiterate my point about rights applying merely to humans, and to animals to a lesser degree, and sometimes not at all (see factory farming practices).

" Consider that many entities have rights, from animals, to corporations, to groups of people, to natural wonders, and so forth."

I will challenge my opponent on these. I'm not sure what he means by natural wonders having rights. A lot of times they are afforded certain protections under the law, but I don't think those would be considered rights. We humans, despite being destroyers, are also in many ways protectors. We're not as good at protecting, but we love to keep things like the Everglades or Niagara Falls beautiful and functioning so our descendants can enjoy them.

However, Niagara Falls and the Everglades don't have rights, no matter how rights are defined.

When referring to non-conscious entities that have rights, such as corporations and groups of people, it's clear these rights are just an extension of the rights of individuals. Rights of corporations are really the rights of shareholders in the corporations. The same goes for rights attached to groups or elected officials, etc.

Animals, despite having rights in many cases, don't have rights in a way that helps my opponent's case. They wouldn't have been arguing over whether a cat owned itself or was owned by the Enterprise, despite the fact that I'd consider a cat a conscious entity. Picard would have been told to shut the hell up and put in the cat's transfer orders, and his career would be over if he didn't.

Cats aren't conscious in the same way as humans, though their consciousness resembles ours to a lesser degree. It's because of this different form of consciousness that cats don't get to choose freedom.

Given that Data isn't even conscious (consciousness is a function of biology), he should have no more rights than cats have, and cats are conscious.

CHINESE ROOM EXPERIMENT

STRAWMAN

My argument doesn't attack a strawman, and my Round 1 argument anticipated this response, if readers recall. I'll restate the argument here that preemptively shoots this one down.

From Round 1

"It could be argued that Data is conscious but it's a different type of consciousness. To define consciousness in this way would be a problem because it's a little dishonest. It would be like saying vanilla is a different type of chocolate and goes back to our innate need to humanize everything. It's just like our inclination to see the shapes of human figures in the dark only to cut on the light and see a coat rack.

Consciousness is a human concept and can only be viewed from that angle. We call animals conscious because we can understand it from a human view and see that as being of resemblance to our own consciousness. Dismissing my arguments by calling AI a different type of consciousness just won't work.

Saying consciousness from artificial intelligence is different than human consciousness in an unrecognizable way is just making a nonsensical statement that carries no meaning."

The entire argument for Data being a conscious entity is what Picard was banking on, and it quite simply fails.

Zombie/behavior

I don't like the metaphysical zombie argument in any debate. It's fun as a thought experiment but is useless beyond that. Occam's Razor, when applied to this argument, shows how silly it is. Sure, I don't know for a fact that Bsh1 is a conscious entity, but I'm not exactly assuming he is a zombie either. We know one thing for a fact: either Bsh1 is a metaphysical zombie or he is a conscious entity.

In order to figure out which, we need to take a look within. A zombie really wouldn't have an internal thought process that involved sensing Qualitative character. For more on what that is, refer back to the quote I used at the end of the last round. If you have this trait, you can know you're not a metaphysical zombie, and if you're not an egomaniac, you can apply Occam's Razor to safely come to the conclusion that other people are conscious entities.

4. The Complexity Flaw

The guy's interactions in the CRA, even if sped up by a factor of a billion, would still show a lack of understanding and a syntactic way of responding to every question. The speed of the operation doesn't change the relevant parts of the argument.

5. The Brain Replacement Example / 8. The Positronic Brain Argument (whatever the f--k that is)

I actually concede this part of my opponent's argument. I don't think it would be impossible for machines to gain consciousness; I just don't believe Data has. He isn't really working with billions of tiny processes working together god knows how. He has significantly fewer, which operate completely differently and syntactically. We know this by seeing his confusion at how bluffing works at the poker game in the beginning of the show, despite his being able to mathematically calculate perfectly what the strength of his hand was.

Later on he is confused at how the thrill of opening a present works and attempts to open it in a way that is optimal (hilarious scene).

So my argument still stands that Data isn't conscious and despite being programmed to appear human in certain ways, he betrays the true way his thinking works in several situations.

CONCLUSION

I do apologize--I'm super pressed for time--but any direct rebuttals of my opponent's arguments I left out are actually addressed by separate arguments I've already made.

The robot reply is already covered with my discussion of the number of processors we have as opposed to robots (even Data) and how that contributes to the biological process known as consciousness.

I look forward to my opponent's next round and hope I gave a satisfactory response, despite the rushed nature.
Debate Round No. 3
bsh1

Con

Thanks, again, Wylted! I will defend my case at this time.

C1. THE CASE BEFORE THE COURT

Wylted asserts that this trial would have actually been far more drawn out than what was shown in the episode. However, we are given no reason to believe this. In fact, Riker specifically says he will only call one witness--implying that this was all the evidence Riker intended to offer. This alone seems to indicate that the trial portrayed in the show is quite accurate to what would actually have occurred. Unless Pro can present some canonical evidence to the contrary, we have to assume that the simplest explanation (that the episode correctly portrayed the proceedings) is true.

I would, at this time, remind everyone that Pro himself said: "Riker put up a bad argument for his side." So, given that the side arguing against Data's having rights gave a bad argument, Picard just needed to provide a better argument. Maddox seemed, on the stand, to concede that Data was self-aware and that he was intelligent. Picard also demonstrated, through the transcript I provided earlier, that Data was self-aware. So, the question merely was whether Data had consciousness. When questioned about it, Maddox was unable to give a response that denied that Data could be conscious. Given the possibility that Data could have met the third criterion (no evidence was provided to the contrary), and the fact that Data met the first two criteria, the court made the best decision it could given the facts before it. It could have denied a potentially sentient being rights (doing it a wrong)--and, given the very fact that the arguments against its having rights were "bad," the court erred on the side of caution, giving Data the chance to "explore those questions for himself."

Picard's case may not have been good, but it was the better argument made in the room. As for Louvois' supposed conflict of interest, the fact that she initially ruled summarily against Data, and the fact that she was not the type to give in to emotions (she had earlier prosecuted Picard himself), mean that she acted fairly.

Frankly, it also seems as if the court should err on Data's side of this dispute, as it would be better to incorrectly grant him rights than to incorrectly deny him rights (thus instituting a regime of slavery contrary to the Federation's core moral and legal values).

Pro later attempts to demonstrate that Siri is intelligent. But intelligence is more than the mere ability to accrue and regurgitate factoids. Maddox himself implies that the ability to learn is important to intelligence. Learning is more than just software updates; it is experiential, observational, memorizational, etc. Siri can get new information, but she cannot "learn" in this kind of way. So, Pro's example itself is ridiculous.

Regarding self-preservation, this is also an incorrect piece of analysis from Pro. Data actually kills himself to save the rest of his crewmates, indicating that he wasn't driven by some overriding desire to survive. [1] Moreover, humans are guided heavily by a sense of self-preservation, and this pushes us to conform. Conformity motivated by self-interest is not a good basis to deny someone rights, because humans have rights and we conform for those same reasons. Thirdly and finally, you could also argue that programming Data with a motivation to live is no different from creating in humans a biological drive to survive as well.

C2. MORAL RIGHTS

Pro does not challenge that, as far as moral rights are concerned, the criterion for whether Data has them is sentience. Pro also does not challenge the four elements that I say sentience entails: (1) self-awareness, (2) intelligence, (3) consciousness, and (4) sentimentality. Extend these points. If I can show that Data is sentient, I have won the debate.

SC1. Self-Awareness

Pro does not rebut this portion of my case, so I can only conclude that Pro is in agreement with the idea that Data is indeed self-aware. Extend this point.

SC2. Intelligence

Again, Pro does not rebut this portion of my case, so I can only conclude that Pro is in agreement with the idea that Data is indeed intelligent. Extend this point. At this juncture, half of the criteria needed to demonstrate that Data is sentient have been met.

SC3. Consciousness

Pro writes, "I could ask myself what it's like to be a tree and inquire the world from that perspective but it's not a good way to determine consciousness. I'm not sure it would be possible to truly put yourself in Data's shoes...The only way you could actually put yourself in Data's shoes is to presuppose consciousness with in Data which makes the argument circular "

First of all, Pro gives us no reason to assume that a tree isn't conscious. Pro just seems to intuitively believe that a tree isn't conscious, and thus claims that the "what is it like" criterion is faulty because it could lead us to believe that the tree is conscious. But intuition is not always correct. My intuition tells me that things cannot be in two places at once; physics tells us differently. My intuition tells me that the Sun revolves around the Earth; astronomy tells us differently. So, the tree example really just falls flat.

Pro then claims that I cannot imagine what it is like to be Data. Well, neither can I imagine what it is like to be a bat, since I have no echo-locatory senses, and could not reasonably imagine what it is like to have them. But this is really just a strawman on Pro's part, because the standard is not whether I can imagine being something else, but whether there is something that it is like to be that other thing. As my earlier source put it: "a being is conscious just if there is 'something that it is like' to be that creature, i.e., some subjective way the world seems or appears from the creature's mental or experiential point of view." There is some experiential way in which a bat sees the world, so it passes the test. Similarly, there is some experiential way in which Data observes the world, so he, too, passes the test.

Pro's claim about circular argumentation is also off-base. I could imagine what it is like to be a rock--I would imagine nothingness, because a rock cannot experience anything. So, putting myself in the shoes of another doesn't presuppose consciousness. But, Pro's claim is also a strawman for the exact same reason his earlier claim was a strawman: "the standard is not whether I can imagine being something else, but whether there is something that it is like to be that other thing."

Next Pro offers a competing standard of consciousness called Qualitative Character. I will offer three rebuttals to this standard:

1. Pro gives us no reason to prefer this standard over the one I offered, other than to say that it is more widely used (which is fallacious, because that doesn't tell us whether the standard is actually more accurate.)

2. The idea of qualia relies on there being inherent essences of experiences, which has yet to be established in this debate. In fact, it is possible that no such essences exist, and Pro is relying on a faulty appeal to intuition to establish their existence. It is a mistake to "attempt to reify 'feeling' as an independent entity, with an essence that's indescribable. As I see it, feelings are not strange alien things. It is precisely those cognitive changes themselves that constitute what 'hurting' is--and this also includes all those clumsy attempts to represent and summarize those changes. The big mistake comes from looking for some single, simple, 'essence' of hurting, rather than recognizing that this is the word we use for complex rearrangement of our disposition of resources." [2]

3. Data meets this standard. "[Q]ualia is 'an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us.'" [2] Pro's standard is actually remarkably similar to mine, in that if there is something that it is like to be X, there are ways in which the world seems to X. For example, a cave seems different to a bat than it does to me, but since a rock cannot experience the cave, there is no way the cave seems to the rock. So, insofar as there is a way things seem to Data, Data passes this standard of consciousness.

SC4. Sentimentality

Again, Pro does not rebut this portion of my case, so I can only conclude that Pro is in agreement with the idea that Data is indeed sentimental. Extend this point. At this juncture, 3/4ths of the criteria needed to demonstrate that Data is sentient have been met.

C3. LEGAL RIGHTS

Sure, a corporation is a collection of people, but, in itself, it is not a person. Sony Pictures cannot, itself, be self-aware or have a consciousness or be intelligent--it is an organization, a non-biological entity. But we recognize that it has interests. We don't grant it rights simply because it is a group of people (otherwise any group of people could have a collective set of rights), but because it has interests.

Pro NEVER rebuts that interests are the basis of legal rights. Insofar as Data has interests, he can be awarded legal rights that protect those interests. A Roomba doesn't have interests that can be protected. Data (a) can benefit from a legal protection (exercise his ability to choose, participate in society, engage in pastimes, fulfill his whims), and (b) can exercise that right or seek its enforcement. Thus, Data is the kind of entity to which legal rights can be given.

FINAL NOTE

I just also wanted to point out that Pro's repeated use of expletives during this debate (despite the fact that they are partially obscured) may violate Rule No. 4, though I will leave the interpretation of this issue to the judges' discretion. I just wanted to point this out.

SOURCES

1 - http://en.memory-alpha.org...
2 - http://en.wikipedia.org...

Thank you! With that, I now turn the floor back over to Pro...
Wylted

Pro

Thank you for allowing me to debate this with you, Bsh1. This was a fun debate, and I wish all my debates were like this.

CASE BEFORE THE COURT

"Unless Pro can present some canonical evidence to the contrary, we have to assume that the simplest explanation (that the episode correctly portrayed the proceedings) is true."

Well then, I'd say that my analysis that the judge didn't have enough info to make a ruling is fair, and she should have gone with what was originally decided as opposed to setting a new precedent.

I assume she knew she had too little info to go on, which is why she conceded that she was unqualified to make a ruling just before, ironically, making a ruling.

"I would, at this time, remind everyone that Pro himself said: "Riker put up a bad argument for his side." So, given that the side arguing against Data's having rights gave a bad argument, Picard just needed to provide a better argument"

I'd say both sides made a bad argument, which is beside the point. Picard was fighting against the status quo and needed to overcome a heavy burden of proof.

If his arguments can be applied to Siri, then logically it follows that they can be overly applied and that the judge should have maintained her original ruling.

" It could deny a potentially sentient being rights (doing it a wrong)--and, the very fact that the arguments against it having rights were "bad," the court erred on the side of caution, giving Data the chance to "explore those questions for himself."

I don't think it did err on the side of caution. They are setting a precedent with this decision and lacked conclusive (or any) evidence of Data's sentience.

When stating something as extraordinary as a machine having consciousness, you had better have some extraordinary evidence to match; otherwise you risk future starships refusing to hand over far inferior androids or machinery under the same reasoning applied to Data.

If these same arguments can be applied to my cellphone, then the court was being hasty. I can see how it's easy to anthropomorphize Data, but it's inappropriate to do so.

"learning is more than just software updates, it is experiential, observational, memorizational, etc. Siri can get knew information, but she cannot "learn" in this kind of way. So, Pro's example itself is ridiculous."

Believe it or not, Siri actually learns experientially and observationally. When you speak to her, she learns your accent as well as the things unique to your voice. She then sends that to a central database that uses that information to understand other people more easily as well. http://www.siriuserguide.com...

That's her senses taking in your voice and her observing it to pick out key characteristics, as well as learning new ones.

"Regarding self-preservation, this is also an incorrect piece of analysis from Pro. Data actually kills himself to save the rest of his crewmates, indicating that he wasn't driven by some overriding desire to survive."

I'm not even sure why this is brought up. Data's programming to have some sense of self-preservation would contribute to his main goal of helping humans in the best way possible. This means that, though self-preservation may necessarily be programmed in, it's not the machine's main function, and it takes a back seat when the good of the whole is at stake.

MORAL RIGHTS

Self Awareness

I've addressed this point by giving you a transcript of a conversation I had with Siri, in which she speaks in a way that could indicate self-awareness just as Data did. If Siri is capable of communicating in a way that would indicate introspection, and of relaying to me how she is feeling, then common sense tells us that self-awareness is pretty hard to demonstrate, and Data answering a few questions, just as Siri did, proves nothing.

Intelligence

Agreed, Data is pretty smart. Almost as smart as my Siri. Remember that Data has to meet all the criteria my opponent mentions, not just a few.

Consciousness

This is the only portion of my opponent's case worth getting into because it's really where we butt heads.

"pro's claim about circular argumentation is also off-base. I could imagine what it is like to be a rock--I would imagine nothingness, because a rock cannot experience anything."

That's right, you can't apply this to a rock in this sense. That's because you aren't presupposing a consciousness into the rock as you would a bat.

If you presupposed consciousness onto the rock, then you could imagine yourself as a rock feeling the warmth of the sun.

" Pro gives us no reason to prefer this standard over the one I offered, other than to say that it is more widely used (which is fallacious, because that doesn't tell us whether the standard is actually more accurate"

The standard I mentioned is better because it's how you would come to understand your own consciousness. It's through introspection that you learn it, and you know you're conscious because of its qualitative character.

The same can't be said about imagining a bat which allows you to project your beliefs about a bat onto your imaginings.

LEGAL

" Pro NEVER rebuts that interests are the basis of legal rights. Insofar as Data has interests, he can be awarded legal rights that protect those interests. A Roomba doesn't have interests that can be protected."

I most certainly do, in the way you define it. It's the interests of conscious beings that are protected, most notably humans.

It's a collection of humans whose rights are protected. Conscious, living beings. We know that consciousness is a biological phenomenon, such as digestion or photosynthesis, and rights aren't granted outside of conscious entities, regardless of how advanced the technology is or how much it is programmed to mimic human consciousness.

FINAL NOTE

Even mentioning rule number 4 at this point is petty in my opinion. The spirit of the rule is to prevent hostile exchanges, trolling or worse.

In a select-winner system, it's only really bad decorum or incivility that would be punished.

I urge voters to reread my last round to remind themselves of what I haven't brought up here.

Thanks for reading.
Debate Round No. 4
bsh1

Con

Thanks to Wylted for a great debate. I will, at this time, rebut Pro's case and crystallize/give some reasons to vote Con.

RIGHTS

Pro spends most of his time discussing natural wonders here, but really that's just a single example. I think that the example of corporations is sufficient to make my point here. Pro contends that the "[r]ights of corporations are really the rights of shareholders." But this doesn't seem true. Just because shareholders benefit from the fact that corporations have rights doesn't mean that the shareholders are the right-holders in themselves. Insofar as we recognize that corporations, which are non-conscious, non-biological things, are the kinds of entities that can be right-holders unto themselves, it seems that it is not ridiculous to consider giving rights to an android.

The reason that a cat doesn't have a right to choice, furthermore, is not that it isn't conscious, but that it doesn't have a meaningful level of autonomy. Humans have a high degree of autonomy and independent action that cats just don't, so we have a much greater interest than a cat in protecting our autonomy. But it would be wrong to torture a cat, just as it would be wrong to do so to a human, because both of us can feel the pain.

But even if you buy nothing I have said here, it doesn't really matter: since Pro has already conceded the four criteria of sentience, we can use those to ascribe Data moral rights.

THE CHINESE ROOM EXPERIMENT

Pro concedes that much of his case is predicated on this experiment. If this example is sufficiently undermined, his arguments about information processing and even his arguments that consciousness is a biological process are defeated. Regarding the latter, Pro writes, "You don't create a consciousness with a computer program." Pro only has two pieces of evidence for that: the experiment and the many-processors argument (which I rebut with my positronic brain argument). So, frankly, by defeating this example, almost all of Pro's case will be wiped out.

It is also important to reiterate here that I just need one of my rebuttals against Pro's argument here to succeed in order to void the example. Each of my objections functions independently and individually invalidates the experiment. To that end, I may not defend all of my objections, but simply because I drop one or two does not mean I have lost. In fact, I want to focus on those objections that most clearly demonstrate how I have successfully defeated Pro's logic.
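Before turning to the numbered objections, it is worth seeing just how little machinery the experiment actually involves. Here is a minimal sketch of the room as pure symbol manipulation, written in Python; the rulebook entries are invented solely for illustration:

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation:
# the "room" maps input symbols to output symbols via a rulebook,
# with no understanding anywhere in the process. The entries below
# are invented for illustration.

RULEBOOK = {
    "你好吗?": "我很好。",            # "How are you?" -> "I am fine."
    "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Return the rulebook's canned reply; the room never understands."""
    return RULEBOOK.get(question, "对不起。")  # fallback: "Sorry."

print(chinese_room("你好吗?"))  # a fluent answer, with zero comprehension
```

The live question, then, is whether Data's positronic brain is anything like this lookup table; as the objections below show, the experiment cannot settle that.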

1. Pro's Argument Attacks a Strawman

Recall what I said last round: "What I want to emphasize here is that the Chinese Room experiment is designed to show that a machine cannot have a consciousness that is EXACTLY the same as humans; but that isn't what I have to prove." Pro then responds by claiming that saying there are different kinds of consciousness is "like saying vanilla is a different type of chocolate and goes back to our innate need to humanize everything."

Unfortunately, Pro doesn't give much of a warrant for this analysis. His only piece of logic to support this is that consciousness is a human concept, so it can only be viewed from a human angle. But if it is a human concept, why can we not alter how we view and understand it? Surely, if it is humans who decide what consciousness is, then we can change what the notion means whenever we wish. Moreover, just because it is a human concept doesn't mean that humans cannot recognize a gradient of consciousness; even in real life we acknowledge that there may be varying levels of consciousness, with people who are mentally disabled having less than someone who is fully aware, for example.

Consciousness is not so much like vanilla and chocolate, but rather it is like 50 Shades of Grey. Humans like one shade, but that doesn't mean we cannot conceptualize or appreciate the other shades of kinkiness out there.

2, 5, 6. The Behavior-Only Reply, The Experiment is Based on Intuition, and The Robot Reply

These are all DROPPED by Pro. Any one of these, as per my earlier reasoning, is sufficient to wipe out Pro's case. Please extend all of these. Just to briefly review each:

The behavior-only reply shows that Searle's experiment attempts to know how a machine would think, but this isn't something we can possibly know. Since we cannot actually know how a machine would think, we have to evaluate based on its outward behavior, and Data is certainly human-like enough that he could integrate into our society and earn rights.
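To illustrate the force of this reply, consider a small sketch (purely hypothetical; both "agents" below are invented stand-ins, not anyone's real system). An outside observer sees only inputs and outputs, so two agents with radically different internals can be indistinguishable:

```python
# Any test we can actually run sees only behavior: two agents with
# totally different internals are indistinguishable when their
# outputs match. Both agents here are hypothetical stand-ins.

def lookup_agent(prompt: str) -> str:
    # Pure canned lookup, Chinese-Room style.
    return {"How do you feel?": "I feel fine."}.get(prompt, "I am not sure.")

def rich_agent(prompt: str) -> str:
    # Imagine arbitrarily deep inner processing here; from the outside,
    # only the returned string is ever observable.
    return "I feel fine." if prompt == "How do you feel?" else "I am not sure."

prompt = "How do you feel?"
# An external judge comparing replies cannot tell the two apart.
print(lookup_agent(prompt) == rich_agent(prompt))  # True
```

Since internals are inaccessible, outward behavior is the only workable standard, and by that standard Data qualifies.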

The intuition argument basically shows that the experiment is a flat-out appeal to intuition, and thus establishes nothing, as intuition is unreliable. Specifically, it appeals to "the intuition that a computer (or the man in the room) cannot think or have understanding."

Finally, the robot reply argues that, "[i]t seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot--a computer with a body--could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language."

7. The Positronic Brain Argument

What is important to note here is that a positronic brain actually does create human-like consciousness, as my sources note. Sure, Data may do some things in a non-human way (e.g., opening presents in an optimal fashion), but this isn't evidence of his lack of consciousness; rather, it is an example of how he is still learning and adapting.

Finally, since neither Pro nor I know what a positronic brain entails, it isn't possible for anyone to claim, as Pro does, that Data "isn't really working with billions of tiny processes working together." So, this claim by Pro is just totally lacking in substantiation.

It is also important to point out a big contradiction in Pro's argument. Pro writes, "I don't think it would be impossible for machines to gain consciousness." So, if machines can gain consciousness, clearly consciousness is not solely a biological process.

VOTING ISSUES

VI1. THE CHINESE ROOM EXPERIMENT

Frankly, with this portion of his case refuted, Pro doesn't have much of a case left at all. This really just takes huge portions of offense away from Pro. With three of my objections against his case dropped, and two others still viable against him, it is really not possible to buy into much of what Pro argues. Most importantly, his first two headings (The Chinese Room Experiment, Information Processing) are totally defeated. Additionally, his argument about consciousness being biological is also taken out, since it relies in large part on the experiment argument, and since it relies on Pro being able to prove that Data's brain isn't made of millions of processors (which Pro cannot prove).

VI2. MORAL RIGHTS AND SENTIENCE

Pro agrees to the four criteria of what constitutes sentience (self-awareness, intelligence, consciousness, sentimentality), and he agrees that what is sentient deserves moral rights. Consequently, if I have shown Data to be a sentient being, I have shown him to be a right-holder. At this time, I will review each of the criteria and explain how I have met them in this debate.

Self-awareness: Recall that self-awareness is the ability to engage in "introspection and the ability to recognize oneself as an individual separate from the environment and other individuals." Siri may be able to describe its location and talk in the first person, and it may even be able to recognize itself as separate from the environment, but it cannot reflect introspectively. As I noted in R2, "Data is also able to critically examine his own thoughts and ideas, and to reflect on his goals and priorities, demonstrating a power for introspection." So, Data is self-aware. Moreover, even if Pro shows Siri to be self-aware, that doesn't mean that suddenly Siri ought to have moral rights, since it fails on other points of the checklist, whereas Data doesn't. So it is absurd to imply that I would somehow grant rights to Siri.

Intelligence: Pro concedes that Data meets this criterion.

Consciousness: Data passes both criteria presented in the round for consciousness (what it is like, and qualitative character). Firstly, "what it is like" isn't circular because, as I said last round, "Pro's claim is also a...strawman: 'the standard is not whether I can imagine being something else, but whether there is something that it is like to be that other thing.'" Secondly, Pro never rebuts the fact that Data meets the "what it is like" criterion; rather, he just attacks the criterion itself. So, if the criterion works, since Pro doesn't challenge the fact that Data meets it, Data is conscious. Thirdly, Pro never challenged the fact that Data also met Pro's qualitative character criterion, so Data is proven conscious under that standard as well, even if you don't buy into the "what it is like" metric.

Sentimentality: First, Pro talks about Data's instinct of self-preservation, but then commits a goal-shifting fallacy when he says, out of the blue, that the good of the whole should trump. Really, all of this is just speculative on Pro's part. Given the evidence that I presented in R2, particularly Data's keepsakes, it seems that Data has sentiments. Remember, we cannot be sure how Data thinks, so we have to evaluate based on outward behavior (as noted earlier).

For all of these reasons, I ask you to please VOTE CON. Thank you!
Wylted

Pro

No round as agreed upon.
Debate Round No. 5
130 comments have been posted on this debate.
Posted by whiteflame 4 months ago
I agree that, if the logic is flimsy, it can be easy to dismiss even a large argument in a sentence or two. I also agree that you can present a great deal of evidence in a short space, though typically, if you're presenting evidence, I'd say it's better for everyone if you actually take the time to draw from that evidence as part of your argument, which usually takes more than a single sentence. As for your example, I'm not quite clear on what the resolution would be in that instance, so I can't say whether or not that response would be effective. Nonetheless, I recognize that even if someone presents a very long and eloquent argument, there can be holes in some of the links that can easily be exploited, so long as they are dependent on those links for their arguments to work. I just don't think that those kinds of short responses accomplish as much when you're up against a point that has bolstered links.
Posted by Wylted 4 months ago
Searle's arguments feel unnatural to make also. I don't think consciousness is actually a real thing. We are basically complex biological machines, whose brains give us the illusion of consciousness. The reason scientists and philosophers have never found the "man in the brain", is because he doesn't exist. Data is no more or less conscious than us, or a can of Pepsi.
Posted by Wylted 4 months ago
I wouldn't mind redoing something like this in the future, shifting away from the Chinese room argument to focus more on how we should define rights, philosophically speaking. It would be a similar argument to the one I would use for an animal rights debate, but it would be more fun applied to androids.
Posted by Wylted 4 months ago
Actually, never mind. I did start rereading it a bit. Your parts, not mine; you put forward some good arguments. I'm trying to remember my responses and can't. I can't imagine being able to have a good response to your argument using philosophical zombies, for example. Maybe I had a good response to that, but I doubt it. I like how everyone said it was a close debate. It sure as hell doesn't look close based on the score, but maybe it was.
Posted by Wylted 4 months ago
Oh okay, maybe I will reread it later. I know in my debate with YYW, people said I kept dropping his points about prices of things being higher if a FairTax was implemented, when earlier in that debate I had already shown studies proving that prices would drop or remain stable, given all the hidden taxes within a product that would be removed if the FairTax was implemented. It was his conjecture vs. my solid proof, and conjecture seemed to win, based on the fact that I "dropped" the argument.

I'll buy your point that I dropped arguments in this, but I typically make it a point never to do that. Perhaps I thought one of the contentions I had made already acted as a rebuttal, so I did not offer a response and expected judges to figure out how.

If you remember an argument I dropped do you mind pointing it out, so I can do a quick scan to see how I missed it?
Posted by bsh1 4 months ago
Wylted, this wasn't a case of addressing things in a few words--though Whiteflame is right. This was a case of you simply not addressing several arguments with any words.
Posted by Wylted 4 months ago
I feel like it only takes a few words to dismiss a point in many circumstances; more elaboration by your opponent should not be an automatic disadvantage. A refined absurdity is still an absurdity. For example, if my opponent tells the story of how the Book of Mormon came to be, and is very elaborate about the witnesses and the physical evidence that disappeared, merely providing documents showing that Joseph Smith was a convicted con artist who specialized in faking antiquities should be enough, and that really only takes a sentence.
Posted by whiteflame 4 months ago
Touching on a full argument with a few words is almost never enough, Wylted, and it's not because judges overlook it, though that may be part of the problem. The main issue is that, when your opponent spends more time elucidating an argument, it usually includes the addition of warrants, evidence, and links that bolster the point. When you respond with a few words essentially dismissing the whole thing on the basis of one potential fault that you don't fully explain, of course the voters are going to prefer the spelled out argument that includes a great deal of logical support. We're going to ignore the few words you put on it because that's basically equivalent to dropping the point.
Posted by Wylted 4 months ago
People keep accusing me of dropping arguments when I don't. I am not going to reread this, but you are probably referring to something I touched on in a few words that the judges overlooked. That's why I sometimes over-elaborate on things when three words will do. People tend to ignore stuff you don't drill into their heads.
Posted by bsh1 4 months ago
Well, what you did wrong was dropping most of my arguments...
4 votes have been placed for this debate.
Vote Placed by debatability 2 years ago
Who won the debate: bsh1
Reasons for voting decision: con wins on moral/legal rights; rfd in comments
Vote Placed by Raisor 2 years ago
Who won the debate: bsh1
Reasons for voting decision: RFD in comments
Vote Placed by whiteflame 2 years ago
Who won the debate: bsh1
Reasons for voting decision: Given in comments.
Vote Placed by YYW 2 years ago
Who won the debate: bsh1
Reasons for voting decision: I think requiring Picard to make some "deep philosophical arguments" is probably a bit excessive, especially if we're considering that it is not necessary to have a mind like humans to have rights (as is the case, because we extend rights to animals), although PRO is correct to distinguish the kinds of rights animals have from the rights that humans have. They are not the same, but so too might the rights that androids have be different from people's and yet still be rights in fact. I'm not sure that a bio/physiological response to stimuli is both necessary and sufficient for rights, though. That said, if the ability to process information is enough, then CON must win, and PRO didn't really have a direct response to that other than to distinguish the data-processing methods of humans and computers. This is a very close debate, but I think that CON takes the win here.