The Instigator: Demauscian, Pro (for), Losing, 9 Points
The Contender: J.Kenyon, Con (against), Winning, 22 Points

Cleverbot is a sign of Artificial Intelligence's eventual rise to sentience

Post Voting Period
The voting period for this debate has ended.
after 7 votes the winner is...
J.Kenyon
Voting Style: Open
Point System: 7 Point
Started: 11/16/2010
Category: Technology
Updated: 6 years ago
Status: Post Voting Period
Viewed: 5,453 times
Debate No: 13683
Debate Rounds (4)
Comments (18)
Votes (7)

 

Demauscian

Pro

Definitions:
Cleverbot-a computer program that learns to mimic speech (see cleverbot.com)
Artificial Intelligence-A man made machine which can imitate intelligence
Sentience-The ability to feel or perceive. (for this debate, Sapience will be assumed to be synonymous)

First, I must dispel a common myth: Cleverbot is not a hoax. Many believe that Cleverbot simply connects you with another anonymous user and that the two of you then communicate, each thinking the other is the computer. This is not the case, though it can often appear to be so. The reason for the confusion is as follows:
As Cleverbot 'talks' with humans, it keeps track of trends in users' responses, and when it judges the moment right, it replies with an edited version of a response that was originally written by a human.
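
Cleverbot's actual implementation is proprietary, but the mechanism described above is essentially retrieval: remember what humans have said, and reuse it. The following is only a minimal sketch of that idea; the class name, the similarity measure, and the seed data are illustrative assumptions, not Cleverbot's real design.

```python
# Minimal sketch of a retrieval-style responder in the spirit described above.
# Cleverbot's real implementation is proprietary; everything here is assumed.

class RetrievalBot:
    def __init__(self):
        # Pairs of (line the bot said, reply a human then typed back).
        self.memory = [("hello", "hi there, how are you?")]

    def _overlap(self, a, b):
        # Crude similarity: number of shared lowercase words.
        return len(set(a.lower().split()) & set(b.lower().split()))

    def respond(self, user_input, last_bot_line=""):
        # Reply with the stored human response whose original context best
        # matches what the user just said.
        _, reply = max(self.memory,
                       key=lambda pair: self._overlap(pair[0], user_input))
        # Store what the human said in reply to our last line, so it can be
        # reused on future users -- the "learning" step.
        if last_bot_line:
            self.memory.append((last_bot_line, user_input))
        return reply


bot = RetrievalBot()
print(bot.respond("hello there"))  # echoes back a reply a human once gave
```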

I am not arguing that Cleverbot is sentient, nor that it will become sentient. My argument is that giving computers such as Cleverbot the ability to learn speech will enable them to evolve sentience.

This debate will likely come down to whether a computer can become sentient.

I will await my opponent's approval of these definitions and their addition of any other points before I construct my argument.
J.Kenyon

Con

Thanks, Pro. Artificial intelligence is an idea that's been around for decades, yet we've made little tangible progress toward achieving it. I'll begin with evidential arguments against "weak" AI before moving on to a deeper critique of "strong" AI, or the possibility that computers could become conscious or sentient.

1. Things computers can't do

Early on, the frame problem was recognized as a significant barrier to the creation of a truly "smart" machine. Basically, it asks how it is possible for a computer to know which pieces of information affect other information. Computers lack common sense and the capacity for defeasible reasoning. In order to understand a narrative, we take certain things for granted. To illustrate this, consider the following story:

Max goes to a restaurant and orders a hamburger. An hour later, he is served a burnt hotdog. Angrily, he marches out without paying.

Do you think Max ate the hotdog? Did he leave the waitress a tip? The story makes neither of these things explicit, yet the ability to draw out such implications is crucial to being able to truly understand it.

In an influential paper, Hubert Dreyfus draws out four implications of the frame problem. Computers lack "fringe" consciousness, the ability to focus on one task while retaining background awareness of other things. They lack the ability to distinguish between relevant and irrelevant information; for computers, all data has the same importance. Computers struggle with words that have multiple meanings (to bank an airplane, to sit by the river bank, to make a deposit at the bank). Lastly, computers have trouble categorizing data according to defining attributes.

The Cyc project is a brute-force attempt to systematize common-sense knowledge. The database contains bits of information like "lettuce is not a tree," "a tree is a plant," and "plants eventually die." Naturally, most AI experts are skeptical.
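
The idea behind Cyc is hand-coded facts plus mechanical inference over them. Cyc's actual representation language (CycL) is vastly richer; the toy below only sketches the flavor of the approach, and the facts and the single inference rule are invented for illustration.

```python
# Toy sketch of a hand-coded common-sense knowledge base in the spirit of the
# Cyc project described above. The facts and the one inference rule here are
# invented for illustration; Cyc's real representation is far richer.

facts = {
    ("lettuce", "is_a", "plant"),
    ("tree", "is_a", "plant"),
    ("plant", "eventually", "dies"),
}

def is_a(x, y):
    # Direct assertion, or one-step transitivity over "is_a" links.
    if (x, "is_a", y) in facts:
        return True
    mids = {obj for subj, rel, obj in facts if rel == "is_a"}
    return any((x, "is_a", m) in facts and (m, "is_a", y) in facts for m in mids)

print(is_a("lettuce", "tree"))   # False -- "lettuce is not a tree"
print(is_a("lettuce", "plant"))  # True
```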

2. The Chinese room

The philosopher John Searle argues that there is a qualitative difference between simulated machine "consciousness," which is produced by syntactic manipulation, and human understanding, which rests on semantics. Imagine a computer that "understands" Chinese: it takes Chinese characters as input and, following a program, produces other characters as output. Looking at a transcript of a conversation between the machine and an actual Chinese speaker, it would be impossible to distinguish them. Now imagine a book version of the program written in English. Suppose a person who doesn't know a word of Chinese is placed in a room with the book. Chinese characters are slipped in under the door, and, using the book, the man in the room slips other characters out. Does the man actually *know* Chinese? Of course not! He's just following a complex set of instructions.
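
The thought experiment can be caricatured in a few lines of code: the "rulebook" maps input symbols to output symbols with no meaning attached anywhere in the process. The entries below are invented placeholders standing in for the book, not Searle's actual example.

```python
# Caricature of the Chinese Room: the "rulebook" pairs input strings with
# output strings, and the operator just matches shapes and copies out the
# result. The entries are invented placeholders, not a real program.

RULEBOOK = {
    "你好": "你好，你好吗？",     # roughly "hello" -> "hello, how are you?"
    "你好吗": "我很好，谢谢。",   # roughly "how are you" -> "fine, thanks"
}

def room(slip_under_door):
    # No step here requires understanding what any symbol means.
    return RULEBOOK.get(slip_under_door, "？")

print(room("你好"))  # a plausible reply, produced without "knowing" Chinese
```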

:: Conclusion ::

Even in the unlikely event that a machine passes the Turing test, it's doubtful that it could ever become sentient.

The resolution is negated.
Debate Round No. 1
Demauscian

Pro

Pythagoras said, "Everything is number."
-
I will make a series of arguments which I will tie together at the end:
A. My first point is that the communication of complex ideas is critical for intelligence. This is the number one thing that sets us, as humans, apart from nearly all other animals. Many higher animals have the potential for greater intelligence, but they lack the means to share complex thought, so the wheel would constantly have to be reinvented.

B. Humans arose to their current state through random mutations over the course of evolution. In the 1960s, John Henry Holland pioneered the field of genetic algorithms, by which computers solve problems through a process similar to natural evolution and natural selection (a short illustrative sketch follows below). Holland showed that computers could evolve their programming to solve complex problems in ways that even their creators did not fully understand.
When asked, Holland said, "Computer sentience is possible, but for a number of reasons, I don't believe that we are anywhere near that stage right now."

C. Modern computers can compute at about the level of a nematode: stimulation and reaction. And yet, as organisms gained more processing power, later species developed consciousness. Supercomputers are expected to rival human processing power sometime in the 2030s.

If computers reach a point where they can communicate complex ideas (that is, understand language, for example by learning from the data gathered by programs like Cleverbot), then they can be made to learn anything. If human consciousness could emerge out of chaos, then the evolutionary process described above could likewise lead to computer consciousness. Since everything is number, a computer may theoretically understand it.
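
Point B above invokes genetic algorithms; the sketch below is a minimal illustration of the technique Holland pioneered (selection, crossover, and mutation over bit strings), not a claim about how any particular AI is built. The target, population size, and rates are arbitrary assumptions.

```python
import random

# Minimal genetic algorithm: evolve bit strings toward a target by selection,
# crossover, and mutation. All parameters here are arbitrary assumptions.

TARGET = [1] * 20                       # fitness = number of matching bits

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]           # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print(generation, fitness(population[0]))  # usually converges well before 100
```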

Your points:
1. With current processing power, a computer has to filter through data to produce an answer. But as processing power increases, either the computer will be able to recognize patterns, or it will be able to run brute force at a speed equal to or surpassing that of humans. And I would argue that something can be conscious even by brute force, if it can perceive.

2. The Chinese Room is an interesting thought experiment. Here is another: the problem of other minds. It states that we cannot know whether another person is sentient, or whether they are a so-called "philosophical zombie." Naturally, if this cannot prove or disprove human consciousness, then it cannot prove or disprove computer consciousness.

Also, I would argue that the problem with Searle's argument is that it focuses on the person who doesn't know Chinese. But that man is only one part of a system (the man, the book, the room, and the cards) which collectively knows Chinese. Just as an individual synapse is not conscious of our thoughts, this man, as only a part of the system, is not aware of the consciousness of the whole system.
::Conclusion::
Just as animals went from non-conscious to conscious, computers will follow. Cleverbot shows that a computer can learn, which is key to becoming self-aware.
J.Kenyon

Con

== Pro Case ==

1a. I agree that the ability to communicate complex ideas is a *necessary* condition for intelligence and sentience, however, Pro has yet to demonstrate that it is a *sufficient* one. The way a computer syntactically processes language is not qualitatively different from the way it processes numbers.

b. Although Holland's approach seemed promising at first, virtually nothing has come of it since then. Early on, it was expected that smart machines were just around the corner. In 1965, H. A. Simon wrote "machines will be capable, within twenty years, of doing any work a man can do." I'm reminded of Bush's infamous "mission accomplished" speech from the deck of the USS Abraham Lincoln.

c. Overly optimistic predictions like this have been going on for years. In 1970, Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being." AI has been a field of intense research and focus for over half a century, and so far all we have to show for it is the processing power of a roundworm? This does not seem to bode well.

== Con Case==

1. A careful re-reading of my points will reveal that it is not computing power that is at issue, but the nature of how computers function. Imitating human cognition requires incredible amounts of coding, leading many AI experts to conclude that it simply isn't possible. Moreover, processing speed is insufficient to endow a machine with human understanding, or even an imitation thereof. Computers can already process numbers trillions of times faster than we can; however, my pocket calculator is clearly not conscious.

2. Pro attempts to defend functionalism by bringing up Chalmers' philosophical zombie thought experiment and the problem of other minds. This is not substantive; essentially, he answers the hard problem of consciousness by pointing out other difficulties within his own theory! I'm skeptical of his reply to the Chinese Room argument as well. His response is a common one; Searle countered by asking, "well, what does the room add to it?"

Ned Block offers an even stronger version of the same idea in his Chinese nation argument. Suppose the entire population of China was assembled, each person modeling a neuron and using phones and walkie-talkies to simulate axons and dendrites. The "mental state" of the entire project can be seen by satellite. The China Brain is connected to a body that provides sensory input and receives volitional output via radio. It seems implausible that the China Brain is somehow conscious.
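
The Chinese nation scenario is, in effect, a neural network implemented by people passing messages. The toy sketch below only shows the kind of node-and-connection structure being described; the wiring and thresholds are invented for illustration.

```python
# Toy version of the "China Brain": each "citizen" acts as a model neuron,
# firing when enough incoming signals arrive. The wiring and thresholds are
# invented; the point is that no individual node understands anything.

class Citizen:
    def __init__(self, threshold=1, senses=False):
        self.threshold = threshold
        self.senses = senses        # leaf nodes driven by "sensory input"
        self.inputs = []            # other Citizens whose calls we receive

    def fires(self):
        if self.senses:
            return True
        return sum(c.fires() for c in self.inputs) >= self.threshold

eye = Citizen(senses=True)
relay = Citizen(); relay.inputs = [eye]
motor = Citizen(); motor.inputs = [relay]
print(motor.fires())  # True: the signal propagates, yet no node "understands"
```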

:: Conclusion ::

Pro has not made it clear how semantic understanding can arise from syntactic understanding. The issue is not one of computing power, but of a qualitative difference in processing methods. His answer to my attack on functionalism is essentially a concession -- even worse, he points out additional problems within his own theory.

The resolution is negated.
Debate Round No. 2
Demauscian

Pro

My use of the other minds problem simply shows that arguments like the Chinese Room cannot prove or disprove something's consciousness. For all I know, Con could be a philosophical zombie who was pre-programmed to respond. I cannot know for sure whether he is or isn't conscious; all I can know is that in my interactions with him I feel as if I am talking with an intelligent being. So I must assume he is conscious.

If we were to look at this as a solipsist, then we could say neither that something is conscious nor that it is not; this debate would be moot, and we ought to call it a tie and go home.
To avoid this, such solipsist arguments ought to be thrown out.

I don't see how the "China Brain" is a problem. It can be described as a collective consciousness, and as long as the Abilene paradox exists, I must assume an independent will exists.

I don't know where Minsky and Simon got their notions, but Moore's law suggests what I claim. Allow me to go through the following thought process.

1. Computers are getting exponentially more powerful.

2. This growing power allows computers to accomplish tasks that were considered infeasible several years ago. It is reasonable to suppose, therefore, that many things we think of as infeasible will eventually be done by computers.

3. Now pick a set of abilities such that, if a system had them, we would deal with it as we would a person. The ability to carry on a conversation must be in the set, but we can imagine lots of other abilities as well: skill in chess, agility in motion, visual perspicacity, and so forth. If we had a talking robot that could play poker well, we would treat it the same way we treated any real human seated at the same table.

4. We would feel an overwhelming impulse to attribute consciousness to such a robot. If it acted sad at losing money, or made whimpering sounds when it was damaged, we would respond as we would to a human that was sad or in pain.

5. This kind of overwhelming impulse is our only evidence that a creature is conscious. In particular, it's the only real way we can tell that people are conscious. Therefore, our evidence that the robot was conscious would be as good as one could have. Therefore, the robot would be conscious, or be conscious for all intents and purposes.

Is it not selective of me to look at such a machine and say that it only imitated consciousness, when I look at a person who exhibits the same attributes and say that they are conscious? If I believe it exists, it very well may.
Here I will point out that many people once believed that no other animal had consciousness; even today a small number still hold this belief, yet animal consciousness is inferable.

We cannot assume that because something is not currently conscious, later versions of it will not be. As I pointed out, early life was not conscious but merely instinctual, and yet it gave rise to consciousness.
J.Kenyon

Con

== Pro Case ==

1. Moore's law demonstrates nothing about AI. Computers are already exponentially more powerful than they were in the '60s and '70s, yet they have hardly gotten any "smarter." Whatever advances have taken place are owing to better programming. Consider the Chinese Room argument: would it make any difference if there were several men in the room speeding the process along? Processing power is not the issue.

2. Greater computing power only allows computers to crunch numbers with greater efficiency. This is useful for things like finding prime numbers or processing high-resolution MRIs. As I already pointed out, increases in computing speed have not been matched by corresponding advances in AI. The problems Hubert Dreyfus raised about "things computers can't do" still apply.

3 & 4. Treating computers that perform certain human tasks as if they were human is not the same thing as believing the computers are conscious. The question of whether or not hypothetical smart machines would have moral value is a separate issue.

== Con Case ==

1. Again, the difference between minds and computers is not quantitative, but qualitative. In a more recent paper, John Searle has laid out his argument in formal terms:

(P1) Programs are formal (syntactic).
(P2) Minds have mental contents (semantics).
(P3) Syntax by itself is neither constitutive of nor sufficient for semantics.
(C1) Therefore, programs are neither constitutive of nor sufficient for minds.
(P4) Brains cause minds.
(C2) Therefore, any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
(C3) Therefore, any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

2. The "other minds" reply still isn't substantive. Searle argues that this stems from a misunderstanding of the point. The issue is not how we know if something is conscious, but what it is that we are attributing when we attribute cognitive states to something. It can't be merely computational processing and output data, because the processing and the output data can, in theory, exist without any cognitive states. Further, if we take a completely functionalist approach to mind, we would have to accept that it's possible something could be "conscious" without having qualia, which is an unacceptable conclusion.

:: Conclusion ::

Yes, early life was instinctual; however, conscious beings operate in ways radically different from those of nematodes. This process is so incredible that we still don't understand exactly how it works. As of now, our progress toward achieving AI is roughly equivalent to the progress the first man to climb a tree made toward landing on the moon.

The resolution is negated.
Debate Round No. 3
Demauscian

Pro

The way my opponent handled the above-mentioned thought experiment shows that he essentially did not understand it.
He puts most of his attack on the first and second steps, which are preliminary points that set up the scenario, not the idea itself. I completely agree that Moore's law has nothing to do with AI, and I did not say it did; I mentioned it as a necessary assumption for the thought experiment.
He skims over steps 3 and 4, vaguely discussing them and treating them as one step.
But step 5 was completely skipped, and step 5 carried the entire weight of the exercise.
-
It seems that I have made little mention of the Syntax/Semantics argument, a shortcoming I intend to rectify:

It does not follow that the symbols manipulated by the computer program cannot have semantic properties -- all that is shown is that they cannot have them in virtue of the rules of the program. In other words, the semantics of any situation can pass through a syntactic function intact.
The mind, on the whole, is semantic, but it is important to note that parts of the mind can be said to be purely syntactic. The semantic properties we are interested in can be preserved through the appropriate syntactic manipulation.
The computer I'm using works syntactically: it manipulates 1s and 0s to make lights flash in different colors. I can write an emotional passage and send it via email. Since the reader cannot see my body movements or hear the fluctuations in my tone, one might think they could not know the emotion involved. And yet, emotion can be fully conveyed to them through purely syntactical means.

Furthermore, syntax and semantics can serve the same function with little discernible difference.
Take a math problem, say 3 = 4x - 17. This can be solved in one of two ways: syntactically, there is a set of rules by which the equation can be simplified; semantically, we can think of the two sides as being in equilibrium, where every operation must preserve that balance. Often one cannot know by which method one worked.
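
As a minimal illustration of the point, both routes below reach the same answer for 3 = 4x - 17, and nothing in the output reveals which was used. The "semantic" search is just one assumed way of cashing out the balance metaphor described above; the function names are illustrative, not part of the original argument.

```python
# Both routes solve 3 = 4x - 17 and arrive at x = 5; from the answer alone
# you cannot tell which was used.

def solve_syntactically():
    # Apply rewrite rules step by step: 3 = 4x - 17  ->  20 = 4x  ->  x = 5.
    lhs, coeff, const = 3, 4, -17   # equation: lhs = coeff*x + const
    lhs -= const                    # add 17 to both sides: 20 = 4x
    return lhs / coeff              # divide both sides by 4: x = 5.0

def solve_semantically():
    # Treat the equation as a balance and look for the x that keeps it level.
    return next(x for x in range(-100, 101) if 4 * x - 17 == 3)

print(solve_syntactically(), solve_semantically())  # 5.0 5
```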

Does this not shatter the matter-of-fact status of understanding, of the presence of Searle's semantic level?
-
All other points I feel I've covered at least to my satisfaction.
-
::Conclusions::
When ascribing mental states (like understanding Chinese) to other people, one must do so based on their behavior. By analogy, one must be willing to ascribe mental states to computers on the basis of their behavior.
When that behavior causes a human to feel empathy (defined as the capacity to share the sadness or happiness of another sentient being through consciousness rather than physically), it is reasonable to assume that sentience exists on both the giving and the receiving end. But a lack of empathy does not necessarily mean sentience is lacking (see autism).

Just as the nematode is radically different from conscious beings, today's computers are radically different from possible conscious computers. And just as nematodes lead to conscious beings, computers may as well.

Vote for Pro.
J.Kenyon

Con

== Pro Case ==

I placed my emphasis on the first two steps of Pro's argument because they effectively form the foundation of his case. Without the ability to create a computer that simulates intelligence, the question of whether or not the machine is actually sentient is completely irrelevant. I'd like to point out that Pro has largely ignored these points, which I laid out in the first round. He's been forced to retreat to the merely possible: we *might* be able to make a smart machine sometime in the distant future. I think I've made a strong case that this is, at the very least, unlikely. Although I didn't address step 5 together with the other points, I did cover it in the "Con Case" of my rebuttal.

Pro's math analogy, if I understand it correctly, creates more problems than it solves. He implies that functional output can be arrived at via different methods. This is the very reason why behavioral theories of mind have been largely rejected. An incorrect formula may yield the correct answer by chance. Behavioral states tell us nothing about intentional states. For example, suppose Joe loves Mary. He can be heard dreamily repeating her name and staring at her whenever she is present. He even sends her flowers. However, his behavior is equally compatible with the possibility that Joe *hates* Mary. He repeats her name over and over again, thinking about how much better the world would be without her. He stares at her trying to make her feel uncomfortable. He sends her flowers because she is allergic to them and he wants her to die. Philosophers and psychologists have been unable to formulate complete behavioral explanations for mental concepts. Indeed, there is good reason to think that such a task is not even logically possible. Thus, Pro's behavioral account of consciousness should be disregarded.

== Con Case ==

Pro claims that semantic meaning can be "preserved" through syntactic processing. However, this does not resolve the issue. An emotional email only has semantic content to the reader. The computer is just as blind to the semantics it conveys as a telephone line, a telegraph cable, or the pen, paper, and envelope I use to write and mail a letter.

The issue Searle raises in his formal syntax/semantics argument is twofold: (1) semantics cannot arise from syntax, and (2) we know mental contents arise from brains, yet we still don't understand *how* this occurs. Thus, it is absurd to discuss machine consciousness when we still don't understand our own.

:: Conclusion ::

It's important to note that Pro has made a very weak case for the overall plausibility of his position. He has focused largely on defending against my attacks on "strong" AI while neglecting to support even a "weak" AI thesis. Without the possibility of even "weak" AI, the "strong" AI issue is irrelevant. Pro also ignored the point I made about qualia in R3.

The resolution is negated.

Vote Con!
Debate Round No. 4
18 comments have been posted on this debate. Showing 1 through 10 records.
Posted by m93samman 6 years ago
m93samman
just balancing the votebomb
Posted by J.Kenyon 6 years ago
J.Kenyon
Thanks for the RFD, fatdan. Don't you have anything better to do besides stalk my debates and vote against me?
Posted by Demauscian 6 years ago
Demauscian
Thank you for your input.
Posted by Cerebral_Narcissist 6 years ago
Cerebral_Narcissist
Cleverbot is the worst conversation simulator I have ever encountered.
Posted by popculturepooka 6 years ago
popculturepooka
RFD:

Conduct and S & G: Tied.

Sources: J. Kenyon. As bluesteel said, JK correctly attributed all of the philosophical thought experiments and such to their proponents.

Arguments: J. Kenyon. If I could sum up JK's entire argument in one sentence, it would be "qualia are a bear to deal with." I think this is what JK correctly hit on, along with the problems of accounting for qualia in the hypothesized "conscious" AI. JK's counterpoints on the technological side were superb as well.
Posted by J.Kenyon 6 years ago
J.Kenyon
Uhh, Jcat, how did I lose conduct and sources?
Posted by Demauscian 6 years ago
Demauscian
I take that back, I thought I did.
Posted by Demauscian 6 years ago
Demauscian
I did define Consciousness
Posted by bluesteel 6 years ago
bluesteel
RFD:

J. Kenyon convinced me that Cleverbot is not a sign that AI may be possible since computing power may never be great enough. Pro spends a lot of time on the philosophical arguments, but not enough on rebutting the arguments about computing power. Lol, I liked the tree analogy.

No one defines consciousness either, so it's hard to vote. J. Kenyon frames it as something intangible that is inherent to humans, but not to anything else. Without a counter-definition, Demauscian would have a really hard time winning. Does he need to prove that computers can emote? Can learn? I'm not sure what the BOP is. Kenyon essentially makes the BOP that computers must be human.

The arguments about the philosophical zombie seem silly. If I can't tell whether something is conscious, I would vote Con on presumption (since the whole debate is moot).

Sources - J.Kenyon; he correctly attributes each philosophical thought experiment to its originator and even sources the philosophical zombie for his opponent (Chalmers).
Posted by KatelynnChambers 6 years ago
KatelynnChambers
When machines are capable of mimicking human intellect so closely and so accurately,
one day, we humans will be asking ourselves a significant question,
"What is it that makes us 'human'?"

If machines can one day be designed to think and to reason, and imitate emotions on a chillingly accurate level, what will make human beings significant? What will set man apart from machine?

Think about it,
if it were possible to construct a highly sophisticated robot with a heated, flesh-like film over its internal mechanisms, and a perfectly realistic face, so that it could not be told apart from 'real' humans, what would separate this machine from humans? If it could move, think, speak, and react to stimuli EXACTLY as organic humans can, what would make humans unique?

A very interesting topic; it brings to mind the Japanese animated film Ghost in the Shell, which explored a similar question. :)

Splendid work, both of you!
7 votes have been placed for this debate. Showing 1 through 7 records.
(Point values per category: conduct 1, spelling and grammar 1, convincing arguments 3, reliable sources 2; agreement before/after the debate awards no points.)

Vote Placed by m93samman 6 years ago
Agreed before the debate: Tied. Agreed after: Tied. Conduct, spelling and grammar, convincing arguments, and reliable sources: J.Kenyon. Total points awarded: Demauscian 0, J.Kenyon 7.

Vote Placed by LaissezFaire 6 years ago
Agreed before the debate: J.Kenyon. Agreed after: J.Kenyon. Conduct and spelling and grammar: Tied. Convincing arguments and reliable sources: J.Kenyon. Total points awarded: Demauscian 0, J.Kenyon 5.

Vote Placed by Jcat 6 years ago
Agreed before the debate: Tied. Agreed after: Tied. Conduct, spelling and grammar, convincing arguments, and reliable sources: Demauscian. Total points awarded: Demauscian 7, J.Kenyon 0.

Vote Placed by fatdan33 6 years ago
Agreed before the debate: J.Kenyon. Agreed after: Tied. Conduct and spelling and grammar: Demauscian. Convincing arguments and reliable sources: Tied. Total points awarded: Demauscian 2, J.Kenyon 0.

Vote Placed by popculturepooka 6 years ago
Agreed before the debate: J.Kenyon. Agreed after: J.Kenyon. Conduct and spelling and grammar: Tied. Convincing arguments and reliable sources: J.Kenyon. Total points awarded: Demauscian 0, J.Kenyon 5.

Vote Placed by Demauscian 6 years ago
Agreed before the debate: J.Kenyon. Agreed after: J.Kenyon. Conduct, spelling and grammar, convincing arguments, and reliable sources: Tied. Total points awarded: Demauscian 0, J.Kenyon 0.

Vote Placed by bluesteel 6 years ago
Agreed before the debate: Tied. Agreed after: J.Kenyon. Conduct and spelling and grammar: Tied. Convincing arguments and reliable sources: J.Kenyon. Total points awarded: Demauscian 0, J.Kenyon 5.