The Instigator
Pro (for)
3 Points
The Contender
Con (against)
0 Points

We Cannot Create Artificial Intelligence via Computation Alone

Voting Style: Open
Point System: 7 Point
Started: 10/21/2014
Category: Philosophy
Updated: 1 year ago
Status: Post Voting Period
Viewed: 1,331 times
Debate No: 63610
Debate Rounds (4)
Comments (33)
Votes (1)




I am challenging Philosophybro to this debate. I hope it will go well.

Artificial Intelligence (as defined by Searle): The claim that an "appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Computation: "process following a well-defined model understood and expressed as, for example, an algorithm, or a protocol." or to use Turing's definition of what a Turing machine does " Print "0;" erase" I;" print" I," erase "0;" move one square left; move one square right."


R1: Debate info and acceptance
R2: My arguments followed by Con's rebuttal. Con can present arguments if they wish, but it's not required.
R3: Rebuttals
R4: Rebuttals

Rules and Other Debate Information:
No forfeits.
No insults.
No semantics.
Follow the format.
BOP is on me.
72 Hours to Post Argument.
10,000 Characters Max.
2 week voting period.
7 point voting system.
Open Voting


Thanks! I had my first debate already but I hope this will be my first good debate.
Debate Round No. 1


Thanks philosophybro for accepting.

The argument I am going to present centers on the notion that computation, by its very nature, is not sufficient to produce a mind. It does so by positing a thought experiment in which computation is performed but no actual understanding occurs.

The philosopher John Searle was once on a plane traveling to give a lecture on AI. At the time he had no experience in AI and was reading a book on the subject on the way there. The book confused him. He was puzzled by the idea that one could create strong AI via computation alone, because computation does not pass what Searle called "The Me" test. The Me test involves introspecting to see whether something is sufficient for the creation of a mind, or whether the thing proposed can exist without a mind [1]. We can apply this test to AI via computation.

The Chinese Room

We must first notice that computation is a shuffling around of syntax (moving symbols about). It's just printing 0's and 1's, following algorithms, etc. If I can show syntax isn't sufficient for semantics, then I have affirmed the resolution.

Let's say we put a man in a room. The room contains a book that tells him what to write whenever he encounters Chinese symbols. An example might be: if "什麼是兔子", then reply "動物". The man receives a piece of paper that says "什麼是兔子", properly follows the algorithm, and replies "動物". The man has followed the algorithm and is engaged only in syntax, yet there is no understanding of Chinese. The man is functionally equivalent to someone who speaks Chinese and can pass the Turing test, yet there are no semantics. A computer is analogous to the man. No matter how much or how well we program a computer, it is still operating on syntax, which is insufficient for semantics.
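To see how little the man is doing, here is a sketch of the rule book as code. It is my own toy illustration: the single rule and the fallback reply are invented, and Searle's point is that the real book would just be a much larger table of the same kind.

```python
# The rule book is pure symbol shuffling: match an input string, emit an
# output string. Nothing in here "knows" that the symbols are about rabbits.
rule_book = {
    "什麼是兔子": "動物",  # "What is a rabbit?" -> "An animal"
}

def chinese_room(message: str) -> str:
    # Follow the algorithm: if the incoming symbols match a rule, print the
    # listed reply; otherwise print a stock "please say that again" string.
    return rule_book.get(message, "請再說一次")

print(chinese_room("什麼是兔子"))  # -> 動物
```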

Searle lays out the argument in a formal way [2].

(A1) Programs are formal (syntactic).

(A2) Minds have mental contents (semantics).

(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

(C1) Programs are neither constitutive of nor sufficient for minds.

A1 is true by definition.

A2 is self-evidently true.

A3 is known by our thought experiment and the conclusion follows.

Not much else to say, as my opponent said “Simple yet difficult to debate.”

I await your reply, the resolution is affirmed.

[1] The Teaching Company, Philosophy of Mind by John Searle Lecture 4 The Chinese Room Argument and Its Critics

[2] Searle, John (1984), Minds, Brains and Science: The 1984 Reith Lectures, Harvard University Press



n7 said I didn't have to post arguments for the creation of AI, but doing so will make for a better debate :).

1: If it walks like a duck

n7's argument is about a man who acts like he understands Chinese, analogous to a computer that acts like a man. Consider the principle at work here. If a computer acts intelligent and speaks intelligently, it is intelligent. If something walks like a duck, acts like a duck, and quacks like a duck, we all assume it is a duck. Think about how we know others are intelligent or have minds: because they act like they have them. I ask my opponent: if we cannot say someone has a mind because they act like it, how do you know that I, or anyone else, have a mind? You are caught in a dilemma.

1. If we know others have a mind by how they act, then we have to say AI via computation alone is possible. Or
2. If we cannot know others have a mind by how they act, then you cannot escape solipsism.

2: The Moral Principle

Maybe one day we will make a robot that acts like a human. We would need to treat it like it has a mind. During the times of slavery, I have read, people considered Africans to be lesser because they supposedly weren't persons, and this led to inhumane treatment. Creating a robot and making our culture think it is a stupid, unintelligent program would be bad if it were in fact intelligent. It's best to think of such robots as real, intelligent persons in case they are.

3: The Chinese Room

First, I love your presentation :) No rambling, not going on too long, straight to the point. I don't recall who first made this response, but I believe it's a simple one that points out a flaw in the CR.

4: The System

The man doesn't understand the language, but so what? The man is part of a system. The man is like a CPU (at least I think that's what it's called; I'm not an expert in computer science). The CPU not understanding a foreign language doesn't mean the system as a whole can't understand the language. The system is the whole: the book, the room, and so on. I am not saying the room understands Chinese; I am saying it doesn't follow that the system is unintelligent.

5: Connectionism

This was brought up a little in the comments. Connectionism is a different approach to AI: connectionists try to simulate neurons by making the computer compute the way neurons do. This gets around the Chinese Room, but it is still computation.
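Here is a rough sketch of what one simulated neuron does. It is just my own toy illustration of the idea, with made-up weights, not any particular connectionist system.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed through a sigmoid "activation",
    # loosely analogous to how strongly a neuron fires.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs with hand-picked weights; a real connectionist network would
# have many such units and would learn the weights from data.
print(neuron([0.5, 0.8], [1.2, -0.7], 0.1))  # prints a value between 0 and 1
```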

Connectionist computers can already recognize faces and objects and convert handwritten text into digital text.

I believe this is the best way for making an AI.

6: Moore's law

Roughly every two years, computing power doubles. Physicists are also working on computing that uses quantum mechanics. The CR looks short-sighted to me.

Computing power keeps doubling and quantum computers are being worked on. You say we cannot create AI via these methods; that looks to me like a very big claim. We don't know what's around the corner in terms of technology. We have to accept the possibility of AI. The CR cannot predict the future of computing, which keeps growing.
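As a back-of-the-envelope illustration of the doubling claim (my own arithmetic, nothing more):

```python
# If computing power doubles every 2 years, then over 20 years it grows
# by a factor of 2 ** (20 / 2) = 2 ** 10.
power = 1
for _ in range(10):  # ten doublings in 20 years
    power *= 2
print(power)  # -> 1024, roughly a thousandfold increase
```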

Thank you n7 for a good debate :)
Debate Round No. 2



Con’s Arguments

Argument for Functionalism

Con presents an argument based on the intuitive notion of functionalism: if something acts like it has a mind, it must have a mind. First, this is only a prima facie argument. The Chinese Room argument attempts to refute this very principle, so if it succeeds, this argument fails.

However, there are other ways of refuting this argument. The inverted color spectrum thought experiment does so. Imagine two people looking at the same color, but one of them sees color inverted. The two people would be functionally the same: they call the color by the same name, they receive the same inputs and produce the same outputs, but they are not having the same mental states. They both quack like ducks, but one of them isn't a duck. Not only is this a priori possible, it is a posteriori possible. Color blindness works because we have three kinds of cone cells, one for each wavelength range of color: long-, medium-, and short-wavelength processing ganglion cells. When one is color blind to long-wavelength color, the long-wavelength ganglion cell is replaced by another medium-wavelength ganglion cell, and vice versa [1]. This means it is possible for someone to have a medium-wavelength cell in their long-wavelength area and a long-wavelength cell in their medium-wavelength area, making those colors inverted. The freaky thing is that you or I could have this condition and never know it; our entire experience of color could be inverted. We would be functionally and behaviorally the same, yet be different.

Con commits a bifurcation fallacy, as there is a third option to the dilemma: if one accepts that brains cause minds, then anyone who has a brain, or anything with the same causal powers as a brain, would have a mind. This is known regardless of behavior or function.

The argument fails

The Morality of AI

This argument is irrelevant and commits the fallacy of appealing to consequences. We are debating the descriptive question of whether we can create AI via computation alone, not the prescriptive question of how we should treat AIs. I can easily accept the idea that a robot should be treated kindly yet believe it's not conscious, without contradiction.

Defense of the Chinese Room Argument

The Systems Reply

First, if the room doesn't understand the language, then what does? The room is functionally equivalent to a man (or a room) who understands Chinese. According to your first argument, if it walks like a duck it is a duck, so if the man doesn't understand Chinese you have to concede that something does. Early defenders of the systems reply did concede that the room understood Chinese:

“When I first heard this [the systems reply] I was in a debate and I said to the guy that presented it “You mean the room understands Chinese” and the guy said “Yes the room understands Chinese”. Well I admire the courage” [2] - John Searle

One of the first defenders of the systems reply, Ray Kurzweil, also conceded this:

"...if the system displays the apparent capacity to understand Chinese, 'it would have to, indeed, understand Chinese'" [3]

This shows the absurdity of the rebuttal, but it doesn't point out what's wrong with it. The real problem with the systems reply is that the physical room and book seem irrelevant. Have the man memorize the book and walk out of the room. The man is now the system, yet there is still no understanding of Chinese.

Connectionist reply

Before I go any further, I will quickly explain the Church-Turing thesis. In simple terms, a Turing machine can be realized in a variety of ways (water pipes, notches, cats and mice are some examples) and one Turing machine can do the job of another [4].

By Church's thesis, a connectionist machine can do the same job as a normal von Neumann machine, but the connectionist machine is faster because it processes its information in parallel. Searle has talked about running connectionist programs on the normal von Neumann computers at Berkeley because connectionist machines were too expensive [5].

My point here is that connectionism is only faster than using a von Neumann machine; it doesn't add any computing power beyond that. Connectionism is therefore not exempt from the CR.

Then the argument boils down to "What if we simulate the neurons on the computer?" Simulating neurons on a computer will only get us a model of the brain, not a mind itself. For example, if one wanted to make a machine that digests pizza, nobody would suggest we simulate digestion on a computer, because that would just be a model. Simulating a brain on alcohol doesn't make the computer drunk. To better understand this, recall the Church-Turing thesis: we can create a Turing machine out of anything. If we were to use water pipes, we could say that a running pipe is a 1 and a stopped pipe is a 0. If we had tons of water pipes positioned so as to carry out the same computational algorithm that simulates being drunk, the water pipes wouldn't be drunk. The only thing we would get is a model, not actual mental content.
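To restate the model/mind point in code (a toy of my own, not anything from Searle): a "simulation" of drinking is just arithmetic on a number that represents blood alcohol.

```python
# The program manipulates a representation of intoxication; at no point
# is the machine running it intoxicated.
def simulate_drinking(drinks: int) -> float:
    blood_alcohol = 0.02 * drinks  # a crude, made-up model; numbers only
    return blood_alcohol

print(simulate_drinking(5))  # ~0.1, a number standing for drunkenness, not drunkenness
```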

Wait Till Next Year reply

Con talks about the progress of computing and claims it is too strong a claim to state that AI is impossible. This rebuttal strawmans the argument. The first axiom of the formal argument is:

“(A1) Programs are formal (syntactic).”

This isn’t a claim about any current state of technology, it’s a claim about its definition. The Chinese room attempts to show computation cannot create a mind by the very definition of computation. The next step in faster computing or quantum computing is irrelevant because it’s still computation which is still by definition syntactic.

The Chinese Room argument remains standing. My opponent's argument against the resolution and his objections to my argument fail.

Back to Con



[2] The Teaching Company, Philosophy of Mind by John Searle Lecture 3: Strong Artificial Intelligence

[3] Richards, J. W. (ed.), 2002, Are We Spiritual Machines: Ray Kurzweil vs. the Critics of Strong AI.


[5] The Teaching Company, Philosophy of Mind by John Searle Lecture 5: Can A Machine Think?



Very nice replies. I hope my responses will be their equal.

1: Other minds

n7 says he can refute this argument by showing it is possible for two people to see different colours. "They both quack like ducks, but one of them isn't a duck." This doesn't work; they both still have minds. We can't know what they see, but we know that they do see. Sure, we won't know how the AI thinks or how it sees. Maybe it sees in other colours, like in the infrared spectrum, but we can know that it does see.

n7 says " If one accepts that brains cause minds, then if someone has a brain or something has the same causal power as a brain, they would have a mind."

I can respond to this the same way n7 responded to me. What if there were a brain that didn't have a mind? A philosophical zombie would be like you in every way but wouldn't have a mind. Searle's biological naturalism does not escape solipsism either.

2: Morality

On second look, my opponent is right about this argument. Disregard it.

3: The System response

A man outside the room can still be part of a bigger system: the environment outside. The AI researcher Jack Copeland says something within the man understands Chinese. When someone throws a ball at you, something in your mind works out where to go and calculates where the ball is going to land.

4: Connectionist response

Maybe connectionism doesn't add any new computing power, but it uses that power in a different way. Instead of moving symbols around, we simulate the neurons in the chips; not on a computer, but in the computer chips themselves. I'm not sure how to make this clearer. Computation would help it think, but its mind would be in the computation of the chips simulating the neurons.

5: Future of computing

It is still by definition computation, but I am saying something similar to the connectionist argument: new technology can come along to simulate the neurons.

Debate Round No. 3



Con’s Arguments

Argument for Functionalism

Con misunderstands my argument here. The fact that they both have minds is irrelevant, and pointing it out misses what Con's own argument implies. My argument was in modus tollens form.

1. If Con's argument is correct, functionalism is true.

2. Functionalism is false.

C. Con's argument is false.

If we know others have a mind based on how they function, then functionalism is true [1]. Since others can function the same yet not have the same experience, functionalism is false. Con is trying to ignore the implications of his argument. The two people may have minds, but we cannot say they have minds by their functions, which is the very soul of my opponent's argument.

Con then attacks Searle's biological naturalism by appealing to P-zombies. My goal in this debate is not to defend biological naturalism, but to defend my initial argument and refute Con's arguments. I can therefore disregard biological naturalism and agree that Chalmers's zombie argument is sound. This creates a bigger problem for Con: the P-zombie argument refutes his own argument. If it is possible for there to be a zombie n7 that functions exactly like me but is not conscious, then we cannot say people have a mind based on their function, and Con's whole argument has been refuted by a point he himself brought up.

In order for Con to defend his argument, he must argue against Chalmers's argument, which then reopens the door for biological naturalism.

The argument is definitely refuted.

The Morality of AI

Con concedes here.

Defense of the Chinese Room Argument

The Systems Reply

Con ignores my argument that he must concede an inanimate room understands Chinese. He then claims the man is part of the environment's system. So does the Earth itself understand Chinese? What part is the environment playing in the system? The man is not interacting with it at all. What if we send the man to space? Would spacetime itself understand Chinese? Would the universe? How far are strong AI advocates willing to go?

If you're really going this far, let's remove the physical altogether. We can imagine the man becoming disembodied. He is no longer in any system. He knows how to respond to Chinese, yet still doesn't understand it.

Con then appeals to Copeland's idea of a hidden mind. Prima facie, the idea of a hidden second mind is absurd and ad hoc. Even if I assume there is a hidden second mind, we should note that there must first be a primary mind. A hidden second mind isn't conscious and isn't like the kind of mind humans have. You would first need some defense of computers having a primary mind, which is the very thing you're trying to establish.

The systems reply fails. Its defense has become absurd: we would have to accept conclusions like conscious inanimate objects and hidden secret minds within minds. Even setting the absurdity aside, the replies are still false. A second mind isn't sufficient for a primary mind, and we can conceive of the man being disembodied.

Connectionist reply

It seems my opponent has stopped arguing for AI via computation alone and started arguing for using the material of computer chips to create an AI. If we were to simulate the neurons, the result would either be a model of neurons or it wouldn't be computation anymore. If Con is still advocating connectionist computation, he has yet to respond to my reason why a computational simulation must be a model and not a mind.

Wait Till Next Year reply

Con has seemingly dropped this argument and reduced the rebuttal to his previous one, which still has the same problems, and those have gone unaddressed.


I find that both of my opponent's arguments fail. The first fails because functionalism fails; Con himself defended the zombie argument, which contradicts his position. Con agrees his other argument fails. Con's objections fall flat too. If we accept the systems reply, we have to accept many absurdities. Con never responded to my argument that a computational simulation would be a mere model; he instead argues we have to emulate the causal power of the brain. That is true, but then it's not computation. His last argument falls back on his connectionist argument.

My arguments remain standing.





1: Other Minds

Like n7, I am not bound to any specific philosophy of mind. There is no need to hold religiously to functionalism. I can swap it out for a view on which we know others have a mind because of the functions they perform. I defend a broad functionalism, not a functionalism that says someone has to have a specific mental event because of their inputs and outputs.

The zombie argument is meant to attack the materialism that says the mind is the brain. I think functionalism is true, but I also think dualism is true, because functionalism doesn't force you to pick any specific philosophy. I don't claim that the mind is identical to a function, so I can accept the zombie argument with no contradiction.

2: A system

No. I am not saying any inanimate thing understands anything. I said before, "I am not saying the room understands Chinese; I am saying it doesn't follow that the system is unintelligent."
Copeland's argument isn't ad hoc or problematic. We have a subconscious mind; n7 would have to say Freud's theory is ad hoc and problematic too. Copeland is talking about a small extension of Freud's idea.

3: Connectionism & Moore's law

I combined these because they are alike

n7 hasn't posted an argument for why connectionism can't both do computation and emulate a neuron. The two don't seem mutually exclusive. I think my rebuttal still applies because of that.

I applaud n7. He is good and I have learned a lot from this debate. I wish him luck in the voting :)
Debate Round No. 4
33 comments have been posted on this debate. The first 10 are shown below.
Posted by n7 1 year ago
I can send a challenge tonight.
Posted by Philosophybro 1 year ago
I'd love to see it; you seem to know a lot more about connectionism than me.
Posted by UndeniableReality 1 year ago
I would be willing to debate this with n7.
Posted by Philosophybro 1 year ago
I understand your vote. N7 is good. I want to see him debate this with someone else to see how well another would do.
Posted by Philosophybro 1 year ago
Sorry I couldn't do better. This was difficult.
Posted by UndeniableReality 1 year ago
Very nice debate topic! I'm glad it was taken seriously.

N7, you did very well for someone who is not a researcher in artificial intelligence or neuroscience. Well done. I look forward to your future debates.
Posted by UndeniableReality 1 year ago
Hmm I wish I could reply to Pro =P

If I have time later, I would love to debate this topic. It's very closely related to my area of research.
Posted by Philosophybro 1 year ago
My responses will bring up those points; you won't have to wait long.
Posted by UndeniableReality 1 year ago
I'll also refrain from commenting too much while the debate is ongoing. I look forward to the rest of the debate, and I do think you are doing well.
Posted by n7 1 year ago
You don't have to apologize for disagreeing with me. Searle may be a philosopher, but that's fine, because the concept of AI extends into the philosophical realm. Searle does respond to connectionist objections, as well as to the idea that new research will give us some new breakthrough. But I will wait to see what my opponent's arguments are before I respond to them.
1 vote has been placed for this debate.
Vote Placed by Imperfiect 1 year ago
Agreed with before the debate: Con (0 points)
Agreed with after the debate: Con (0 points)
Who had better conduct (1 point): tied
Had better spelling and grammar (1 point): tied
Made more convincing arguments (3 points): Pro
Used the most reliable sources (2 points): tied
Total points awarded: Pro 3, Con 0
Reasons for voting decision: I will allow Con to PM me if he believes this is a vote-bomb, but I'm just going to keep this really short, as writing an essay would be futile. Con literally dropped everything that Pro threw at them and tried to just nullify what scraps were left in the last round... That's how I'd put it; if Con wants specifics I will explain, but it's very complicated.