The Instigator
Pro (for)
6 Points
The Contender
Con (against)
18 Points

We Cannot Create Artificial Intelligence via Computation Alone

Post Voting Period
The voting period for this debate has ended.
Voting Style: Open Point System: 7 Point
Started: 11/2/2014 Category: Philosophy
Updated: 1 year ago Status: Post Voting Period
Viewed: 3,406 times Debate No: 64393
Debate Rounds (4)
Comments (101)
Votes (5)




After some discussion in my past debate, UndeniableReality would seem to be a good opponent to debate this with.

Artificial Intelligence (as defined by Searle): The claim that an "appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Computation: "process following a well-defined model understood and expressed as, for example, an algorithm, or a protocol," or, to use Turing's description of what a Turing machine does: print "0", erase "0", print "1", erase "1", move one square left, move one square right.
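To make the primitive operations above concrete, they can be sketched as a toy machine. The rule table below is an illustrative example (not from the source), echoing Turing's own example of a machine that prints 0s and 1s on alternate squares:

```python
# A toy Turing machine built only from the primitive operations listed above:
# print "0", erase, print "1", move one square left, move one square right.
# The state names and rule table are illustrative inventions.

def run_turing_machine(rules, steps, start_state="A"):
    tape = {}                                  # sparse tape: position -> symbol
    pos, state = 0, start_state
    for _ in range(steps):
        symbol = tape.get(pos, " ")            # blank square by default
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                      # print or erase
        pos += 1 if move == "R" else -1        # move one square
    return "".join(tape[i] for i in sorted(tape))

# Rule table for a machine printing 0 and 1 on alternate squares:
rules = {
    ("A", " "): ("0", "R", "B"),
    ("B", " "): (" ", "R", "C"),
    ("C", " "): ("1", "R", "D"),
    ("D", " "): (" ", "R", "A"),
}

print(run_turing_machine(rules, 8))  # prints "0 1 0 1" plus a trailing blank square
```

Nothing in this table "means" anything; the machine only matches symbols and follows rules, which is the sense of computation at issue in this debate.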


R1: Debate info and acceptance
R2: My arguments followed by Con's rebuttal. Con can present arguments if they wish, but it's not required.
R3: Rebuttals
R4: Rebuttals

Rules and Other Debate Information:
No forfeits.
No insults.
No semantics.
Follow the format.
BOP is on me.
72 Hours to Post Argument.
10,000 Characters Max.
2 week voting period.
7 point voting system.
Open Voting


Thank you for setting this up. Your previous debate was very well done, and I look forward to debating someone as skilled as you are on a topic which I find very interesting personally.

I would like to place one additional requirement on the debate, if possible. The purpose is to avoid this turning into a trivial debate due to the wording of the topic. There is a trivial case where Pro is correct: given a system which is capable of intelligence, if the system receives no input (sensory experience or information) for the duration of its existence, it will never become intelligent, just as a human isolated in sensory deprivation from birth would never become intelligent. Given that this is not what we intend to debate, I will move on to definitions.

I will accept your definition of Computation, which is copied directly from Wikipedia [2]. However, I am not completely in favor of your definition of Artificial Intelligence. At first glance, it requires a definition of 'mind'.

Defining 'mind' is not a trivial task. There is some minimal subset of the following characteristics that must be expressed in an entity, in a way observable to us, for humans to behave as if it had a mind [1]:
Thought – the process of consideration or reasoning about something
Perception – the ability to interpret information that is received through some set of senses
Memory – the ability to store and recall previously perceived information with some level of accuracy
Will – the ability to have choice or intention
Emotion – instinctive or intuitive states of mind which may be distinguishable from reasoning or knowledge and which are derived from one's experience
Imagination – the ability to form ideas, images, or concepts which are not present to the senses
Creativity – the use of imagination to form novel ideas, images, or concepts using existing ideas, images, or concepts
All of the above definitions are paraphrased and searchable in [1].

This is not to say that all of the above characteristics are required for an entity to have a mind. I doubt if we came across an alien species who exhibited all of the above characteristics except for, say, emotion, that we would reject the notion that they had minds. However, we expect that a mind would have some sufficient subset of these characteristics for us to recognize it as a mind, but beyond that, we are not yet able to precisely define what is or is not a mind.

What has been defined above would more accurately be called "Artificial Cognition" or "Artificial Consciousness". A more widely accepted definition of Artificial Intelligence is an intelligence which has been designed and implemented on a machine or in software [3]. Intelligence can be defined in terms of an entity's capacity for the following characteristics [4]:
Logic – the use of valid reasoning (the capacity for consciously making sense of information, applying logic, establishing and verifying facts, changing or justifying actions and beliefs based on new information, etc.) [5]
Abstraction – the generalization of rules and concepts from specific examples [6]
Understanding – the ability to utilize abstracted knowledge to support intelligent behavior, e.g. prediction [7]
Self-awareness – the abilities to introspect and to recognize oneself as separate from the environment and others [8]
Communication – the conveyance of information
Learning – the ability to acquire, modify, and reinforce information, behavior, skills, etc. [9]
Memory – as above
Creativity – as above
Problem Solving (and planning) – the ability to develop and execute a method or set of actions for achieving a defined goal given a current condition [10]

While these two definitions are related, they are not exactly the same. I am not sure that either of our arguments will be substantially changed by usage of one definition or the other. It is not my intention to sabotage the debate by changing definitions, but we must be more precise with our definitions in order to have a productive debate.

I would be happy with a combination of these definitions, if you do not have a particularly strong preference. However, since the title of the debate is immutable, I would lean towards using a definition more similar to that of Artificial Intelligence, rather than that of "Artificial Cognition" or "Artificial Consciousness".

I hope my points and concerns regarding definitions are well-taken and not misunderstood. I hope for a good debate, and I would like to be carefully aware of exactly what we will be debating.

As you have accepted the BoP, I intend to show that you have not met your BoP, and I may also present arguments for the potential to achieve artificial intelligence using purely computational means (including the input of data).

All the best.

[3] Poole, D., Mackworth, A., & Goebel, R. (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press.
[5] Jacquette, D. (2002). A Companion to Philosophical Logic. Wiley Online Library. p. 2.
[6] Langer, S. K. (1953). Philosophy in a New Key. p. 90.
[7] Chaitin, G. (2006). The Limits of Reason.
[10] Robertson, S. I. (2001). Problem Solving. Psychology Press. p. 2.
Debate Round No. 1


Thanks. I don’t find any problem with the definitions of mind; however, any definition must take into account that such a mind is exactly the same as a human being’s mind in the sense that it possesses phenomenal consciousness [1], or in other words possesses intentionality [2]. As long as these are included, Con’s definitions are accepted.

The argument I am going to present is centered on the notion that computation, by its very nature, is not sufficient to produce a mind. I will posit a thought experiment in which computation is done, but actual understanding does not occur.

History of the Chinese Room Argument

The philosopher John Searle, while on a plane traveling to give a lecture on AI, was reading a book about researchers trying to create a mind by making a computer answer questions about a story as if it understood the story. He was puzzled by the idea that one could create strong AI via computation alone, because computation doesn’t pass what Searle called “The Me” test. The Me test involves introspecting to see if something is sufficient for the creation of a mind, or if the thing proposed can exist without a mind [3]. We can apply this test to AI via computation; when we do, we find it fails. Searle was certain he had missed something, yet the responses at the conference were varied, he found no answer, and he remained unconvinced of the proposition.

The Chinese Room Argument

We must first notice that computation is a shuffling around of syntax (moving symbols about). It’s just printing 0’s and 1’s, following algorithms, etc. If I can show syntax isn’t sufficient for semantics (mental content), then I have affirmed the resolution.

Let’s say we put a man in a room. The room contains a book that tells him what to write if he comes in contact with Chinese symbols (I know the Chinese symbols will probably not display completely, but bear with me). An example might be: if “什麼是兔子” then reply “動物”. The man receives a piece of paper that says “什麼是兔子”, properly follows the algorithm, and replies “動物”. The man has followed this algorithm and is only engaged in syntax, and yet there is no understanding of Chinese. The man is functionally equivalent to someone who speaks Chinese and can pass the Turing test, yet there is no semantics. A computer is analogous to the man. No matter how much or how well we program a computer, it is still operating on syntax, which is insufficient for semantics.
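To make the purely syntactic character of the rule book vivid, it can be sketched as a simple lookup table. The entries below are illustrative stand-ins for the book's rules (the default reply is an invented addition):

```python
# The rule book as pure symbol manipulation: input shapes map to output
# shapes, with no representation of meaning anywhere in the program.
rule_book = {
    "什麼是兔子": "動物",    # "What is a rabbit?" -> "An animal" to a speaker;
                             # to the room, just uninterpreted squiggles
    "你好嗎": "我很好",      # another illustrative entry
}

def chinese_room(symbols: str) -> str:
    # Follow the algorithm: match the shapes, copy out the listed reply.
    # "對不起" is an illustrative default reply, equally uninterpreted.
    return rule_book.get(symbols, "對不起")

print(chinese_room("什麼是兔子"))  # prints 動物
```

The program produces the right replies while containing nothing that could count as understanding, which is exactly the intuition the thought experiment trades on.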

Searle lays out the argument in a formal way [4].

(A1) Programs are formal (syntactic).

(A2) Minds have mental contents (semantics).

(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

(C1) Programs are neither constitutive of nor sufficient for minds.

A1 is true by definition.

A2 is self-evidently true.

A3 is known by our thought experiment and the conclusion follows.

Not much else to say; the argument is very simple, yet hard to argue against.

I await your reply, the resolution is affirmed.



[3] The Teaching Company, Philosophy of Mind by John Searle Lecture 4 The Chinese Room Argument and Its Critics

[4] Searle, John (1984), Minds, Brains and Science: The 1984 Reith Lectures, Harvard University Press



As you have stated in your outline of the debate, I am required to rebut your argument in this round and I have the option of making an argument if I wish. I will do both of these.

I would also like to take this opportunity to request that you please define ‘syntactic’ and ‘semantic’, as they appear to be central to your argument (or rather we should say Searle’s argument, since you have used it verbatim from Searle’s 1990 publication [1] and mistakenly attributed it to his 1984 publication).

Rebuttal 1: A counter-example does not confirm the notion

As Pro has stated at the top of his argument, Pro is attempting to show that computation is not sufficient to produce a mind by illustrating a hypothetical example where computation mimics a particular expression of intelligence (language comprehension) but does not result in true understanding, or intelligence. I would not agree that Pro has succeeded in showing this. However, if we assume for the moment that Pro has, and if we are precise in describing what Pro has shown, we can summarize it as follows:

(P1) There are examples of computational systems which mimic some aspect of human intelligence without producing true intelligence
(P2) The existence of such examples is sufficient to show that computational systems cannot produce intelligence
(C1) Therefore computational systems cannot produce intelligence.

P1 is true, but the conclusion does not follow from the premise because P2 is false. Though Pro has succeeded in providing a counter-example to the notion, a counter-example does not confirm that artificial intelligence cannot be created from computation alone.

Therefore, while I will rebut Searle’s CRA, I do want to point out that doing so is not actually necessary, as far as I can tell (I accept that a reformulation of the argument may reinsert the relevance of the CRA to the notion). Searle’s CRA, even if it is in the future definitively shown to be sound, does not prove that intelligence, or even artificial intelligence, cannot be produced from computation alone. It proves only that the Chinese room is not a successful implementation of an artificial intelligence.

Rebuttal 2: The room is not a computational intelligence (strawman of the CRA)

The point of the CRA is that it presents a system which exhibits what would seem like intelligence based on computation alone, but when it is revealed how the system actually operates, it should become intuitively obvious that the system itself is not intelligent (this has been attacked as an appeal to intuition fallacy in the CRA [7,8]).

However, the system does not exhibit properties of intelligence in the first place, as it is an entirely syntactic system. The room, as you have presented it, is only able to respond in one given way depending on the input, which it must have a preprogrammed response for. It cannot respond to novel inputs, change its responses over time, or interpret two different sentences to have the same meaning.

Since we have programs which can actually model semantic information in language [3,4,5], generate novel responses to novel questions, and change their responses over time, (A1) in Pro’s post is a questionable premise, and the CRA as Pro has presented it is no longer relevant, even if we do not realize it is a strawman to begin with.

By Pro’s own definition of a program (which the CRA is meant to be an example of), there is no semantic information processing. Therefore, there is a contradiction in the CRA, since if (A1) in Pro’s argument is true, and programs are only syntactic by necessity, then the room in the CRA cannot give meaningful responses to most inputs in the first place, and all it shows is that a syntactic program is a syntactic program.

Rebuttal 2 can be summarized as follows

(P1) The CRA attempts to show that simulating intelligent behavior via computation cannot produce intelligence
(P2) The CRA, as presented by Pro, does not simulate intelligence
(C1) The CRA does not show that simulating intelligent behavior via computation cannot produce intelligence

Rebuttal 3: Unrefuted Replies to the CRA

Here I will simply point out that there are several currently unresolved debates regarding the CRA [6]. By this I mean that the replies have not been sufficiently refuted to show that they do not disprove Searle’s CRA, and therefore the CRA has not yet been proven valid. Each of the replies has several versions which the CRA has not yet been able to satisfactorily dismiss. There are several, but I will discuss mainly the brain simulation / connectionist reply.

Rebuttal 3 is summarized as follows:

(P1) Pro has based his argument on the CRA
(P2) To meet his BoP, Pro must show that there is no valid falsification of the CRA
(P3) There is currently a list of replies to the CRA which have yet to be refuted or dismissed
(C1) The CRA is insufficient to meet Pro’s BoP

Rebuttal 4: Connectionist Reply

The connectionist reply is essentially that a sufficiently accurate simulation of a brain (X), or a sufficiently complex computational network with similar properties and functions as the brain (Y), could be called intelligent.

(P1) If either X or Y is possible, then artificial intelligence via computation is possible
(P2) Neither X nor Y has been proven or disproven
(C1) We cannot conclude that artificial intelligence via computation is not possible.

Furthermore, the progress towards AI in connectionist models has been staggering from the late 80’s onwards [2]. Many functions of intelligent agents have been simulated with algorithms using computations which are not formally defined (i.e., the functions of the algorithms are learned, not defined [9,10]), including object recognition [9,10], natural language processing (semantically) [3,4,5], and much more.

Argument: Human Intelligence is Computational

The current paradigm in neuroscience, supported by decades of neuroscience research and computational simulation [9,10,11,12], is that the brain is a computational network of computational units called neurons. Neurons can be accurately simulated as binary output units which compute non-linear functions of their summed inputs, weighted by the strength of their synaptic connections to input neurons, and learning can be simulated through algorithms which change these synaptic connections by simulating the dynamics of information propagation in the brain. Even complex functions of human intelligence have been simulated using such computational networks, such as planning, imagination, and creativity [13], and scientifically, we have yet to find evidence that anything other than computation is involved in the production of intelligence in the brain.
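The model of a neuron described above can be sketched in a few lines: a unit that fires when the weighted sum of its inputs crosses a threshold. The weights, inputs, and threshold below are illustrative values, not from any cited model:

```python
# A binary threshold unit as described above: the output is a non-linear
# (step) function of the inputs weighted by synaptic strengths.
# All numbers here are illustrative.

def neuron(inputs, weights, threshold=0.5):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0   # binary output: fire or not

# Three input neurons, firing (1) or silent (0), with synaptic weights:
print(neuron([1, 0, 1], [0.4, 0.9, 0.3]))  # 0.4 + 0.3 = 0.7 > 0.5 -> fires: 1
print(neuron([0, 0, 1], [0.4, 0.9, 0.3]))  # 0.3 <= 0.5 -> silent: 0
```

Learning, in this picture, is any algorithm that adjusts the weights based on experience; the unit itself only ever computes this weighted-sum function.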


[1] Searle, John (1990) Is the Brain’s Mind a Computer Program?. Scientific American 262 (1): 26-31.
[2] Cole, David (2004) The Chinese Room Argument, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy.
[3] Elci, A, Kone, M, Orgun, M. (2011) Semantic Agent Systems. Springer Publishing.
[4] Shuklin, D E, (2004) The Further Development of Semantic Neural Network Models. Artificial Intelligence. 3: 598-606
[5] Cambria, E, White, B. (2014) Jumping NLP Curves: A Review of Natural Language Processing Research. IEEE Computational Intelligence Magazine. 9(2): 48-57.
[7] Crevier, D (1993) AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks.
[8] Dennett, D (1991) Consciousness Explained. The Penguin Press.
[9] Haykin, S (2009) Neural Networks and Learning Machines. Pearson Education.
[10] Tang, Y, Abdel-rahman, M (2012) Multiresolution Deep Belief Networks. International Conf on AI and Statistics
[11] Churchland, P, Sejnowski, T J (1992) The Computational Brain. Computational Neuroscience. The MIT Press.
[12] Baron, R (2013) The Cerebral Computer: An Introduction to the Computational Structure of the Human Brain. Psychology Press
[13] Byrne P, Becker S, Burgess N (2007) Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychol Rev

Debate Round No. 2


Thanks! I defined syntax in the last round as symbol processing, and semantics as mental content or meaning. However, if we wish for other definitions, I will present them.

Syntax: A set of rules for or an analysis of this [1].

Semantic: Relating to meaning in language or logic [2].

Does the CRA affirm the resolution?

Here Con argues the CRA is only one example of a computational system, so that even if it succeeds, this doesn’t entail that all computational systems are not intelligent. The problem with his argument is that it strawmans the Chinese Room.

Con summarizes the argument in a quite fallacious form, but this isn’t the argument. The argument isn’t attempting to disprove strong AI because one setup that mimics a computer isn’t intelligent. It’s attempting to disprove strong AI because the example uses the fundamental nature of computation: not simply that one system wouldn’t be intelligent, but that this system reflects the very nature of computation.

Is the CRA computationally intelligent?

Con makes a small objection at the beginning of his sentence: that the CRA has been attacked as an appeal to intuition. An appeal to intuition is “an argument that because a proposition does not match one's experience of how things work in general, or how we believe they should work, then that proposition is not true.” [3] However, as I said before, the CRA is based on the definition of computation. The examples given are things like “Our intuition says that continents do not move, since we do not notice any such movement, but it turns out that they do in ways that have produced enormous movements over geologic time.” [ibid] That is something a posteriori, yet the CRA is deductive, relying on the definition of computation.

His further objection is about things the Chinese Room can’t do that computers can. However, this misunderstands the Chinese Room. The man doesn't only need to respond to text; there is no reason why the rule book can’t have him do other actions. The book can say “if xyz comes up, then knock three times and snap your fingers once”. Since actions can be implemented, so can the action of making a new rule: if the characters “我的名字是” come up, then copy what comes after those characters and reply with it whenever “我叫什么名字” comes up. As for saying two different sentences mean the same thing, there’s no reason why the Chinese Room couldn’t do this; the book can tell the man to reply in a way that says they mean the same.
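The rule-that-makes-rules point can itself be written out as pure symbol copying. The Chinese phrases are the ones from the example above; the canned replies are illustrative additions:

```python
# A rule in the book can instruct the man to record a new rule: this is
# still nothing but symbol copying, with no meaning attached at any step.
# "你好" and "對不起" are illustrative canned replies.
stored_name = None   # symbols the man has copied down, uninterpreted

def room(symbols: str) -> str:
    global stored_name
    if symbols.startswith("我的名字是"):
        # Copy whatever follows the trigger characters into the book.
        stored_name = symbols[len("我的名字是"):]
        return "你好"
    if symbols == "我叫什么名字" and stored_name is not None:
        # Replay the copied symbols, still without interpreting them.
        return stored_name
    return "對不起"

room("我的名字是N7")
print(room("我叫什么名字"))  # prints N7
```

The system now "learns" a name, yet every step remains a rule matched on the shape of symbols, which is the point being made against the objection.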

The examples are still symbol processing. The objection misunderstands the Chinese room and thus fails.


I’m sorry, but this objection is really bad. It fundamentally has nothing to do with the CRA or AI and is a misunderstanding of the burden of proof; it’s hardly a rebuttal. It is my job to present the argument and Con’s job to present rebuttals, and for us to go back and forth with rebuttals until the end. This is the format of the debate agreed upon. It is not my job to respond to every single reply in existence to fulfill my BOP. If Con’s objections have been answered, then the BOP has been fulfilled. Con’s view of the BOP is impossible to meet, as DDO only allows a certain number of characters. So even if he were correct, we would still have to do what’s described above. I doubt my opponent would find it sufficient for me to show I have met my BOP by pointing out that there are unreplied replies to his objections, and I doubt he would find it sufficient to show that his argument in favor of AI is unsound because there is unresolved debate on the subject.

This argument is a Gish Gallop [4] and misunderstands how the debate works.


Here Con says a brain simulation may create an AI. However, simulating the formal structure of the brain is just that: a simulation. For example, Ned Block proposed a Chinese nation, where everyone in China simulates every part of an English speaker’s brain through walkie-talkies and flags [5]. Only the formal structure is simulated, not the actual mind. Furthermore, this doesn’t get around the Chinese Room. Instead of one man inside a room, we could have many men inside a gym, all receiving different commands in one area of the gym and given different tokens to act as weights for adjustment. Via trial and error, they can get many things correct. They act like a connectionist system, yet there is still no understanding of Chinese.

Con’s Argument

Con argues AI via computation is possible because human intellect is computational. The initial problem I see with this argument is that it’s a fallacy of composition [6]. Just because one property of the mind can be simulated via computation doesn’t entail the mind itself is a computer. Even if I were to go so far as to concede that parts of the human intellect are computational, it still wouldn’t follow that the human intellect as a whole is computational. This could make problem solving via computation work, but not actual conscious problem solving.

Furthermore, our brains don’t seem to work like a connectionist system. Connectionist machines learn via error backpropagation [7], but there is no evidence that suggests brains learn this way [8]. It’s also unclear that connectionist machines can account for the systematic nature of the mind. Fodor and Pylyshyn pointed out that someone who understands a phrase like “N7 speaks to UndeniableReality” can understand the phrase “UndeniableReality speaks to N7”, yet a connectionist machine cannot understand the latter solely in virtue of the former [9].
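For clarity on what is being referred to here: error backpropagation amounts to nudging each connection weight against the gradient of the output error. A one-weight sketch, with illustrative numbers:

```python
# Error backpropagation in miniature: adjust a weight in proportion to how
# much it contributed to the output error. All values are illustrative.
w = 0.0                       # a single synaptic weight
lr = 0.1                      # learning rate

for _ in range(100):
    x, target = 1.0, 0.8      # input and desired output for this example
    y = w * x                 # forward pass (a linear unit, for simplicity)
    error = y - target        # how wrong the output was
    w -= lr * error * x       # backward pass: step down the error gradient

print(round(w, 3))            # w has converged toward the target mapping
```

The biological-plausibility worry cited above is that this procedure requires an explicit error signal to be propagated backwards through the network, a mechanism not observed in brains.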

What I believe is the biggest problem is the fact that anything can be a computer. It is first important to understand the difference between intrinsic and observer-dependent entities. Something which is intrinsic, such as mass, force, or gravity, is within the nature of the entity itself. Something which is observer-dependent, like marriage, money, or government, exists because we give value to it.

The Church-Turing thesis shows that any real-world computation can also be computed on a Turing machine [10][11]. Computation can be carried out with water pipes or notches, and Ned Block gave an example of it being carried out with cats and mice [12]. The important question is: in virtue of what are these things computers? There is nothing about the properties of the pipes or the animals which makes them computers; rather, we assign a computational interpretation to them. Computation is assigned, not discovered in nature. So is the brain a computer? Trivially, yes, but this isn’t meaningful, because no physical system is intrinsically a computer. The position isn’t coherent [13].

Con’s computational theory of the mind is filled with many problems: its conclusion doesn’t follow from its premises, connectionist machines don’t act like brains, and a computer isn’t something that is intrinsic in nature. We also see that his first and second rebuttals misunderstand the CRA, his third rebuttal misunderstands the parameters of the debate, and his fourth rebuttal doesn’t escape the CRA.







[5] Block, N. 1978. “Troubles With Functionalism”



[8] The Great Courses. Philosophy of Mind: Brains, Consciousness, and Thinking Machines Lecture 16 “Brains and Computers”.

[9] Fodor, J., and Pylyshyn, Z., 1988, “Connectionism and Cognitive Architecture: a Critical Analysis,”



[12] Block, N. 1995 “The Mind as the Software of the Brain”

[13] The Teaching Company, Philosophy of Mind by John Searle Lecture 6



Thanks for an excellent set of rebuttals! I’m glad that this will be challenging.


Pro’s definition of syntax is misquoted from the source and thus incoherent. The full definition is

Syntax: A set of rules for the arrangement or analysis of words and phrases.

Pro’s definition of semantic is too vague, as it does not even exclude syntax. I expect Pro will agree with

Semantic: the relationships among potentially syntactic objects and what they denote [2]

I wish to add

Computation: the execution of an algorithm [8]

1) The CRA does not affirm the resolution

The BoP is on Pro, which means that it is up to Pro to prove the resolution that AI cannot be achieved through computation alone. The resolution is not “the CRA cannot be disproved”. The most direct way to meet the BoP is to provide evidence indicative of any aspect of intelligence that is not computational. But Pro’s entire argument relies on the CRA, so Pro must show that the CRA is both correct and sufficient for proving that AI is not possible via computation.

2) The CRA is not correct or sound
a. False equivalence

Pro states that the man in the room is functionally equivalent to someone who speaks Chinese. This is a false equivalence, because the man is just a part of the system, and it is the whole system that is meant to appear to speak Chinese. Pro would have to prove that the system doesn’t speak Chinese, which contradicts the argument.

b. The CRA is self-defeating

If A1 is true (programs are only syntactic) and A3 is also true (semantics can’t be derived from syntax), then the system cannot simulate semantic understanding in the first place (it wouldn’t pass the Turing Test) [1].

To pass the Turing test and simulate understanding Chinese, the system would have to consistently output semantically correct responses. Therefore it must be able to distinguish between semantically correct phrases (A) and syntactically correct but semantically incorrect phrases (B), and also incorporate new syntax in a semantically correct way.

Distinguishing between A and B would require semantic understanding. Otherwise the system wouldn’t be able to see the absurdity in “The pizza was so hot it screamed”.

To integrate new words, the system would need to be able to learn new words and how to use them in semantically correct ways. Therefore it would have to understand how a new word relates to other syntactic and semantic objects, requiring semantic understanding.

If the system overcomes either of these, it has semantic information processing somewhere. If it distinguishes between A and B by having a list of each (which cannot be done in finite time), then the author of the paper that the man receives infuses the semantic understanding. If the system can correctly incorporate new syntax, then the author of the paper must incorporate the semantic relationships between new words and previous words.

Therefore, we are left with only three options:
i. If the CRA presents a system that passes the Turing test, then the system is able to process semantic information, thus violating (A1)

ii. If pro shows that (A1) holds and the system passes the Turing test (produces consistently correct semantic output) then (A3) is violated, since the system achieved semantic coherence through syntax alone
iii. The system does not pass the Turing test, making the entire basis of the CRA moot with respect to proving the resolution that AI cannot be achieved via computation

c. Premise (A1) is false

Programs are capable of achieving semantic coherence through computation. Computational semantics incorporates semantics into computation so that natural language programs and computational reasoning can make use of semantic representations and perform correct semantic inference [3]. One example extracts meaning from text, builds semantic representations, and performs semantic inference [4].

Thus it is impossible for the CRA to prove the resolution, as it itself is not proven nor sound.

3) The CRA is insufficient

Pro’s CRA fails to take into account progress in AI and computational neuroscience since the mid 80’s. Intelligent systems exist which both learn and extend beyond syntax so Pro’s CRA does not represent all computational systems or reflect the nature of computing.

a. Computational systems can learn

The CRA fails to represent learning, and therefore does not represent all computational machines. As Pro agrees, computational systems are capable of learning, though Pro may not be aware that this extends to semantics [5,6,7]. The CRA has no mechanism by which to learn, and therefore the CRA does not represent all programs or the nature of computation.

b. Connectionist models and brain simulations

Pro rejects the notion that an accurate brain simulation can result in AI because it only simulates the formal structure of a brain. This is incorrect. Brain simulations and connectionist models also simulate the dynamics, plasticity, and abilities of the brain [5,6]. Thus the refutation is not grounded in a true understanding of connectionist models and brain simulations.

Modern connectionist models are not even programmed to perform certain computations or solve certain problems. They learn instead. Deep Belief Networks learn semantic representations and perform semantic inference [6]. They are also able to imagine new semantic concepts based on what they are already familiar with (music composition [7]).

Together with 2c, this means the CRA is merely an example of a computational system and Rebuttal 1 from Round 2 holds.

Argument: Human Intelligence is Computational

The argument isn’t that aspects of human intellect can be simulated via computation and that therefore intellect is computational. The argument is that biological neural networks are computational systems which execute algorithms, the brain is simply a large biological neural network, and neural networks are computational systems [9,10]. Therefore this is not a fallacy of composition.

The rebuttal against connectionist machines is based on a misunderstanding. Backpropagation is only how certain connectionist models learn, and those models are not meant to simulate brains; backpropagation is used to learn classification [5]. Other connectionist models are more concerned with being biologically plausible [11].

Pro’s final rebuttal is that computation isn’t inherent to natural systems and exists only in our interpretation. Since humans can compute, there must be at least some computation happening in our brains. There are also several fields of science devoted to studying computation in nature [12]. However, what makes our brain computational is that all of the aspects of intelligence we identified are explainable by neural computation running algorithms.



[3] Blackburn & Bos (2005) Representation and Inference for Natural Language: A First Course in Computational Semantics

[4] McNamara (2010) Computational Methods to Extract Meaning from Text and Advance Theories of Human Cognition. Topics and Cog. Sci. 3(1): 3-17

[5] Haykin (2009) Neural Networks and Learning Machines

[6] Lee, Grosse, Ranganath, Ng. (2009) Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Conf on Mach Learning

[7] Learning to Create Jazz Melodies Using Deep Belief Nets (2010) Conf on Comp Creativity.


[9] Churchland, Sejnowski (1992) The Computational Brain. Computational Neuroscience. The MIT Press.

[10] Baron (2013) The Cerebral Computer: An Introduction to the Computational Structure of the Human Brain. Psychology Press

[11] Ciresan, D. (2012). Multi-column deep neural networks for image classification. Computer Vision and Pattern Recogntion


Debate Round No. 3


Thanks! It seems there is some serious misunderstanding about how the term semantics is used. Once we see this misunderstanding, we will see how the majority of Con’s round fails. I am using semantics in the sense used in logic. The computer may be able to work with the semantics of language, but its reasoning has no semantics, in that its reasoning has no conscious meaning. When I say syntax isn’t sufficient for semantics, I am saying the actions of a computer aren’t sufficient for consciousness or intentionality. This has nothing to do with a computer working with language or using language correctly.

Also, I meant to copy the other half of the syntax definition from the source but forgot to do so. I have no problem with Con’s definition, but be aware that when I use the term semantics I am referring to the meaning of the computer’s supposed intelligence. I have made this repeatedly clear. In my main argument I made it clear mental content is semantics, and in my last round I stated “I defined syntax in the last round as symbol processing and semantics as mental content”.

1: Does the CRA affirm the resolution?

Con drops his original argument. Instead, Con argues that I must show the CRA is correct. Remember, his original argument was about the CRA not affirming the resolution even if it’s correct. Con is arguing something entirely different here.

Rebuttal 2

Con again drops his argument for a new one. Remember, in his original round he argued the CRA isn’t computationally intelligent. Now he is bringing up the systems reply and an argument based on the equivocation of semantics.

In the false equivalence section Con presents the systems reply. He states that just because the man doesn’t understand Chinese doesn’t mean the system doesn’t. However, he would have to take the absurd position that an inanimate room is intelligent. This is what defenders of the systems reply advocate [1], and it’s what they must advocate. If one doesn’t advocate for a conscious room, then one agrees a system can still act intelligent without being intelligent. This shows the absurd, ad hoc nature of the systems reply. Another problem with the systems reply is that the room seems irrelevant: have the man memorize the book and step out of the room. There is nothing more to the system than the man, yet he still doesn’t understand Chinese.
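The purely syntactic lookup the CRA describes can be sketched in a few lines. The rule strings below are invented purely for illustration; the point of the sketch is that the program matches character shapes only and nothing in it represents meaning (whether real programs are limited to this is precisely what Con disputes).

```python
# A crude "rule book" pairing input symbol strings with output symbol
# strings, as in the Chinese Room. The entries are invented for this sketch;
# the program compares shapes only and represents no meaning.

rule_book = {
    "你好吗": "我很好",
    "你是谁": "我是一个房间",
}

def chinese_room(symbols):
    """Return whatever reply the book pairs with the input, or a stock reply."""
    return rule_book.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # 我很好
```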

Con then presents a dilemma, but once we understand what’s meant by semantics, it’s very easy to see Con’s fallacy of equivocation. Con argues the CR must be able to use words in semantically correct ways (in terms of using language correctly) and concludes it is self-refuting, as it must be semantically intelligent (in terms of consciousness). When laid out like this, we instantly see how flawed Con’s argument is. Con needs to understand the CRA isn’t making a statement about using language properly; it’s making a statement about consciously understanding the language.

Con’s next argument is about how computers use language in semantically correct ways. This argument makes the same error as his previous one: the equivocation of semantics.

This section has been nothing but one big fallacy of equivocation; the CRA stands.

3: The BOP

Con drops his third rebuttal on the BOP entirely, meaning I win in this section.

4: Connectionism

Con objects to the CR being able to learn. This fails because I have shown how it can learn. In my last round, section 2, the second paragraph is all about how the CR can change its responses and learn. Con has dropped my argument for a changing Chinese Room, making his argument refuted. He then goes on to equivocate semantics again.

In the next section Con states the formal structure of the brain isn’t the only thing simulated. However, the abilities, dynamics, and plasticity of the brain must still be simulated in a formal way to be considered computation. By the Church-Turing thesis, any computer can do the job of any other computer. One computer may be faster, but it can still perform the same computation as the other. This follows when you realize computation is just the execution of an algorithm. You can execute an algorithm with sticks, doors, ping-pong balls, other computers, etc. This means if we can create AI through a connectionist neural net, then we can also create AI on a Von Neumann machine, which means simulating the formal structure of the brain is sufficient for an actual mind. You can actually run a connectionist neural net on your normal Von Neumann computer; there are tons of free connectionist programs you can download to do so (they’re really fun) [4][5][6][7]. If we cannot create an AI via the formal structure, then it’s no longer computation and therefore irrelevant to the debate.
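The point that a connectionist net runs as an ordinary algorithm on a conventional machine can be illustrated with a minimal two-layer network evaluated step by step in plain Python. The weights below are hand-picked for this sketch so the net computes XOR; real connectionist models learn their weights rather than having them hard-coded.

```python
import math

# A tiny two-layer feedforward network evaluated step by step, showing that
# a connectionist model executes as a plain algorithm on a Von Neumann
# machine. Weights are hand-picked for this sketch (they implement XOR).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer of units: each computes a sigmoid of its weighted input sum."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def net(x1, x2):
    hidden = layer([x1, x2], [[20, 20], [-20, -20]], [-10, 30])  # ~OR, ~NAND
    (out,) = layer(hidden, [[20, 20]], [-30])                    # ~AND
    return round(out)

print([net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```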

Con’s rebuttals have failed. His attacks on the CRA are based on equivocation and misunderstandings. My argument remains standing.

Con’s argument

I would like to note that this argument can only succeed if the CRA is refuted. Since the CRA remains sound, Con’s argument fails.

Con disputes my claim that the argument is a fallacy of composition. First, he only responded to my first charge of it, the one where I negated the idea that human intellect is computational, but I made another charge as well: even if parts of human intellect are computational, this doesn’t entail all of human consciousness is computational. That second accusation remains standing. My first charge also stands, because the only way we would know human intellect is computational is that parts of it can be simulated.

Con states that back propagation is only how some connectionist neural nets learn; however, the very paper he cites as a supposedly better biological model itself says its neural nets learned by back propagation [8].

Con also says models using back propagation aren’t meant to simulate the brain, despite arguing in the last round that they do:

“Neurons can be accurately simulated as binary output units which compute non-linear functions of their summed inputs weighted by the strength of their synaptic connections to input neurons,”

My argument that connectionist machines cannot account for the systematic nature of the mind was dropped and thus remains standing.

Con’s attempt to refute the observer-dependent nature of computation never gets off the ground. Nothing Con said contradicts the claim that computation is observer dependent. Appealing to humans being able to compute, to computation in nature (which supports my argument), and to human intelligence being simulable does nothing to show computation is intrinsic. I can agree with all of these points and still point out that they are trivial, because computation isn’t intrinsic. The statement that since humans can compute, the brain must be a computer is very fallacious. It’s like saying that since humans smell, there must be some smelling in the brain. It also assumes a token identity theory of the mind, although there are many other theories of mind which say the mind supervenes on the brain rather than being identical to it [9][10]. Computation in nature supports my argument, as anything can be a computer; it fails as a refutation. As for our brains being computational because they can be explained by neural computation, this too does nothing to refute the argument. Anything can be explained by computation, because computation is observer dependent.

My argument is that computation is observer dependent because anything can be a computer by the Church-Turing thesis. Con has failed to show there is something about computation which makes it intrinsic.

My rebuttals remain standing.


Con has dropped a lot of arguments and has made several errors.

  • Con’s first argument was dropped.

  • My argument about a changing CR was dropped

  • Con repeatedly uses the fallacy of equivocation

  • Con dropped his third rebuttal about the burden of proof

  • Con has dropped my argument that connectionism cannot account for the systematic nature of the mind.

My arguments remain standing, whereas Con’s fail. The resolution is affirmed.

Sources in comments



There are several problems with Pro’s rebuttal:

Pro fails to refute most of my points and attacks strawmen instead.

Pro fails to see that I have not dropped any of my arguments.

Pro accepts the definition of semantics I provided and then uses an unrelated definition, ‘mental content’, which itself needs defining. Pro switches between definitions to make his argument work.

Finally, Pro misstates my definition of semantics as being only about language. It says that semantics is about what language can refer to (what a dog is, not how the word ‘dog’ is used). When I say ‘semantic computing’, I am talking about computers understanding concepts that are not in their programming, not language comprehension.

The CRA does not affirm the resolution
My original argument stands. If the CRA is proven, it does not affirm the resolution, because it is only one example of computation. Pro asserts it represents all computation but provides no justification, whereas I have shown why this assertion is incorrect. The CRA is just an example because it does not take into account learning or semantics, both of which are included in modern AI (see the examples I have provided). The CRA is only syntactic and therefore does not represent all programs. Once learning and semantics are added to the CRA, it becomes self-contradictory.

The resolution is not “the CRA cannot be disproved”. Since this is what Pro seems to be arguing, he must show that proving “the CRA cannot be disproved” also proves “AI cannot be achieved through computation alone”, which is what we are actually debating.

The CRA is not correct
Pro’s response to the systems reply is the appeal to intuition fallacy I mentioned in the last two rounds. Pro has assumed that if the man does not understand Chinese then neither does the system (fallacy of composition) because it is absurd to say the system is intelligent (appeal to intuition). This is like saying that since neurons don’t understand words, brains can’t either, and that this is obvious because brains are inanimate objects. This is a fallacious rebuttal.

Pro does not explain how the CRA learns. He asserts that the book can make new rules without explaining how these new rules come to be or how they can be semantically correct. No matter how, if the system can add new concepts in semantically correct ways, it must understand the semantic content of those concepts. Furthermore, in order to distinguish between semantically correct phrases and phrases which are semantically incorrect but syntactically correct, the system needs semantic understanding (not just language). Otherwise the system doesn’t speak Chinese: a contradiction.

Pro has neither met his BoP nor understood the BoP in this debate. To meet his BoP, Pro must show that the CRA is both correct and sufficient for proving the resolution.

Pro’s argument is based entirely on the CRA not being disproven. No notion can be proven simply because its premises cannot be disproven.

The CRA, even if proven correct, is not sufficient to meet the BoP. An example of computation is not sufficient to prove anything about all computational systems.

The CRA has already been disproven by the many existing refutations, as well as the many existing programs which do what the CRA claims is impossible. I have provided references to both of these in Round 3.

Pro incorrectly asserts that neural dynamics and plasticity must be simulated formally to be computation. Pro ignores the literature I have provided that shows direct counter-examples.

Computation is the execution of an algorithm (see definition), and this is what biological brains do [6,7]. Pro strawmans the argument by misrepresenting the Church-Turing thesis, which, by Pro’s own source, says that any computation can be performed by a Turing machine, not by any computer [5].

This invalidates Pro’s later strawman argument, claiming that I said intelligence can be achieved by simulating the brain’s structure. As I stated, one needs to simulate the structure, dynamics, AND plasticity of the brain.

Pro gives examples of connectionist models as if they represent all connectionist models and as if this shows they cannot simulate brains. Here is the first line from one of his own sources: “ is a comprehensive, full-featured deep neural network simulator that enables the creation and analysis of complex, sophisticated models of the brain in the world” [1]. This contradicts Pro’s own point.

Pro fails to refute my examples or arguments about connectionism. Pro also fails to refute the notion that connectionist models do not share the same limitations as the CRA. The CRA cannot represent connectionist models.

Human Intelligence is Computational
The point of this argument is this: the current scientific theory explaining human intelligence is that it is computational [6,7]. To say that AI cannot be achieved through computation would require the current scientific explanation to be disproved. Pro is either arguing that current scientific theory is wrong, or that human intelligence is impossible.

Pro is incorrect in claiming that this argument can only succeed if the CRA is refuted. The CRA can only succeed if the current theories of neuroscience are refuted.

Pro strawmans the argument again. I repeat: The argument is not that aspects of human intellect can be simulated via computation and therefore [all human intellect] is computational... this is not a fallacy of composition. I also did not argue that since brains can compute they are computational.

Pro misunderstands backpropagation and neural computation. Backpropagation is a technique in machine learning for engineering applications, not a simulation of the brain. The quote you have from me does not describe backpropagation; ironically, it describes how biological neurons function by computing.
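What gradient-based learning of this kind actually does can be sketched in its simplest, single-unit form (the one-layer case of the same error-reduction idea that backpropagation extends to multi-layer networks). This is a minimal illustration of an engineering procedure, not a brain model; the data, seed, and learning rate are chosen arbitrarily for the sketch.

```python
import math
import random

# Minimal sketch of gradient-based learning: a single logistic unit trained
# by gradient descent (the one-layer case of the idea backpropagation
# generalizes). An engineering procedure, not a claim about brains.

random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
rate = 1.0

def predict(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

for _ in range(2000):
    for (x1, x2), target in data:
        err = target - predict(x1, x2)  # cross-entropy gradient for a logistic unit
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

preds = [round(predict(x1, x2)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1] -- the unit has learned OR from examples
```

Nothing in the loop hard-codes the OR function; the weights are adjusted to reduce prediction error on examples, which is the sense in which such models "learn" rather than being programmed.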

To support the idea that connectionism cannot account for the ‘systematic nature of mind’ (undefined), Pro cites a philosophy paper from 1988. Modern connectionism came about in the ’90s and revolutionized computational intelligence. The specific example Pro says is unsolvable was solved long ago and is today considered trivial [2,3].

Pro’s assertion that computation is not intrinsic is without evidence. Neurons perform computation (the output is generated from the input given the current state algorithmically). If there were no observer they would still be performing computation. Computation is intrinsic to brains [4,6,7].

Pro fails to meet his BoP on two accounts: the CRA was not proven correct, and the CRA was not proven sufficient for the resolution.

I have shown that the CRA is not correct because it is self-defeating and based on false premises, and I have shown that the CRA is not sufficient to meet the BoP.

Pro fails to understand the meaning of computation. He mistakenly considers computation to be a set of hard-coded instructions. I have provided several counter-examples.

Pro fails to demonstrate any aspect of intelligence that is not computational. I have given examples of connectionist models that exhibit properties Pro claims would not be possible.

Pro’s understanding of connectionism is poor and based on outdated science.

Instead of arguing my points, Pro attacks strawmen, falsely claims that I have dropped my arguments, and declares victory. Yet many of my actual arguments have gone unrefuted.

[2] Kuhn & DeMori (1995) The application of semantic classification trees to natural language understanding. Pattern Analysis and Machine Intelligence, 17

[3] Blackburn & Bos (2005) Representation and Inference for Natural Language: A First Course in Computational Semantics

[4] Giunti (2006) Is Being Computational an Intrinsic Property of a Dynamical System? Systemics of Emergence, 683-694

[6] Haier (2011) Biological Basis of Intelligence. Cambridge Uni Press

[7] Dayan & Abbott (2001) Theoretical Neuroscience. MIT Press

Debate Round No. 4
101 comments have been posted on this debate. Showing 1 through 10 records.
Posted by n7 1 year ago
Didn't have the internet to respond earlier.

Anyway, it looks like you're confusing information processing with computation. The brain processes information, but an algorithm is something that is defined by us. We need to define what steps it takes and interpret what is a 0 and what is a 1. Searle expresses this idea better than me:

"To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics."

The brain however, doesn't process information in this way, as it isn't observer-relative. Sorry it's not in my own words, he explains it really well and I don't have a lot of time right now.

Yes, he is talking about whether an explanation is good, but he is also talking about interpretation of data, as in quantum mechanics.
Posted by UndeniableReality 1 year ago
Fair enough.

Something is computational if computation, the processing of information via algorithms, is intrinsic to its functioning (I'm not sure I would hold to this definition strictly).

A brain objectively processes information. That is intrinsic to its functioning. A wave in water doesn't process information even if we say so. It does not process information, use algorithms, or make decisions. I was differentiating between what waves in water are and do, and what we could potentially do with waves or water (we could make a computer using waves in water plus other components).

He's not saying that in the quote you gave me. He is saying that there is a role for epistemological philosophy in science, in terms of how we can tell if an explanation is good (for example, it shouldn't contradict everything else we know, unless it can explain that too), but the formulation of hypotheses, I think, is the main contribution of philosophy to science.
Posted by n7 1 year ago
I am not fully convinced by Penrose's argument, I just wanted to see if you had some thoughts on it.

If I see what you're saying, you are saying the brain objectively processes information and that is computation, as opposed to waves, which process information only because we say so?

I also am not asking a question about the definition of computation, but asking in virtue of what is something computation.

Also, Carroll is saying philosophy isn't just good for making hypotheses, but also for determining which one is true and for interpreting data.
Posted by chewster911 1 year ago
Congrats on the win
Posted by UndeniableReality 1 year ago
Envisage, you missed the voting period =P
Posted by UndeniableReality 1 year ago
No worries.
When I saw how long my post on definitions was, I was worried myself that someone might think I've already started making arguments =P
Posted by Envisage 1 year ago
Ya, my bad.
<---- Idiot
Posted by Envisage 1 year ago
Nvm... I started to read.
Posted by UndeniableReality 1 year ago
There were no arguments in the first round. The first round only discusses definitions. The opening arguments occurred in the second round.
Posted by Envisage 1 year ago
I thought the format was Con opens in the second round, not first.

Am I supposed to be disregarding Con's opening round, since Pro doesn't appear to have pointed it out?
5 votes have been placed for this debate. Showing 1 through 5 records.
Vote Placed by JayConar 1 year ago
Who had better conduct: Con (1 point)
Made more convincing arguments: Con (3 points)
Total points awarded: Pro 0, Con 4
Reasons for voting decision: "I'm sorry, but this objection is really bad" from Pro loses him a conduct point. Spelling and grammar were decent from both debaters. In order to win this debate, Pro had to prove that computation would never be sufficient to create artificial intelligence alone. He did not do this, as Con correctly pointed out: "The CRA is insufficient." Pro's entire argument was based upon the CRA being sufficient. Thus he did not fulfill his burden of proof.
Vote Placed by IvenMartin 1 year ago
Agreed with after the debate: Con (0 points)
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Con (2 points)
Total points awarded: Pro 0, Con 5
Reasons for voting decision: I would like to say that Pro initially started with a great rebuttal ("Syntax"). In this debate, it would have been much easier to follow if Pro had given an introduction rather than just defining terms: early developmental research, critiques, responses, advancements, et cetera. Personally, I believe that AI can be achieved through a sufficiently complex computational network with similar properties to the brain. Imagine that we can get a full blueprint of the brain and make a copy, with electro-whatever mimicking the electrochemical activity in the brain, that allows it to store information, retrieve it as memories, and convey information (whether through speech development or otherwise). Then let's imagine that this system can process information and calculate responses by analyzing large quantities of information in a short period of time. Who knows, maybe computers will be smarter than humans one day via artificial intelligence. How? I don't know! Thank you for the enlightening debate.
Vote Placed by chewster911 1 year ago
Agreed with after the debate: Con (0 points)
Who had better conduct: Con (1 point)
Made more convincing arguments: Con (3 points)
Total points awarded: Pro 0, Con 4
Reasons for voting decision: It looks like Pro did not meet the BoP, since he did not present a strong case for his position, so he loses a conduct point. His only argument was the CRA, and I don't think that it was enough to prove that intelligence is impossible using only computation. Con did a good job of rebutting Pro's contentions, so I give argument points to Con. Everything else is tied. Congrats on an interesting debate to both of you!
Vote Placed by Philosophybro 1 year ago
Agreed with before the debate: Pro (0 points)
Who had better conduct: Pro (1 point)
Made more convincing arguments: Pro (3 points)
Used the most reliable sources: Pro (2 points)
Total points awarded: Pro 6, Con 0
Reasons for voting decision: OK, I have changed my mind on this vote 3 times already and I may change again. Conduct point against Con for the gish gallop; as n7 said, it is poor debate conduct. Argument points for dropped arguments: I look in UR's round 3 and see he missed the systematic argument and stopped his gish gallop argument completely. After a lot of reading, I see what Pro is saying about Con equivocating semantics. It's not about language but about the mind. Maybe Con did this because it's a philosophy thing, but I still think it's a big problem with Con's case. On connectionism, it is sort of like my debate with n7 over the Turing thesis. UR says the thesis is about Turing machines, not computers, but I thought computers are Turing machines, so I don't know about that one. Source points for the semantics error, which makes a lot of Con's sources not relevant. UR says n7 misquoted from a source, but I looked it up and the same argument is still in there. I can clear the vote up if you guys want. My native language isn't English. Good debate anyway.
Vote Placed by Tweka 1 year ago
Made more convincing arguments: Con (3 points)
Used the most reliable sources: Con (2 points)
Total points awarded: Pro 0, Con 5
Reasons for voting decision: RFD in comments.