The Instigator: lazarus_long, Pro (for), Losing, 13 Points
The Contender: Tatarize, Con (against), Winning, 22 Points

We will eventually see the rise of artificial intelligence, and even artificial consciousness.

Voting Style: Open | Point System: 7 Point
Started: 1/4/2008 | Category: Science
Updated: 9 years ago | Status: Voting Period
Viewed: 2,333 times | Debate No: 1365
Debate Rounds: 3 | Comments: 12 | Votes: 9

 

lazarus_long (Pro)

The possibility of artificial intelligence is intriguing at least in part due to the fact that we have so poor a grasp on the question of intelligence and "thinking" in general. What does it mean, really, to be "intelligent?" What is really going on when someone is "thinking?" While all of us might have some opinions on these questions, it's very difficult to see how anyone could actually claim to know the answers beyond any reasonable doubt. If we truly did understand "thinking," for example, one would think that the question of building a machine to perform that function would be trivial – either that, or we would know, clearly and unambiguously, that the construction of an artifact which could think is, in fact, impossible – and we would know exactly why.

But we don't really know what intelligence or thinking truly are. Lacking this knowledge, we're forced to fall back to some simpler statements of what we do know, and then to try to derive some better questions from there. We all (presumably) know that thinking exists, since we have direct experience of it. It also seems reasonable to assume that other people – those other creatures we see who look, act, and in general seem to be sort of like us – are also "thinking." But we have to admit that we do not, and perhaps cannot, truly know that this is so, at least not without some careful consideration. How is it that we can really say we know that someone else thinks, or is "intelligent?" As noted, we really base our assumptions here on the fact that other human beings seem to be "like us," and we already know that we think. But what if you encountered a form of life that wasn't at all like us in appearance, but seemed to behave or communicate like we do? What if, on the other hand, you found a living being who looked human, but who didn't at all behave or communicate like a human? To what degree might we say that other creatures on the Earth "think" – even though they don't look or act anything like us? Or what of the unfortunate person who, through injury to the brain, no longer behaves or communicates in a "human" manner at all, but who in purely external appearances obviously is still "human?" Appearance alone does not seem to be much of a basis on which to judge intelligence.

Intelligent non-humans have also long been a staple of science fiction; we don't think twice about an SF story which features sentient aliens, and seem very willing to accept them even though they differ from us (in some cases, to an extreme degree) in appearance. But most people would claim that this is because, strange though these imaginary beings might be, they are still described to us as living beings – they are biological in nature. It seems fair to point out, though, that our bias toward granting living beings the ability to think is simply due to the fact that we have yet to experience non-biological thinking entities. But this really is no evidence that such things are impossible. At last, then, we come to the supposed main question: is it possible that a non-biological entity, and in particular a created entity – a machine – can legitimately be said to think? How can you really tell if someone (or something) other than yourself is "thinking?"

In 1950, Alan Turing, one of the pioneers in computer science, proposed a rather simple test to answer this question. He described an experiment: Suppose you are in a room which contains a communications device of some sort – perhaps a PC running "chat" software. You can use this to converse with a "person" (exactly where they are is unknown, and really doesn't matter), and you can say or ask anything you want. But the "person at the other end" can be either an actual human being or a machine, and in this test they are in fact "switched in" at random. Turing claimed that if you could not reliably tell when you were talking to the human and when you were talking to the machine, then you had no choice but to recognize the machine as "thinking" at least as well as the human.
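A minimal sketch of that protocol in Python; everything here (the prompts, the canned machine reply) is an invented illustration of the setup, not anything from Turing's paper:

```python
import random

def human_responder(question):
    # Stand-in for the hidden human: in a real trial this reply would come
    # from a person in another room, not from the judge's own keyboard.
    return input(f"(hidden human, answer this) {question}\n> ")

def machine_responder(question):
    # Stand-in for the chat program under test; a real candidate would
    # generate a substantive reply rather than a canned one.
    return "That's an interesting question. What makes you ask?"

def imitation_game(num_questions=3):
    # The "person at the other end" is switched in at random, as described.
    responder = random.choice([human_responder, machine_responder])
    for _ in range(num_questions):
        reply = responder(input("Ask anything: "))
        print("Reply:", reply)
    guess = input("Was that a human or a machine? ").strip().lower()
    actual = "human" if responder is human_responder else "machine"
    print("Correct." if guess == actual else f"Wrong - it was the {actual}.")

if __name__ == "__main__":
    imitation_game()
```

Turing's point lies in the scoring at the end: if, over many trials, the judge's guesses are no better than chance, the machine must be recognized as "thinking" at least as well as the human.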

Machines capable of passing the Turing test have yet to be developed. However, given the pace of advancement in the computer industry over the past 50 years, few today doubt that this will eventually be achieved, and possibly fairly soon. The average person is becoming more and more comfortable accepting the notion of a machine which can "think" in this context. But, many of us say to ourselves, it's not "really" thinking, it's just a simulation of what humans (or if we're feeling more generous, "living things") do. When pressed to elaborate, most of us will resort to making a distinction between "thinking" as Turing means it, and actually possessing "consciousness."

But this winds up at an argument precisely analogous to what we just covered with respect to "thinking." Each of us knows "consciousness" exists because we each experience it. We assume others like us also possess "consciousness," to a great extent because they seem to be "like us" and because they claim to be conscious, too. Again, though, we have to admit that we will never truly know if another being is conscious. It is always possible that the other beings we are dealing with are simply simulating consciousness well enough to enable us to believe that they are. We have nothing comparable to the Turing test for consciousness, and so wind up believing that other beings are conscious basically because they tell us that they are.

What, though, if a machine – an artificial intelligence – were also to make that claim? On what basis could we accept or reject it? Failing a clear test for "consciousness," would we be forced to also "take the machine's word for it" in order to be consistent with how we treat similar claims by living beings? There's a sort of "back door" approach that is sometimes argued here – in the case of human beings, the "test" that is often applied for at least the capacity for consciousness is the detection of brain activity, as may be shown via an "EEG." A permanent loss of such brain function is called "brain death" and is commonly used as a determining factor in saying that the "person" in question is dead – whether or not the brain is physically intact (in a gross sense) and the other biological functions of the body continue on. "Consciousness," this supposedly tells us, relies on something more than merely being alive. But this argument works at least as well the other way: if "brain waves" are the test for the capacity for consciousness, then are we not saying that consciousness is inseparably linked to a purely physical phenomenon? And if consciousness arises from purely physical functions, it seems reasonable that it could arise in other "purely physical" entities of a nature other than biological – a mechanical and/or electrical being, for instance.

Last-ditch attempts to keep "thinking" and "consciousness" as solely belonging to humans (or at least, living things) will then generally be forced to turn to claims of a "spirit" or "soul" as the quality which must forever distinguish natural living entities from artificial ones. But an honest inquiry here, it seems, must come to the conclusion that demonstrating the existence of a "soul" in humans and other "biologicals," and/or denying its existence in artificial beings, is at least as difficult as clearly answering the questions of "intelligence" and "consciousness." Unless it is argued that "soul" merely means the same thing as "consciousness," we cannot even claim to have direct experience of our own "souls" as a distinct property. The distinctive feature claimed for "souls" is that "soul" refers to a quality of "self" which survives physical death – i.e., it is something separate from, and not dependent upon, mere biological living. But if this is so, then it again seems that we have no basis for denying the possibility of a "soul" in an artificial entity; in fact, it would seem very arrogant to do so!
Tatarize (Con)

Let me make a note concerning the topic on the grounds of time:
--- "We will see" is taken to mean us, within our lifespans, we will witness this. There is an inherent time limit to the topic.

We will not see the rise of artificial intelligence or artificial consciousness.

I assert this on four grounds.

1) AI is a complete failure.
2) AI is impossible.
3) There is no "artificial" intelligence. There is only intelligence.
4) You cannot simulate the soul.

Artificial intelligence has been a mainstay of science fiction as well as actual computer science. We have been very active in the field of AI since the 1960s. When I say we, I mean computer scientists like myself. We have worked tirelessly to create large information bases, create extremely complex programs, process large amounts of data, formulate huge parsing algorithms on the front and back end of massive programs for the purpose of creating AI.

We have 40 years of research under our belts by some of the greatest minds in computer science; we've developed LISP and Prolog, both of which have made huge contributions to the field of computer science. After these 40 years of intense research by some of the greatest minds in the field, how close are we? If we take a look at our progress over these four decades, extrapolate how far we've come, and look at how far we have to go, when can we estimate completion?

Let's do the math:
40 years has gotten us 0% of the way. So, at this rate we're going to finish in an infinite number of years!
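(Written out as a formula, this extrapolation is simply a formalization of the joke above, not new math:)

```latex
\text{years to completion}
  \;=\; \frac{\text{fraction of the problem remaining}}{\text{rate of progress}}
  \;=\; \frac{1 - 0}{\,0 / 40~\text{years}\,}
  \;=\; \infty
```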

That's right. Nothing! Not even a hint of a shadow of a clue. The idea that we're magically going to solve this problem within our lifetime even though the last lifetime didn't do anything productive is an odd one. You know what happened to those brilliant minds who dedicated their lives to this idea? They are all elderly or dead!

I'm sorry, but you can't just come here and say maybe we'll figure something out, understand the human brain, and that'll make it possible. Even if we somehow magically took this massive leap forward, how do you know that would even help? We understand exactly how fusion works, scientists have understood it for longer than we've been alive, and we still won't see it rise up anytime soon.

The idea that "we" will see this happen, is based on nothing more than a pipedream. What is going to happen in the next 40 years that the last 40 years of brilliant minds are going to get to work? The AI programs we've developed with all this research are beyond pathetic. They can almost make syntactically valid sentences. Woohoo! If only they have any depth or process something more than a few quick text parsing.

Do you have any idea how complex the human brain is? In 40 years we've done jack squat! And you're arguing that some magical pipedream is going to suddenly become reality and instantly unlock this lock? What makes you think that? What makes you think that the last 40 years of "any day now" is going to magically become true? This same sort of future prediction has said we'll see it any day now for the last 40 years!

-----

We could, in theory, simulate the processes of every molecule within the human brain. We previously simulated every molecule in a biological virus for 1/50th of a second, a run that took a massive array of supercomputers a number of months. If we could, through raw processing power, expand this to include all the molecules of a human brain, we could have a computer that actually has human intelligence.
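A back-of-envelope sketch of that scaling argument, in Python; every number below is a rough assumption chosen only to illustrate the orders of magnitude, not a figure from any published simulation:

```python
# All numbers here are illustrative assumptions, not established figures.
virus_molecules = 1e6    # assumed molecule count of the simulated virus
virus_sim_secs = 1 / 50  # simulated biological time, per the figure above
wall_clock_years = 0.5   # "a number of months" of supercomputer time

brain_molecules = 1e25   # assumed molecule count of a human brain

# If cost scales linearly with molecule count and with simulated time,
# estimate the supercomputer-years needed per second of simulated brain:
scale = (brain_molecules / virus_molecules) * (1.0 / virus_sim_secs)
print(f"~{scale * wall_clock_years:.1e} supercomputer-years per brain-second")
```

Under these (generous) assumptions the answer comes out around 10^20 supercomputer-years per simulated second, which is the point being made here: the raw processing power is nowhere within a lifetime's reach.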

The processing power needed is, firstly, more than a lifetime away from making this a reality. And secondly, the result would not be artificial intelligence... it would be actual intelligence. AI is a simulation of intelligence, not the real thing. If you could make a real thinking computer, you would make an intelligence, not an artificial intelligence. You can't make a computer program that passes the Turing test unless it actually is intelligent; you couldn't make it artificial and just pretend to be intelligent to a sufficient degree. You need the real thing to pass, not a simulation, which is the heart of AI.

-----

Gödel's incompleteness theorem holds that a formal system cannot establish its own validity without being either incomplete or incoherent. A computer program to simulate intelligence would need either to fail by being incomplete or to be incoherent (which computers can't be). Gödel's theorem proves that AI isn't possible. This is why we have failed and will continue to fail for our lifetimes and beyond.

-----

No computer can simulate the human soul. The reason we can't figure out that spark you so desperately want to find is that it doesn't exist in the material brain! That's the soul of man, and computers are soulless machines. You can't compile the human soul into machine code, or run it on a processor. You swear that if we just take a step back we'll figure it all out, and then AI will become something more than a directionless bit of science fiction. Fat chance! Understanding the human soul has been attempted for thousands of years, and it has been as successful as AI has. -- Exactly as successful.

-----

We will not see the rise of artificial intelligence, and certainly not artificial consciousness. As you freely admit, we don't even know what the hell is meant by intelligence OR by consciousness, and we're going to simulate them? No, we aren't. We're going to go nowhere, just like we have from the early days of the science. The only people who got any benefit out of this have been science fiction authors.

You simply can't meet the burden of proof here; pipedreams don't count. Perhaps aliens will come down and give us AI (not that we'd see it "rise"). Perhaps a big rock will kill us all tomorrow... that would certainly lower the chances we see the rise of AI.

We will sooner have those DAMNED FLYING CARS they promised than AI. At least we know those involve wings, an engine, and a car. -- That's infinitely more than we know about the requirements of AI.
Debate Round No. 1
lazarus_long (Pro)

First of all, thank you for accepting the challenge of this debate. Hopefully, this will be an interesting topic to explore from both sides.

I apparently need to clarify something, though - it was not my intent, in writing the topic as "We will eventually see the rise of AI," to set a particular timeframe for this; I meant the "we" in the most general terms, as in "mankind will eventually see." I certainly did not mean to imply that I expect this to happen in the next 40 years. I would not be surprised if it DID occur in that time, but I'm not expecting it. I mean the debate to center on the possibility of AI/consciousness, not whether or not it will turn up per any particular schedule. Fortunately, I don't believe either of us has said anything yet that is particularly time-dependent, so I hope we can continue on under this clarified topic.

Now, as to my opponent's arguments, which he summed up as follows:

"1) AI is a complete failure.
2) AI is impossible.
3) There is no "artificial" intelligence. There is only intelligence.
4) You cannot simulate the soul."

These are certainly strong assertions, but so far that is all they are; I am afraid I was expecting a bit more in terms of reasons, beyond simply saying "It's impossible" or "You cannot do this."

Like my opponent, my career has been in the computer industry; in fact, that's where I have spent my professional life for nearly 30 years. If there is one thing that experience in the technology field has taught me, it is the truth of Arthur C. Clarke's first two Laws, to wit:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

A bit tongue-in-cheek, perhaps, but they make some very good points. Throughout the history of scientific and technological development, it has always been very risky to label something "impossible" without some very good reasons for doing so.

And do we have any reason for considering AI or artificial consciousness to be impossible? No, we don't, for the simple reason (as given in my opening statement) that we do not as yet understand the nature of either very well. But would not that also mean that we have no reason for considering these things POSSIBLE? Not at all - on the contrary, we know that both intelligence and consciousness ARE possible, for the simple reason that each of us is an example of an intelligent, conscious entity. Unless it can be shown that there is something "special" about humans which means that ONLY human beings (or even ONLY living creatures) can be conscious or intelligent, we can only assume that these qualities CAN, somehow, be duplicated through artificial means.

The human brain is indeed a wondrously complex and poorly-understood thing - but we can make several incontrovertible claims regarding it. First, it clearly can support both intelligence and consciousness - and equally clearly, the human brain is a finite entity. What we experience as consciousness or intelligence goes on within a space of about a liter and a half, a mass of a bit under a kilogram and a half, and with a power consumption less than that of a typical desk lamp. We do not know how this biological thinking machine works, but we know that it does, and we know that while it is extremely complex, that complexity is not infinite in extent. The brain itself is an "existence proof" of a thinking, conscious machine.

But we then run into the objection that there is somehow more to intelligence and consciousness than this organ we call the brain. Again, as was noted in my opening remarks, this brings up the question of a "soul," some ineffable quality or aspect unique to human consciousness which simply cannot be duplicated by artificial means. This assertion, though, raises a number of its own problems. First, does this not confuse the terms "intelligence," "consciousness," and "soul"? The latter two might be merged, but then are we saying that ONLY those creatures with "souls" can be "intelligent?" On what grounds? At that point, the debate clearly leaves the realm of reason and science and becomes a matter of theology - and unless someone can bring objective evidence to the table regarding such things, there seems little that they can contribute to this debate. In the absence of such evidence, the notion that "intelligence" or "consciousness" requires a "soul" is merely unsupported assertion.

My opponent suggests that AI is "impossible" because of the lack of positive results to date. This is somewhat akin to someone in late 1903 looking back over the hundreds of years of attempts to create flying machines, and concluding from that history that the Wright brothers would be better off sticking to bicycles. My opponent saying:

"Let's do the math:
40 years has gotten us 0% of the way. So, at this rate we're going to finish in an infinite number of years!"

is an example of just this sort of unjustified pessimism. First, there's no basis for saying that we are "0% of the way" - the simple reason that we do NOT know how to create AI at this point also means that we can't possibly know just how far along "the way" we are. So this isn't a case of "doing the math" at all - it's a case of trying to make an unsupportable assertion look stronger by hanging some numbers around it.

We have NOT been idle for 40 years in this regard. The rate of progress in the fields related to AI has been staggering. Certainly no one would have expected computers of the very limited nature seen to date to support anything remotely resembling real intelligence; it is not unreasonable to expect that an artificial intelligence will require a level of complexity similar to that of the only intelligent machine we know, the brain itself. And we are making very rapid progress in those areas; this past decade has seen significant developments in the areas of quantum computing and molecular-level devices, which are just starting to come out of the lab and to permit practical computing hardware of a much higher level of sophistication. This is not to say that I am expecting AI-capable hardware to turn up next year or even in 10 or 20 years. But if you look at the rate of development which has happened in the last 40 years, it's very difficult to argue that we will not see computing hardware which today is unimaginable at some point in the future.

I would also suggest that we not try to make too much of the term "artificial intelligence," as in "there is no such thing, there is only intelligence." While I agree that a truly intelligent machine would be "intelligent" without any additional qualifiers needed, it is conventional to use the term "artificial" in such discussions not to mean a "simulation of intelligence," but rather in the proper sense of the word - that said intelligence would be a product of "artifice," i.e., a man-created thing. Call it what you will.

The appeal to Gödel's Incompleteness Theorem as an argument against the possibility of AI is invalid. Gödel's theorems simply state that in any consistent axiomatic system of sufficient power, it is possible to create valid statements which cannot be PROVEN valid within that system; but that does NOT mean that those statements are not valid, simply that they cannot be proven so. If this ruled out the possibility of artificial intelligent machines, it would also rule out NATURAL intelligent machines (e.g., the brain itself), UNLESS something unique to those "natural" intelligences can be demonstrated to exist. My opponent is certainly free, as noted above, to argue that a "soul" (or the equivalent) is somehow necessary for intelligence, but I believe he will find it difficult to back such an assertion with evidence.
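For reference, the textbook statement of Gödel's first incompleteness theorem that this rebuttal turns on can be written as follows (a standard formulation, not either debater's wording):

```latex
% Goedel's first incompleteness theorem, in its textbook form:
% for any consistent, effectively axiomatized theory T that
% interprets basic arithmetic, there is a sentence G_T such that
T \nvdash G_T \quad \text{and} \quad T \nvdash \lnot G_T
% i.e., T is incomplete. The theorem concerns provability within T,
% not the truth of G_T itself -- which is the distinction Pro draws.
```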
Tatarize (Con)

I'm sorry, but I feel that "we" as a general rule includes me. Although I'd be free to note that we might very well wipe ourselves out tomorrow and certainly fail to succeed in any such projects. In fact, with world affairs as they are today it might be exceedingly optimistic to suggest we'll last the 40 years I proposed. I'm fairly certain that despite your claims to the contrary, you cannot suddenly change terms in mid swing. The same claim has been made for decades that "we will see the rise of AI" and it has consistently meant "we" as in "us" will see this event within our lifetime.

However, the fact that you don't expect AI in this lifetime is certainly not much of a ringing endorsement for your position. Usually when "we" gets vaguely applied to mankind in some sort of general vague fashion it is because the proponent of the statement realizes how unfeasible the claim is within recent history.

-

The basis of your argument is, "The future is up in the air, you can't prove it impossible, therefore it will happen." Sorry, no. The fact that we have done nothing in the field except fail splendidly is, at the least, important.

You think we need to go back and rework some stuff and figure out exactly what we are dealing with in order to code it. We need to figure out what intelligence is and figure out what consciousness is and then program them. We are starting the process quite literally at step 0. We don't even understand the terms. You've offered nothing to suggest that we are pushing toward this goal or doing anything more than the last 40 years of treading water on the subject.

You are attempting to do three distinctly unacceptable things here:

1) You are trying to predict the future to unreal extents. Beyond wanting to predict what will happen 40+ years from now, you want to render predictions beyond our lifespans and what science will be able to do at such points in time. I have failed to see *ANY* prediction worth spit even five years out. The entire realm of predicting future events seems to be a complete joke.

2) You are attempting to parlay the ambiguity of the future into a positive claim. See, the future is so totally up in the air that you can no more conclude AI from it than you can conclude that a race of future supermen, being clearly possible, will therefore arise.

3) You are shifting the burden of proof. "Prove that it is impossible that we will see a race of supermen arise!" -- I'm sorry but no. You are attempting to state an affirmative claim that we will see the rise of AI on the grounds that we don't know what is going to happen in the future.

-

You do, at the very least, go to the trouble of arguing that since the brain can do it, shouldn't the computer be able to do it? My answer to this is "no."

First let me say that there is a stunningly large gap between "not impossible" and "will eventually".

Computers are, at the very least, great calculators with useful states and functions that produce rather amazing results. However, there is a general rule that things which computers can do easily humans do very poorly, and things humans do well computers do poorly. For example, anybody submitting a comment on this debate will submit it with a captcha, a bit of image text easily converted into letters by humans. However, coding a computer to do even this very minor task is nearly impossible. This is but one in a massive list of things which computers simply can't do and humans easily can.

You ask that computers be able to do all of this. Beyond excessively powerful OCR capabilities, it will need to perfectly understand human speech from the get go, understand what words mean, understand abstract concepts, and do all the things human brains are great at and computers simply cannot do. And how do we know this will happen? According to your argument, we know this will happen specifically because we *don't* know what will happen.

That sound you just heard was me being underwhelmed.

-

This is a bit more than the simple case of elderly scientists suggesting that ideas are wrong; it is precisely because these scientists have dedicated the better parts of their lives to this challenge and failed to get beyond step 0 that they are saying it isn't feasible.

If we take a good long look at the successes in AI what we see is some interesting programming languages and some neat algorithms which are auxiliary to the field. Beyond that, we see that there is nothing. Not one even partially successful implementation of anything close to the goal of AI. In fact, we don't even know what is meant by intelligence in order to have a first step towards programming it into a computer.

You say you were expecting something more than my noting that it isn't possible to program AI and that it's an impossible pipedream? What exactly did you expect? Am I to fawn over the possibility that the entire field does something worthwhile, suddenly gets beyond step 0 to an understanding of what intelligence is, and assume that intelligence can be programmed?

You say that "we will eventually see the rise of AI" -- on what grounds? Feasibility? We might as well argue that we'll have really good computers able to predict the future to a near certainty where not disallowed by quantum physics. Rightly they should be able to trivially predict human behavior and calculate what we will do. This might just be feasible, but it still won't happen in our life times and we will never see it.

We've tried AI. It has been a huge and massive failure. And I think it is incumbent on you to provide some reason why the future will be any brighter for the field.

The idea that we need to step back and relook at the situation isn't new either. Take a look at some of John McCarthy's (who coined the term "AI") comments on the topic from 9 years back: http://news-service.stanford.edu...

"Despite the progress that has been made in AI since he coined the term "artificial intelligence" in the 1950s, the research has not brought the field within development range of simulating human intelligence, which is its ultimate goal, he said. "We still need new basic ideas. So it may take five years, and it may take 500 years. The understanding of intelligence is a hard scientific problem."

This idea that we should go back to basics and figure out this idea of intelligence is obvious; it has been obvious for the last four decades, as everybody who realized how unsuccessful AI was continued to fail. What we have is a complete and utter pile of crap. We haven't done anything towards the goal of creating AI, and our few rare successes have been in technological regions nearby.

Firstly, I think you wrongly discount any interplay with the soul. There's a difference between just outright disregarding the idea and having shown it's false. If we have souls, then AI isn't possible. I thus think it's at least a requirement for your side to disprove the soul rather than discount it on no grounds. I hold that the complete failure of AI is, in part, due to the impossibility of coding that little extra part that makes us human in the transcendental sense.

-

You compare my claims to the claims that flying machines were impossible. This is wrong. There were plenty of flying machines prior to 1903, but nobody could steer. If we had some okay ideas about AI and some kind-of-working programs... sure. But we have NOTHING! Not even programs that make you go... "that was neat."

-

From what we do understand of intelligence, a lot of it seems to be borrowed from culture and from others. You couldn't have an intelligent machine on the grounds that machines aren't part of culture. They lack the human spark, and, on top of that, let us not overlook the unmitigated disaster of AI. Your argument is a pipedream built on a pipedream. It is a reversed argument from ignorance: we don't know what the future holds, so it holds AI.
Debate Round No. 2
lazarus_long (Pro)

Sigh. I had hoped that we would be able to have a civil discussion regarding the reasoning and evidence available on the likelihood (or not) of the development of artificial intelligence/consciousness. Apparently that will not happen this time around, but perhaps I will re-start a debate along similar lines in the near future. In the meantime, let's wrap this one up. First of all:

"I'm sorry, but I feel that "we" as a general rule includes me. Although I'd be free to note that we might very well wipe ourselves out tomorrow and certainly fail to succeed in any such projects. In fact, with world affairs as they are today it might be exceedingly optimistic to suggest we'll last the 40 years I proposed. I'm fairly certain that despite your claims to the contrary, you cannot suddenly change terms in mid swing. The same claim has been made for decades that "we will see the rise of AI" and it has consistently meant "we" as in "us" will see this event within our lifetime."

You're certainly free to make that interpretation if you like, but again it is not what I had intended in this debate and I can hardly be accused of "changing terms in mid swing" merely on the basis of how YOU view one particular two-letter word. If the subject as stated was unclear to you - and apparently there was some question there, as you felt it necessary to state your interpretation at the start of round 1 - then you should've asked for clarification via a comment prior to starting. As it is, I clarified my intent - which has been consistent all along - as soon as I was asked.

Now, on to where we stand at this point. My opponent's objections to the notion of AI and/or artificial consciousness seem to wind up equating to three very simple statements:

1. We haven't succeeded yet, therefore we can't.

2. This is a very hard problem, therefore we can't solve it.

3. We can't possibly give a machine a "soul," therefore we can't have AI/consciousness.

Regarding the first two questions, my opponent quotes AI pioneer John McCarthy of Stanford as saying that "it [AI] may take five years, and it may take 500 years" - but conveniently neglects to quote the next part of the article containing this comment, which is:

"Nevertheless, the computer scientist remains confident that creating artificial intelligence is an achievable goal. This confidence, he acknowledged, is rooted in his materialist worldview. 'Human intelligence is carried out by the human brain. If one material system can exhibit intelligence, why can't another?' he said."

McCarthy is hardly alone in this opinion. Significant research into the field continues at practically every major university in the world, with McCarthy's own Stanford and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) probably being the two most widely-publicized and well-known. I had the good fortune to visit CSAIL myself last summer - while I am not in the AI field directly myself, it has been an ongoing interest of mine - and they are certainly doing some extremely interesting and significant work there. If interested, readers may see much, much more about this at:

http://www.csail.mit.edu...

Suffice it to say that there is a very large number of very good people who believe AI is a problem that is capable of being solved, enough so to devote their entire professional careers to it.

Of course, though, they could be wrong, and I have no intention of leaving this with a mere appeal to authority on the subject. My opponent accuses me of arguing in favor of AI based on mere "possibility" - that "we don't know what the future holds, so it holds AI." Not at all - clearly, any argument in this area has to be based on probability, not mere possibility. But my fundamental argument here is the same as McCarthy's above - we ALREADY KNOW that intelligent machines are possible, simply because each of us has one in our heads already. That the machine in question is biological rather than electrical or mechanical in nature is irrelevant; it is quite clearly a finite physical device which exhibits all the characteristics that we associate with "intelligence" and "consciousness." There is no reason whatsoever to assume, therefore, that intelligence arising in other systems, at least those of similar complexity, is either impossible or unlikely. On the contrary, we would expect such to be virtually inevitable, given the regularity with which we see this occurring in biological systems.

The only remaining objection my opponent can raise to counter this, in fact, is an appeal to "the soul" - the notion that there simply IS "something special" which humans, or at least living creatures, possess which a "machine" (an artifact) for some reason cannot. I dealt with this issue in round 1, but we can certainly revisit it here.

In this argument, my opponent is doing precisely what he accused me of in round 2 - attempting to "shift the burden of proof." I make the claim that intelligence is virtually inevitable, based on an existence proof - we already see it arising in finite biological systems - to which his only recourse is to say "but there's no soul there, so it can't happen!" The burden of proof is not on me, though, to show that a "soul" exists and/or is responsible for intelligence or consciousness. I made no such claim at any point in my arguments, and in fact have noted that there is absolutely no hard evidence or reasoning which would lead one to conclude that there ARE such things, or at least that they would for some reason be required in an intelligent system. I am firmly in the camp of such people as philosopher Daniel Dennett who, in such works as "Consciousness Explained," has made a very strong case for the phenomena of consciousness and intelligence being explainable on purely physical grounds. If my opponent wishes to make the claim that there is "something else" required, whether we call it a "soul," a "spark," or even magic pixie dust, the burden is upon him to show why we should believe this requirement exists, and that in fact the supposed "something else" exists in the first place.

He's correct in saying that IF there is a soul, then it would be impossible to expect consciousness to arise in a "soulless" machine (well, at least if he can also prove that machines are by necessity "soulless" - do we in fact HAVE any sort of unambiguous test for the presence of "souls?"). But in that case, his entire argument rests on his ability to demonstrate the existence and necessity of said "soul." Absent that ability, we would have to conclude that not only is consciousness possible, it is very likely inevitable - since we see it happening all the time in biological systems.

And please note that I have at this point separated "consciousness" from "intelligence." There is clearly some fuzziness between the concepts of "consciousness" and "self" and the supposed "soul" - but my opponent has given no reason whatsoever for extending this link to "intelligence." As noted originally, we have no tests for "intelligence" beyond those based on external appearances; a machine which demonstrates intelligent behavior on the same order as that of a human must be considered "intelligent" (and, as my opponent has already pointed out, not merely "artificial" or "simulating" intelligence) in precisely the same manner as the human. But there is clearly nothing in this external behavior which requires or demonstrates the existence of a "soul" or even "consciousness." (If there were, we would have unambiguous tests for these as well.) So the whole "soul" argument fails completely when we restrict the question to intelligence, and has little independent place in the question of "consciousness." And since there IS no evidence or reasoning to say that a "soul" exists in the first place, this whole line of argument is meaningless.
Tatarize (Con)

The Pro Argument has Failed.

He never met the burden of proof. My arguments that if there is a soul AI isn't possible, and that perhaps we simply can't solve the problem due to human limitations, are attacks against the argument he never made... the argument that we will eventually see the rise of artificial intelligence. He NEVER defended the core claim; rather, he suggested, based on assumptions, that it *MIGHT* be possible.

Let me bring forward an analogy:

-- Topic: We will eventually find BIGFOOT.
- Argument in favor: There is nothing which suggests BIGFOOT is biologically impossible.
- Argument against: We've spent more than 40 years of consistent time LOOKING for BIGFOOT and our efforts have produced nothing! Further, BIGFOOT might not even exist and thus will never be found.

The Pro conceded that "Of course, though, they could be wrong," with respect to AI researchers. How then can one conclude the topic statement with such a concession? At best one could only conclude that we will PERHAPS see the rise of AI. Just as we will PERHAPS find BIGFOOT.

My point about the soul, the human spark, isn't that I have conclusive evidence on the subject, or that it should be taken on faith; the point is that it *MIGHT* be true. And if it *MIGHT* be true, then such a conclusive topic will certainly fail. As my opponent conceded, "He's correct in saying that IF there is a soul, then it would be impossible to expect consciousness to arise in a 'soulless' machine" -- I need not meet the burden of proof on the suggestion, because simply the possibility that the suggestion is true suffices to refute the topic. This is the reason why the Pro must actively disprove the soul and prove materialism in order to conclude that the "rise of AI" is an "eventuality". Despite his claim that the "burden of proof is not on him... to show that a 'soul' exists" -- it is, so long as he wants to conclude something more than "we might see the rise of AI".

-----

Credit where credit is due:

Pro offers that:

"On the contrary, we would expect such to be virtually inevitable, given the regularness with which we see this occuring in biological systems."

This is, admittedly, an argument in favor of the strong claim made in the topic. Though, he does couch his words with "virtually," which is a far cry from the "absolutely" that would be needed. Secondly, spontaneous AI creation should be reserved for science fiction... outside of Skynet gaining consciousness out of the blue for no reason at all, or computers suddenly gaining absolutely amazing abilities they aren't programmed for and are architecturally limited from doing, this defense falls spectacularly short of meeting his burden.

-----

Allow me to address his last post:

I don't know where you got this idea that this wasn't civil, or are you saying it doesn't address the likelihood?

I still find it hard to believe that "I" will see the rise of artificial intelligence *beyond* my lifetime. That seems the logical interpretation. You didn't state that as a term in the intro and I'll leave it up to the readers to decide. Though, my case is pretty strong anyhow.

You attempted to sum up my argument as:

1. We haven't succeeded yet, therefore we can't.
2. This is a very hard problem, therefore we can't solve it.
3. We can't possibly give a machine a "soul," therefore we can't have AI/consciousness.

These are made to appear overly pessimistic and do not represent my opinion.

I do not conclude that it can't be done, rather that we very well might not be able to do it. We are, ourselves, limited in a number of ways. It might be that we simply are overlooking an amazing little trick of how intelligence works, but it might also be that we are too limited to understand ourselves on that level; it could be that the number of processes trivially done by the human brain is too great to be accomplished.

I didn't neglect that part of McCarthy's quote, it just wasn't relevant to the point I was highlighting. Again, we have the materialist assumption and still allow for the possibility that we might not be able to understand it even if it is entirely material.

I never suggested there wasn't research into AI. In fact, I have repeatedly used the fact that we've had pretty straightforward and consistent research into AI for the last 40 years and have failed at even the most rudimentary processes. Some of the "outcroppings" of AI have been fantastic. Prolog in its own right is fantastic: clearly a failure as AI, but a shining success as a programming language. A large number of people have dedicated their lives to AI only to wind up old with nothing to show for their efforts. It's an interesting problem, but we don't even know if it can be solved, so jumping to the conclusion that it will be solved is a bit of a stretch.

AI research has been vigorous, as has research into Bigfoot... they have been equally fruitful. Am I to accept that a naturalistic possibility of Bigfoot meets the burden of proof for the claim that we WILL EVENTUALLY see Bigfoot?

-----

Thank you, Lazarus, for the debate.
Thank you the readers for reading the debate.
Thank you the voters for registering your opinions.

Tatarize.
Debate Round No. 3
12 comments have been posted on this debate (showing 1 through 10).
Posted by Tatarize 9 years ago
>>"The brain does far more than learn. Any intelligence requires independent thought."

Independent thought? I think you give yourself too much credit. Our brains tend to sort information we are exposed to: largely social information about this or that, scientific information, even spatial information as to how far away that table is. If you were, for example, not exposed to anything (say, locked into a dark room in childhood), you would be unable to interact or to think the way humans typically think. Other than processing of things being learned, what does that entail?

>>"Anything programmed is going to act and react in whatever manner it was coded to begin with."

Categorically false. If that were the extent of the program or a requirement of it, it would be impossible to code. But you are confusing the limits of the algorithm with the limits of the product of said algorithm. The output of said algorithm is heavily dependent on the input (likewise, if you read a lot and expose yourself to new ideas and experiences, you'll be more intelligent). My entire point here was that the idea of intelligence as a result is flawed. Intelligence is the process, and the process itself is extremely consistent at the most basic level. The data, however, is inconsistent, causing the learning intelligence to differ greatly even though the core process is the same.

>>Sure. That was achieved years ago. Not A.I., though.

An amazingly robust version of that (and I do mean amazingly) is different from A.I. how?

>>Well, of course. Comes back to that independent thought thing.

No. The problem of goals isn't the problem of independent thought. If anything it's the problem of implementing motivation, which should be a far simpler problem than the reasoning to that effect and the achieving of the goal. Something simple like "understand the world" while making that goal comprehensible at the lowest level.
Posted by mmadderom 9 years ago
"No. That's actual intelligence. You do need a lot of input, how exactly does that differ from childhood? Your brain isn't hardcoded to do much of the stuff it does. Other than learning what do they do?"

The brain does far more than learn. Any intelligence requires independent thought. Anything programmed is going to act and react in whatever manner it was coded to begin with.

"I can program something to analyze data and act and determine the effectiveness of those actions and adjust itself accordingly until it learns to choose the right action towards some goal"

Sure. That was achieved years ago. Not A.I., though.

"The problem is the goals need to be given to the program in the first place rather than chosen."

Well, of course. Comes back to that independent thought thing.
Posted by Tatarize 9 years ago
>>>>"It takes one algorithm, time, and a goal."

>>No way. Not unless you are VERY narrowly defining "intelligence".

No. That's actual intelligence. You do need a lot of input; how exactly does that differ from childhood? Your brain isn't hardcoded to do much of the stuff it does. Other than learning, what do they do?

Humans aren't a blank slate, and there's a lot of instinct towards the four Fs of human motivation: Feeding, Fighting, Fleeing, and Reproduction. Though, how you push towards those things before you really have the ability to understand what those things involve is a missing step. But the intelligence/learning part is doable.

ELIZA was not a breakthrough. Probably well coded but not intelligent at all. In fact, she can't learn or understand anything. She's based on the odd Turing Test paradigm that AI is chiefly determined by the ability of some interacting program to trick people.

>>You can program data analysis and you can program a set decision-making process based on such, but that's not A.I.

I can program something to analyze data and act and determine the effectiveness of those actions and adjust itself accordingly until it learns to choose the right action towards some goal (in fact, I have, many times; it's a hundred lines of code). The problem is the goals need to be given to the program in the first place rather than chosen.
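For concreteness, a minimal sketch of the kind of loop described above, in Python. This is a hypothetical illustration (the three actions, their hidden success rates, and the exploration rate are all invented), not Tatarize's actual code:

```python
import random

REWARD_PROBS = [0.2, 0.5, 0.8]  # hidden environment: how often each action "works"
EPSILON = 0.1                   # fraction of the time we try a random action

def act(action):
    """The environment: returns 1 if the action succeeded, else 0."""
    return 1 if random.random() < REWARD_PROBS[action] else 0

counts = [0] * len(REWARD_PROBS)    # times each action has been tried
values = [0.0] * len(REWARD_PROBS)  # running estimate of each action's effectiveness

for step in range(10_000):
    # Mostly exploit the best-known action; occasionally explore.
    if random.random() < EPSILON:
        action = random.randrange(len(values))
    else:
        action = max(range(len(values)), key=values.__getitem__)
    reward = act(action)
    # Adjust the estimate toward the observed outcome (incremental average).
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# The estimates converge toward REWARD_PROBS: the program has "learned"
# which action serves the goal -- but the goal itself was handed to it.
print([round(v, 2) for v in values])
```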

An intelligent program would categorically not include a set decision-making process, or set data analysis. A set anything is a problem. In fact, setting those things is exactly what intelligence is/does.
Posted by mmadderom 9 years ago
"Don't let my post below influence your voting."

Nothing ever written in a comment section should influence voting on a debate.
Posted by mmadderom 9 years ago
"If understood, it would be pretty easy to code real AI."

Wow. Bold statement, that.

"Now, I think I actually do understand what intelligence (mind-blowing epiphany last year) is and it would be pretty easy to program."

Remember me when you make your first billion ;-)

"It takes one algorithm, time, and a goal."

No way. Not unless you are VERY narrowly defining "intelligence".

"ELIZA is funny. It was three minutes before starting to talk dirty to her."

Yep. ELIZA was funny, silly even. I don't know how old you are, but that was actually considered a MAJOR breakthrough in AI at the time.

There is no intelligence without the ability to formulate an independent thought process and opinion. That will never be replicated by a machine. You can program data analysis and you can program a set decision-making process based on such, but that's not A.I.
Posted by Tatarize 9 years ago
Don't let my post below influence your voting. I won this debate hands down in part three when I brought the burden of proof argument to bear. I rightfully win.
Posted by Tatarize 9 years ago
mmadderom, there's simply no good understanding today of what intelligence is. If understood, it would be pretty easy to code real AI. Accounting for the things which intelligence does is a massive waste of time; rather, focus needs to be placed on what intelligence is.

Though I took con in this debate, I actually believe it will be possible in the near future and probably has been for the last few decades (though the understanding hasn't been there).

Had you asked me a couple years back, I would have chuckled, as AI is a failure. Should it be possible? Yes, trivially so. Have any of our attempts come close? No. Conclusion: we need to understand what intelligence is in order to stand a chance.

Now, I think I actually do understand what intelligence is (a mind-blowing epiphany last year) and it would be pretty easy to program. The idea that you need to model every little unit in the brain, or replicate human senses, is wrong. Heck, you can feed the human brain non-human senses and it'll start using them pretty easily. It needs to be fed raw data and allowed, in a (very clever/simple, see understanding of intelligence) way, to 'understand' the raw data. It doesn't take a massive series of ineffective algorithms. It takes one algorithm, time, and a goal.

You are absolutely right that our intelligence is, in large part, influenced by our upbringing, and programming a childhood is nearly impossible. Intelligence is a way of assimilating an understanding of the world. It would be impossible to make a non-unique intelligence, save copying a functional one, but that wouldn't last long.

ELIZA is funny. It was three minutes before I started talking dirty to her.

The problem is that people constantly focus on the products of intelligence (text recognition, voice recognition, chess games, object understanding) and fail at programming AI because they are busy programming a few fragments of the infinite number of things intelligence can do.
Posted by mmadderom 9 years ago
I guess I'm saying the human experience and senses cannot be replicated by a machine and never will be, hence true artificial intelligence cannot exist.
Posted by mmadderom 9 years ago
True A.I. will never occur as machines are just that, machines. Their abilities are limited to the abilities of their creator and programmer. Science is no closer to breaking down human intelligence now than it was 40 years ago, so what would make you think that puzzle will not only be solved, but be able to be replicated by a machine in the next 40 years? or 100 years? or 1000 years?

Even if it COULD be replicated to an extent, how would it possibly account for the uniqueness of individuals other than randomly? Since our "intelligence" is, at least in part, influenced by our upbringing and surroundings, how could this possibly be replicated in A.I.? Short answer: it can't. Even an army of programmers couldn't account for the various things that go into one's intelligence that make us all unique.

Best you can hope for is a more sophisticated version of E.L.I.Z.A.
Posted by mmadderom 9 years ago
Let's see. Years of study and millions of dollars spent = no progress.

The gist of your argument is "it will someday be possible in the future so it will eventually happen given enough time".

Could not this same argument be used to back an infinite number of debates?

We don't have a clue how to do it now, but some day we'll be able to surgically implant wings so that pigs will indeed be able to fly. Prove I'm wrong.
9 votes have been placed for this debate.

Voter           Posted        Conduct  S&G   Arguments  Sources  Total (Pro-Con)
Tatarize        7 years ago   Pro      Pro   Pro        Pro      7-0
zach12          8 years ago   Con      Con   Con        Con      0-7
JOE76SMITH      9 years ago   Tied     Tied  Con        Tied     0-3
griffinisright  9 years ago   Tied     Tied  Con        Tied     0-3
Kasrahalteth    9 years ago   Tied     Tied  Pro        Tied     3-0
VantagePoint    9 years ago   Tied     Tied  Pro        Tied     3-0
logicalsoul     9 years ago   Tied     Tied  Con        Tied     0-3
solo            9 years ago   Tied     Tied  Con        Tied     0-3
mmadderom       9 years ago   Tied     Tied  Con        Tied     0-3

(Point values: conduct 1, spelling and grammar 1, more convincing arguments 3, most reliable sources 2. "Agreed with before/after the debate" carried 0 points: Tatarize marked Pro for both, zach12 marked Con for both, and all other voters marked Tied. Pro = lazarus_long, Con = Tatarize.)