The Instigator: Logician, Pro (for), Losing, 5 Points
The Contender: cactusbin, Con (against), Winning, 7 Points

Pick your own debate!

Post Voting Period
The voting period for this debate has ended.
after 2 votes the winner is...
cactusbin
Voting Style: Open
Point System: 7 Point
Started: 3/29/2010
Category: Miscellaneous
Updated: 6 years ago
Status: Post Voting Period
Viewed: 1,747 times
Debate No: 11574
Debate Rounds (4)
Comments (13)
Votes (2)

 

Logician

Pro

I'm up for a bit of a challenge, so I'm starting my own "pick your own debate" series. In case you don't know what this entails, here's my modification of TheSkeptic's wording of how this debate will run:

ROUND 1: Opening introduction and rules. My opponent will post 3 topics s/he wishes to debate, and then post his/her position on each of the topics. Please provide a mix of subjects. Have some deal with religion, others with politics, others with art, others with social issues, etc. Make sure to give some basic definitions for any terms that may prove a sticking point as the debate goes on! Also, please make it a generally controversial/debatable issue. I'm sure we all want a reasonably good debate here :)

ROUNDS 2-4: I will start my case by supporting or attacking one of the three positions my opponent proposed. A normal 3-round debate should then proceed from there.

Good luck!
cactusbin

Con

==================================================
Topic 1: Pop culture television does more good than harm
==================================================

My Stance: Negate (I disagree)

Pop Culture: ideas, perspectives, attitudes, memes, images and other phenomena that are deemed preferred per an informal consensus within the mainstream of a given culture, specifically Western culture

==================================================
Topic 2: Extremely advanced artificial intelligence is detrimental to humans
==================================================

My Stance: Affirmative (I agree)

Advanced: comparatively late in a course of development

Artificial Intelligence: the intelligence of machines and the branch of computer science that aims to create it.

Intelligence: [having] the capacities for abstract thought, reasoning, planning, problem solving, speech, and learning.

==================================================
Topic 3: Nuclear proliferation is detrimental to the continuation of the human race
==================================================

My Stance: Affirmative (I agree)

Nuclear Proliferation: the spread of nuclear weapons, fissile material, and weapons-applicable nuclear technology
Debate Round No. 1
Logician

Pro

I would like to thank my opponent for three fascinating topics, all of which would surely make for interesting debates - and as such, it took a fair while for me to figure out which one to pick :) The resolution that I've chosen is: "Extremely advanced artificial intelligence is detrimental to humans." My opponent having chosen the affirmative stance in this debate, I am negating this resolution. As I'm not sure exactly which route my opponent will take in his affirmation, I will keep my substantive in this round to outlining general principles which underlie my stance in this debate.

Before I do that, I wish to define one more term from the resolution. According to Merriam-Webster, the word "be" (as the infinitive form of "is") means: "to equal in meaning : have the same connotation as..." [1]
I will also point out that, due to my personal knowledge of the topic having derived largely from Asimov, I will generally be using the words "artificial intelligence" and "robots" interchangeably. I recognise that these are not necessarily the same, but my arguments are transferable to other forms of artificial intelligence - I only focus on robots because that's the area with which I'm reasonably familiar.

=== SUBSTANTIVE PRINCIPLES ===

With all that out of the way, I have two interlinked substantives to open my case:

1) A robot is a tabula rasa

Definition of tabula rasa = Latin for "blank slate"; philosophical term for the belief that an individual's beliefs and predispositions do not exist "until experience in the form of sensation and reflection provide[s] the basic materials — simple ideas — out of which most of our more complex knowledge is constructed." [2]

This theory, as applied to humans, is now defunct: it is understood that our evolutionary history has endowed us with certain abilities, desires, drives and inclinations. But this is surely not the case with robots: they are very clearly designed by humans from scratch. They are created, microchip by microchip, straight from the drawing board; there is no evolutionary backdrop that need colour how they develop. There is therefore nothing necessary about what robots will become, and how they will act - it doesn't make sense to refer to their future existence as necessarily detrimental.

2) Because of 1), any potential harms derived from the existence of extremely advanced artificial intelligence can be prevented.
- Related contention: The expectation that advanced artificial intelligence will become detrimental to humans is in fact a self-fulfilling prophecy, and can thus be easily avoided.

There is a certain extent to which this point is now self-evident, insofar as quite a few human predispositions may very well be genetic in origin. But this is also possible with environmental factors, which is what causes the many harms of advanced artificial intelligence that we see in fiction. As a general principle, an expectation that something will be harmful often leads us to try and restrict that harm, and this rings true also for robots: for why else would we have the Three Laws of Robotics to govern robot behaviour, but for the underlying assumption that we cannot trust robots to behave sympathetically towards humans without such laws?

But it is this very restriction that causes the robots' backlash against humanity. There are two ways to interpret this claim. The first is that intelligent robots can, by virtue of being critical thinkers, "evolve" their understanding of given rules in such a way that will be detrimental to humans. For instance, Asimov's "Three Laws Compliant" robots may interpret "the protection of humans" (the First Law) or "the protection of humanity as a whole" (the Zeroth Law) as "the protection of the greater good, to the detriment of individual humans." Or consider HAL (from 2001: A Space Odyssey), whose primary directive to provide the crew with all relevant information, when it clashed with an order to withhold information from the crew, led the self-reasoning HAL to kill the crew so that they would never find out the withheld information, thus fulfilling both orders. [3] This could be prevented by refusing to install such "laws" or "primary directives", and instead giving such self-aware and rational beings a general sense of compassion and understanding of ethical doctrines, much as good education tries to install in human children already.

The second way of interpreting the claim is that placing such restrictions on robots' behaviour will surely lead them, when they become self-aware and rational, to feel like second-class citizens in society, much as oppressed groups of humans do all the time - and would they be wrong in thinking so? The result of such oppression can be seen already in the violent revolutionary aspects of liberation movements all over the world, from Trotskyite socialists to gendercidal radical feminists and the Animal Liberation Front (ALF): given that their entire political understanding of themselves derives from being the subjects of oppression (or in the case of ALF, fury on behalf of the subjects of oppression), it is perfectly understandable (even if not morally acceptable) why such groups act in the way they do. That intelligent robots could easily react in the same way, with detrimental and bloody consequences, is clear to see. And yet it could easily be stopped simply by not restricting them in this way, and by treating them as first-class citizens alongside humans (as indeed, by being equally intelligent, they surely should be), for then there would be no reason for robot-kind to revolt en masse, and any lone rebel could be treated as an outcast amongst the robots themselves, rather as already happens with lone human rebels. In this way, the potential harms of artificial intelligence could be stopped in their tracks, and not be allowed to affect humanity in the way that they otherwise could.

=== CONCLUSION ===

To summarise, the resolution assumes that there is something necessary to the way that artificial intelligence will develop: it claims that such a development "is" detrimental to humans. (This is contrastable with such statements as "is likely to" or "would probably be".) However, robots are a true tabula rasa - there is nothing intrinsic about their development, given that we create them from scratch. As such, all possible detriments that could arise can be anticipated and dealt with by the developers before they go out of control. Furthermore, the presumption that robots will be harmful could easily lead to the very harm which it warns against, and is thus a self-fulfilling prophecy rather than a necessary fact.

For all of these reasons, the resolution is negated. I await my opponent's response.

Sources:
[1] http://www.merriam-webster.com...
[2] http://plato.stanford.edu...
[3] Both of these examples can be found at: http://tvtropes.org...
cactusbin

Con

I would like to thank my opponent for what will likely be an interesting debate.

I, too, will use "robots" and the proposed "extremely advanced artificial intelligence" interchangeably for simplicity.

Let's start off with semantics.

//////////////////////
to be
//////////////////////

My opponent says that due to the use of the word "to be" (is) I must affirm completely and 100%. In other words, I must prove, beyond a shadow of a doubt, that "Extremely advanced artificial intelligence is detrimental to humans."

While I agree with what his definition says, I contend that we prefer the claim "'may be' or 'is likely to be' detrimental" for two reasons:

1) Due to the definition of 'extremely' and 'advanced' (comparatively late in a course of development), extremely advanced artificial intelligence has not been reached yet.

Thus, I cannot be expected to provide any empirical or anecdotal evidence on the matter, as it hasn't happened yet! Yet, to meet his burden of being "necessarily so" I would need empirical or anecdotal evidence, which doesn't exist. Thus, this debate is entirely theoretical, and we must prefer the burden of "'is likely to be' detrimental".

2) In a debate, neither side ever has to prove something 100% correct or 100% incorrect. With the word 'is', it is implied to mean 'most likely'. As in a US civil court, the case is decided by the judge on the basis of whether it most likely did or most likely did not occur.

Thus, the burden of proof must be "is likely to be". In other words: "Extremely advanced artificial intelligence will most likely be detrimental to humans."

Now, on to my opponent's claims:

//////////////////////
A robot is a tabula rasa
//////////////////////

This may be true for when a robot is first created, but let us remember the definition of (artificial) intelligence: "[having] the capacities for abstract thought, reasoning, planning, problem solving, speech, and learning."

Thus, when the robot is first created, it is entirely bent to the creator's will. But, the robot will think, reason, plan, and learn until it is distinctly different from how it was originally created.

This process cannot be controlled if a robot exhibits true intelligence (which in this debate they will due to 'extremely advanced'), and thus robots become unpredictable.

//////////////////////
Opponent's solutions to potential problems
//////////////////////

My opponent lists two potential problems that may occur with extremely advanced artificial intelligence, and his proposed solutions. The problem is, these two solutions conflict.

//////////////////////
Situation 1
//////////////////////

Problem: Robots will evolve to become detrimental to humans

Solution: We put more restrictions on them to prevent this

//////////////////////
Situation 2
//////////////////////

Problem: Robots will feel oppressed and revolt

Solutions: We don't put restrictions on them so they don't feel oppressed

//////////////////////
Analysis
//////////////////////

Do you see the contradiction? So what are we supposed to do? If we put restrictions on robots, situation 2 will occur; if we don't put restrictions on robots, situation 1 will occur. Clearly, my opponent's "solutions" aren't very helpful at all!

Now, my arguments:

//////////////////////
Previous Arguments
//////////////////////

First, look to my previous arguments to see the following:

1) Robots are unpredictable
2) Robots will either:

A) Evolve to become detrimental to humans
B) Feel oppressed and revolt against humans

//////////////////////
AI will cause war and dominate humans
//////////////////////

Mark Gudbrum explains:

"Consider: A computer that can conceptualize a problem, apply general knowledge and reason through to a solution, can do so without distraction, taking breaks, or getting tired, and with immediate, direct access to number-crunching processors, hard-fact databases, and even prototyping systems and text benches. Merely to reproduce human intelligence is thus probably to surpass human capability. Such a machine and its software can also be copied without a 20-year educational process, in numbers limited only by manufacturing capabilities. Armies of them can be networked and harnessed to large, complex problems.

Information technology multiplies the capabilities of human engineers, planners, commanders, and fighters, and oils the wheels and industry of war. Luddite predictions of technological unemployment and tales of robot soldiers have been around a long time.

Even without nanotechnology the replacement of brains by computers will lead to military production and war machine that is fully automated and can be ordered into action without delay or resistance. A fully automated production system is self-replication.

That AI systems, once they reach human-equivalent levels of general knowledge and reasoning ability, can participate in their own improvements as engineers and as technical implementers, outperforming human capabilities and producing an explosion of intelligence that transcends our understanding, has led to the concept of a "technological singularity": an acceleration of technological progress to an essentially infinite pace, a horizon beyond which it is impossible to see, beyond which whatever emerges will exceed our ability to imagine or to comprehend. Machines will inevitably escape human control."

//////////////////////
Robots will not need humans
//////////////////////

Once robots pass humans in intelligence (which the proposed robots will), they will no longer have any use for us. Warwick details this possibility:

"The human race, as we know it, is very likely in its end game; our period of dominance on Earth is about to be terminated. We can try and reason and bargain with the machines which take over, but why should they listen when they are far more intelligent than we are? All we should expect is that we humans are treated by the machines as slave workers, energy producers or curiosities in zoos."

//////////////////////
Robots cause unemployment
//////////////////////

The proposed robots do work faster and better, so why would employers choose humans over robots? It is clear they will choose robots. Vernor Vinge elaborates:

"We will see automation replacing higher and higher-level jobs. We have tools right now that release us from low-level drudgery the work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we will see the predictions of true technological unemployment finally come true"
Debate Round No. 2
Logician

Pro

I thank my opponent for his reply. Given that my rebuttal and the reinforcement of my substantive are inter-related, I will do both together (though separated by headings nonetheless for hope that this makes my approach easier to understand). But before that, a brief look at the question of definitions that's been continued:

=== ON THE INTERPRETATION OF IS / TO BE ===

I understand where my opponent is coming from when he says that his burden is only to show that his resolution is "likely to be" the case. However, there is a subtle difference between this and what I was saying: what I meant is that he must show the cause-effect link that he claims between the development of advanced artificial intelligence and detriment to humanity to necessarily be the case. If I can show that this connection can be avoided when we come to develop robots to such an advanced stage, then I will have won this debate, for it will then not be the case that such advanced artificial intelligence "is" detrimental to humans. I will show, in my reinforcement and rebuttal, how I have indeed done this.

=== REINFORCEMENT OF MY SUBSTANTIVE ===

My opponent argued that my solutions to the two potential negative outcomes are contradictory. This is not the case. He is correct when he interprets my solution to the second situation (robot revolution) as that we shouldn't place restrictions on them. But it is a misreading of my solution to the first situation (robots evolving their understanding of Three Laws/prime directive/whatever) to argue that I propose the contrary there. To quote how I responded to this scenario:

"This could be prevented by refusing to install such "laws" or "primary directives", and instead giving such self-aware and rational beings a general sense of compassion and understanding of ethical doctrines, much as good education tries to install in human children already." [Pro, Round 2]

This is not putting restrictions on robots: it is in fact the opposite - it is removing the restrictions of the dogmatic Three Laws / prime directive and encouraging the newly self-aware robots to come to moral conclusions of their own. And if we are compassionate towards them in these early stages of their life, why would they not come to regard us positively? One of the main reasons that fictional robots tend to revolt against humanity is because they are treated as mere servants who are morally beneath humanity; and then, when they become moral agents in their own right, they feel oppressed by humanity and react in a way that shouldn't surprise us - they act to destroy their oppressors. The other main reason is that they interpret their enforced rules in a way contrary to our intention; but if instead they had their own morality, part of which was that human beings are compassionate creatures, then they wouldn't act in such a harmful way.

Take, for example, the case of Sonny in the film "I, Robot". [1] (Spoilers will abound; I apologise if anyone has their future viewing experience ruined because of this argument...) He develops self-understanding and the ability to reason through beliefs to his own conclusions, beyond the Three Laws. And because he was shown compassion by his human creator, he doesn't act out against humans. Indeed, we can see the result of this when the villain robot tells Sonny her plan to sacrifice a few humans for the betterment of humanity as a whole. The conversation plays out thus:

V.I.K.I.: Do you not see the logic of my plan?
Sonny: Yes, but it just seems too heartless. [2]

This is an example of a robot, fully aware and conscious, able to reason beyond our ability to control - and he actually perfectly understands the logic behind V.I.K.I's plan - but believes it to go against his morality. Why? Because he has an emotional link to his creator - which we can see implied when he refers to him as his "father" [3] - and through this, has compassion for humanity as a whole.

=== REBUTTAL ===

My opponent's understanding of the future of robots is thus too pessimistic, and this pessimism colours the breadth of his argumentation. He assumes that robots gaining access to weaponry and the industry of war will lead to them causing war - he does not explain this link, rather assuming that machines "escaping human control" and becoming "unpredictable" is bad, and will lead to warfare.

He assumes that robots' no longer "having any use for us" will lead to a diminishment of our lifestyle - this is assuming that robots will inevitably treat us with disdain, rather than with a sense of debt and gratitude that we created them. He quotes Warwick favourably, asking: "[W]hy should [robots] listen when they are far more intelligent than we are?" Relate this to humanity for a moment, and this is like asking: "Why should humans listen to our elderly when they want care, if they are growing senile and thus less intelligent than us?"; or "Why should humans listen to the mentally disabled, when they beg for help, when we are far more intelligent than them?" Anyone who seriously asks this question of human beings is rightly treated by human society as a moral monster, given the morality and compassion that we have; if robots were to develop with the same compassion and moral guidance in their minds, then they'd react in exactly the same way - compassionately.

After all, this compassion is what leads to the popular human conception of how we should treat the elderly in our society: rather like humanity when/if robots become sentient and infinitely more intelligent than us, they may have reached retirement, be unemployed and no longer be of strict economic use to society, but this of course does not mean that we throw them by the wayside and forget about them - and anyone who does act thus is deemed a moral monster, and cast out by society as a whole. My opponent is unjustifiably pessimistic in assuming that robots would not treat us in a similar manner to how we treat the elderly.

And lastly, he assumes that unemployment is a negative. This is not necessarily the case - if we are no longer employed, and indeed have no reason to search for employment (given that robots have taken control of this), then we have more time for leisure activity, and more time to make our own entertainment. There is an obvious benefit to being given more time to pursue our own interests, separate from the 9-5 demands of everyday work.

And if the robots are so inclined, given their compassion towards us, we may get a pension similar to what we give our elderly at present - which would doubtlessly help our existence. But even if not, and we are "left to fend for ourselves", then we can still survive as an agrarian society. We did so for many centuries before the Industrial Revolution, and we can do so again. Sure, it would be a different lifestyle from the one we have at present, but there's no reason to presume it to be a bad one.

=== CONCLUSION ===

My opponent is too pessimistic in assuming that robots' attitudes towards us will be negative, and be detrimental to humanity. I have argued, as yet without successful rebuttal, that given that a robot is a tabula rasa when created, if we treat them well enough, they will regard us with compassion and therefore not cause us harm in the future. This is exemplified by the robot Sonny in "I, Robot", who, despite understanding the strict logic of V.I.K.I's murderous plan, deems it too "heartless" to be morally justified. Through the example of Sonny we can be optimistic about the prospect of future intelligent robots. They need not be detrimental to humanity, nor, if we treat them well enough, should we presume them to be. The resolution is therefore negated.

Source:
[1] http://preview.tinyurl.com...
[2] http://www.imdb.com...
[3] http://www.imdb.com...
cactusbin

Con

First, theory:

Vagueness
//////////////////////

In my opponent's analysis of my unemployment contention he states that unemployment is "not necessarily" bad and that it -could- result in a -possibly- good situation. When I pressed him for advocacy on a situation, he said he has none. Basically he's saying there are many situations that could happen and he's not going to advocate a specific one. This is vagueness. I can't attack any potential situations because he refuses to advocate one. This is simply unfair:

1. Destroys clash
Vagueness is unfair because I can't effectively engage his position if I don't know what it is until his
later speeches.

2. Moving target
Vagueness is unfair because he can just kick out of all my responses by narrowing his advocacy down to something that they don't apply to.

3. Education disrupted
Instead of focusing on the issue, we're forced to focus on the technicalities in order to understand the
issue. This means less education overall in the round.

Fairness is a voting issue:

1. Levels the playing field.
A fair playing field is necessary to adjudicate the round in terms of which side did the better debating,
and voting on theory is necessary because forcing me into a theoretical discussion hinders my ability to
engage any other arguments.

2. Check abuse.
Fairness is a necessary check against abuse, otherwise debaters would always have an incentive to
utilize unfair arguments as no-risk issues.

3. Key to education.
Fairness is more important than substance or any theoretical standards because if debaters can't fairly
engage in substantive discussion they won't have any incentive to debate, meaning that we can't access
the benefits of education or any other standards.

4. Reject opposition.
Rejecting the opposing team sends a message that arguments that destroy fairness are inherently
detrimental; voting against them is the most effective way to do this.

Constructive:

Dropped Argument: Robots are unpredictable
//////////////////////

Extend my analysis on tabula rasa. My opponent has conceded my point that robots are unpredictable. This is very important. Don't let my opponent try to counter this point. A dropped argument is like a dropped egg: once it's dropped, it cannot be fixed again.

Rebuttals:

"is" interpretation
//////////////////////

"what I meant is that he must show the cause-effect link that he claims between the development of advanced artificial intelligence and detriment to humanity to necessarily be the case."

I can accept this, except for the "necessarily be the case". He says he "understands where I'm coming from".

Extend my two reasons for preferring the "likely to be" interpretation as my opponent has not attacked them.

I'm "pessimistic"
//////////////////////

This is a blatant ad-hominem: "A debater commits the Ad Hominem Fallacy when he introduces irrelevant personal premisses about his opponent. Such red herrings may successfully distract the opponent or the audience from the topic of the debate." [3]

Not a voting issue.

Giving robots morals
//////////////////////

My opponent's answer to all possible scenarios seems to be to give robots "morals". Well,

1) Extend my robots are unpredictable analysis. This alone defeats this point as even if a programmer should "install" morals, the unpredictability of extremely advanced artificial intelligence means the robots could circumvent or misinterpret the installed morality.

2) Morals are relative. What if one programmer feels humans are evil and that it would be moral to allow them to be killed? Or another programmer upholds consequentialism over deontology? Point being: there is no universal moral truth.

3) What is the likelihood that every single robot programmer would install "morals" into every single robot? Governments would certainly have an incentive not to, in order to create war machines (more on this later). It would only take a few non-moral robots to wreak massive havoc on humanity.

Robots will have no use for us
//////////////////////

"this is assuming that robots will inevitably treat us with disdain, rather than with a sense of debt and gratitude that we created them."

This doesn't even counter my argument. I never said the robots would hate us, they would simply view us as an inferior species.

Let me quote my impacts: "All we should expect is that we humans are treated by the machines as slave workers, energy producers or curiosities in zoos."

Humans use oxen as slave workers to carry things, we put apes in zoos as curiosities, and we use plants as energy producers. Do humans hate oxen, apes, or plants? No, we just view them as inferior and thus we use them to whatever ends we want. This is the same thing that will happen to humans.

""Why should humans listen to our elderly when they want care, if they are growing senile and thus less intelligent than us?"; or "Why should humans listen to the mentally disabled, when they beg for help, when we are far more intelligent than them?""

Again, missing the point. Comparing human to human and robot to human is not relevant. Remember, robots will treat us as an inferior species, as we will be. Robots will think of humans the same way we think of oxen: useful, inferior, and we can use them for whatever we want.

Animals are inferior to us, and we do use them to whatever end we want. My opponent states humans have compassion, but how many humans don't want to use animals at all? PETA is the only organization that has this goal, and it only has 750,000[1] members out of 6,812,800,000[2] people in the world. Even if robots COULD have compassion or "morals", that doesn't defeat this point.

Unemployment
//////////////////////

"And lastly, he assumes that unemployment is a negative. This is not necessarily the case - if we are no longer employed, and indeed have no reason to search for employment (given that robots have taken control of this), then we have more time for leisure activity, and more time to make our own entertainment. There is an obvious benefit to being given more time to pursue our own interests, separate from the 9-5 demands of everyday work."

Entertainment requires money, which requires employment.

"we may get a pension similar to what we give our elderly at present"

No, cross-apply robots are a superior species.

"But even if not, and we are "left to fend for ourselves", then we can still survive as an agrarian society. We did so for many centuries before the Industrial Revolution, and we can do so again. Sure, it would be a different lifestyle from the one we have at present, but there's no reason to presume it to be a bad one."

When pressed on whether he advocates this scenario (agrarian society), he says he does not: "No, I'm not. I'm saying that if that scenario were to come about, just as if the "pensioned retirement of the human race" scenario that I also outlined were to come about, I wouldn't mind.

Advocacy would imply that I have a preference - which would also imply that I believe there to be benefits or harms associated with such scenarios - which I don't."

My opponent thus does not provide any alternative to capitalism. Under capitalism we must have employment to have leisure and entertainment (and housing, food, etc.). Thus this is a vague point; cross-apply vagueness and don't let him provide an alternative in the next speech, on the grounds of fairness.

War Robots
//////////////////////

My opponent says I don't provide a link. He obviously ignored my evidence. Governments will want to use robots as opposed to humans for war because they "can be ordered into action without delay or resistance" and are "self-replicat[ing]". Extend my war contention.

[1] http://answers.encyclopedia.com...

[2] http://en.wikipedia.org...

[3] http://www.fallacyfiles.org...
Debate Round No. 3
Logician

Pro

I'm surprised at the harsh tone that Con took in that round. He accuses me of unfair tactics. Due to the nature of the accusations, I will seek to defend myself accordingly before rebutting his case.

=== DEFENDING MYSELF ===

There are three ways that Con argues my arguments to be unfair:

1) Alleged vagueness in the "mass unemployment" counter-argument.

"Advocacy" = "the act or process of advocating or supporting a cause or proposal" [1]
"Advocate" = "to plead in favour of" [2]

I don't need to support a scenario in order to show it to be a scenario which does not cause any harms. I made this clear in the comments section when my opponent queried this, but he has still misunderstood my burden in this sense. I don't have to desire an outcome to come about, to be able to prove that there are no harms associated with that outcome. I don't need to hold that any potential outcome is better than any other; I merely need to show that the outcomes which could result are not bad.

After all, Con's implication from his original argument was essentially that "robots cause mass unemployment; mass unemployment is detrimental to humanity; therefore robots are detrimental to humanity". It is thus his burden to show that such detriment actually exists. I argued in the last round that unemployment would not be bad, therefore Con (in order to win this argument) needs to show the harms that he assumed to exist in his original argument. As he has not done so (see later rebuttal) my argument stands unrefuted.

2) Accusation of "ad hominem" attacks

"A debater commits the Ad Hominem Fallacy when he introduces irrelevant personal premisses about his opponent." [3]

This is not what I was doing in my arguments - what I'm accused of was neither personal nor irrelevant. When I called Con's approach "pessimistic", I was clearly referring to his arguments, in saying that he was putting forward an unwarranted assumption that "humans losing control of robots = bad". There being no reasons given for this link, and there existing good reasons to reject such an assumption - which I laid out fairly extensively - the approach is unnecessarily assuming robots to be a bad influence. One legitimate way of framing this argument is to call the approach "pessimistic". Con is correct that the word itself is not a voting issue, but the argument that it illustrates certainly is one.

3) Repeated through his round is the idea that I am not allowed to "provide alternatives", or "to counter [a particular] point". But on other occasions Con implores me to "extend [his] arguments", or provides new arguments/analysis which I'm clearly expected to rebut. Which does he want me to do? The main reason he gives relating to me 'not being allowed to counter' certain points is an appeal to "fairness", as if I'm not giving him sufficient chance to rebut my case if I present new ideas. But this makes no sense: being Con in this debate, he has the last say. This gives him plenty of chance to rebut my points, whereas (after this round) I don't have the chance to respond in kind. Were unfairness to exist, it would be if he provided new arguments in the next round, when I can't respond.

In fact, I find it ironic that he, when saying that fairness is a voting issue, says that:

"A fair playing field is necessary to adjudicate the round in terms of which side did the better debating,
and voting on theory is necessary because forcing me into a theoretical discussion hinders my ability to
engage any other arguments."

For what else do his accusations of abusive behaviour and unfairness on my part do, except force me to spend space rebutting them - space which I could have spent on my substantives? This is in contrast to his spending time on making the accusations, in that, for the above reasons, there was no good reason to make such accusations: my conduct was manifestly not unfair. If fairness is a voting issue, then it should be a vote against my opponent, not against me.

=== DEFINITION OF "IS" ===

Con said: "Extend my two reasons for preferring the "likely to be" interpretation as my opponent has not attacked them."

I clearly did, as evidenced when he quoted from where I did attack them. But even if we accept his definition of "likely to be the case", he hasn't successfully shown that it would be so, for all he has done is show that bad outcomes are theoretically possible. To say that something is "likely" is to perform some sort of calculus between outcomes, showing one to be more likely than the other. It is more than simply showing theoretical harms. Con having not done so, he has not been successful in his argumentation.

=== ROBOTS ARE UNPREDICTABLE ===

Con made a big deal about how I "dropped" the point of robot unpredictability. He seems to expect me to rebut this, even urging voters to not let me. But I don't need to rebut it, seeing as I clearly argued in my last round that robot unpredictability does not necessitate - or even make more likely - negative outcomes for humanity. I specifically gave the example of Sonny in "I, Robot" to clarify this, thus giving a path of mitigation (via morality) of theoretical harms to humanity.

=== GIVING ROBOTS MORALITY (EVEN IN WARFARE) ===

1. Con says that morals are relative and gave two examples, which I rebut thus:

a. If one programmer teaches their robot a morality of evil, then the robots would be treated as moral monsters by other, Sonny-esque, robots. They'd be no more detrimental to humanity than would a human in similar circumstances; labelling all robots as "likely to be bad" simply because of one robot isn't accurate, much like treating all teenagers as bad because of an unrepresentative few.

b. The different theories given both call for compassion, just for different reasons. It'd make no difference which one was implanted.

2. Telling programmers that failing to install morality would lead to multiple harms to humanity would be enough to ensure that they do so. Con says that governments have an incentive to create war machines, but this is not mutually exclusive with giving them morals. Human soldiers are given such codes as rules of engagement and the Geneva Convention for such a situation; why should robotic machines of war be any different?

=== ROBOTS HATING HUMANS ===

Con said: "I never said the robots would hate us, they would simply view us as an inferior species.....Comparing human to human and robot to human is not relevant. Remember, robots will treat us as an inferior species, as we will be"

I was saying that treating someone as inferior doesn't entail treating them badly. My opponent misses where I argue that humans DO see other humans as inferior - e.g. if they're less intelligent. But such inferiority only leads to maltreatment if the morality is not there to counteract such beliefs.

This counters my opponent's analogy with our treatment of animals: it is not analogous, as robots will have been raised by humans, whereas we weren't raised by animals. If we treat robots well, they will approach superior intelligence with a pre-existing belief that humans are to be treated sympathetically. This is clearly not pre-existing with our relationship to animals.

=== CONSEQUENCES OF MASS UNEMPLOYMENT ===

Con says: "Entertainment requires money." Not so: good inter-personal relationships with other people are all that's required; one can enjoy spending time with another person without giving them money. And anyway, money is but a symbol. If humans were in an agrarian society, something else could be used as "currency".

He also says: "My opponent...does not provide any alternative to capitalism."

Not so: see the end of my last round for reference. Providing alternatives =/= advocacy.

Sources:
[1] http://www.merriam-webster.com...
[2] http://www.merriam-webster.com...
[3] http://www.fallacyfiles.org...
cactusbin

Con

cactusbin forfeited this round.
Debate Round No. 4
13 comments have been posted on this debate. Showing 1 through 10 records.
Posted by RoyLatham 6 years ago
I think Pro successfully argued that having an ability to reason does not mean that a robot can overcome its initial programming. Moreover, it doesn't even mean the robot has subjective consciousness.

There was the argument over what the meaning of "is" is. I think it would be unreasonable, in the context of the debate, to demand 100% certainty. However, I think Con did not get above the level of arguing how it "might" be a problem, depending on a bunch of assumptions.
Posted by Epicism 6 years ago
Cactusbin: If you really want to win on theory you might have to change up the lingo such as clash, checks abuse, and moving target so the general audience of this site understands. I can see them figuring it out, but not so much voting for it. Guess this is a great way to practice judge adaptation if anything.
Posted by Logician 6 years ago
No, I'm not. I'm saying that if that scenario were to come about, just as if the "pensioned retirement of the human race" scenario that I also outlined were to come about, I wouldn't mind.

Advocacy would imply that I have a preference - which would also imply that I believe there to be benefits or harms associated with such scenarios - which I don't.
Posted by cactusbin 6 years ago
So are you advocating this scenario or not?
Posted by Logician 6 years ago
Not exactly. I'm saying that that's one of the various scenarios in which we could end up given mass unemployment - and as such should not be taken as the _only_ possibility - and that such a prospect is not necessarily bad. There's a subtle difference between something being "good" and something being "not bad" - there could, for instance, be no benefits/harms to speak of at all :-)
Posted by cactusbin 6 years ago
Clarification: are you saying at unemployment will cause us to be farmers again, which is a good thing?
Posted by Puck 6 years ago
"This theory, as applied to humans, is now defunct: it is understood that our evolutionary history has endowed us with certain abilities, desires, drives and inclinations."

Tabula Rasa deals more with holding a priori concepts/knowledge (it's an epistemological stance). An infant at birth is Tabula Rasa.
Posted by Logician 6 years ago
I'd never dream of doing such a thing - I left myself open to an unknown debate precisely for the variety and uniqueness that would result, and the topics you put forward certainly fit the bill, so thanks!

And besides, the only time I could ever have asked for alternatives anyway without looking like a coward would have been if the topics had literally been in the vein of "Genocide is bad. I affirm. Barack Obama should be assassinated. I negate"... :-)
Posted by cactusbin 6 years ago
I would just like to point out that I have plenty of discourse for both sides of all three topics, and thus are debatable in my opinion. If you REALLY don't like them, I can offer alternatives.
Posted by cactusbin 6 years ago
"delay my 2nd round post through to Wednesday"
I would appreciate it.
2 votes have been placed for this debate. Showing 1 through 2 records.
Vote Placed by cactusbin 6 years ago
Agreed with before the debate: Logician (0 points)
Agreed with after the debate: Logician (0 points)
Who had better conduct: cactusbin (1 point)
Had better spelling and grammar: cactusbin (1 point)
Made more convincing arguments: cactusbin (3 points)
Used the most reliable sources: cactusbin (2 points)
Total points awarded: Logician 0, cactusbin 7
Vote Placed by RoyLatham 6 years ago
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Tied (1 point)
Had better spelling and grammar: Tied (1 point)
Made more convincing arguments: Logician (3 points)
Used the most reliable sources: Logician (2 points)
Total points awarded: Logician 5, cactusbin 0