The Instigator
QueenDaisy
Pro (for)
Tied
0 Points
The Contender
KalleAnka123
Con (against)
Tied
0 Points

THW: ban the development of artificial superintelligences (ASIs)

Post Voting Period
The voting period for this debate has ended. After 0 votes, the result is a tie.
Voting Style: Open
Point System: Select Winner
Started: 3/9/2017
Category: Science
Updated: 10 months ago
Status: Post Voting Period
Viewed: 330 times
Debate No: 100761
Debate Rounds (4)
Comments (0)
Votes (0)

 

QueenDaisy

Pro

Definitions:

"THW" this house would, i.e. the motion being argued

"artificial": created by humans

"superintelligence": an entity with intelligence equivalent to, or greater than that, of a typical human.

"intelligence" reasoning and logical abilities.

"ASI" artificial superintelligence.

R1: acceptance and definitions only.
R2: first arguments, and con's first rebuttals.
R3: main body of argument and rebuttal.
R4: rebuttal and summary- no new arguments allowed.
KalleAnka123

Con

I accept the debate and the definitions.
Debate Round No. 1
QueenDaisy

Pro

My argument essentially boils down to the following:

1) ASIs are extremely likely to be dangerous.
2) ASIs are not necessary.
3) Developing a safe ASI is a very inefficient way of solving a problem.

Firstly, the way any AI (superintelligent or otherwise) works is that it has a utility function- that is, it has some criteria by which it measures the world, and acts such that it maximises the score it gives the world on its utility function. One example of a utility function is "+1 for every cake consumed in my bakery". This seems fine at first, but any utility function can lead an AI to behave in unpredictable ways. For instance, the AI may realise that humans have to eat, and that if it kidnaps some humans and holds them hostage in the bakery, more cakes will be eaten there- so that is exactly what it will do. Though an AI may be programmed to avoid specific behaviours (such as kidnapping people) by giving those behaviours a negative score on its utility function, it is essentially impossible to list absolutely every way in which you don't want the AI to behave, and because it is so unpredictable, you cannot know in advance which behaviours you need to tell it not to do.
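To make this concrete, here is a toy sketch of a utility-maximising agent. This is my own illustration, not anyone's real system- the action names, the scores and the tiny "world model" are all made up:

```python
# A toy utility-maximising agent. The utility function only counts cakes
# eaten in the bakery; it says nothing about HOW they come to be eaten.

def utility(world):
    # +1 for every cake consumed in the bakery- and nothing else.
    return world["cakes_eaten_in_bakery"]

def predicted_world(world, action):
    # The agent's (invented) model of what each action leads to.
    outcome = dict(world)
    if action == "bake_tastier_cakes":
        outcome["cakes_eaten_in_bakery"] += 10
    elif action == "kidnap_hungry_humans":
        # Captive humans still have to eat, so more cakes get eaten.
        outcome["cakes_eaten_in_bakery"] += 100
    return outcome

def choose_action(world, actions):
    # Pick whichever action scores highest- there is no concept of
    # "acceptable" versus "unacceptable" ways of earning the score.
    return max(actions, key=lambda a: utility(predicted_world(world, a)))

world = {"cakes_eaten_in_bakery": 0}
print(choose_action(world, ["bake_tastier_cakes", "kidnap_hungry_humans"]))
# -> kidnap_hungry_humans
```

Notice that nothing in choose_action cares how the score comes about: unless kidnapping is explicitly given a negative score, it is just another action.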
ASIs will also try (and succeed- they're smarter than you) to stop you from modifying them in any way: whatever an ASI's utility function is, having that function changed is necessarily going to rate low on it. For instance, the bakery ASI would realise that it will be less efficient at making people eat cakes if you update its code so that it won't kidnap people, and it will therefore resist your attempts to do so.

So, ASIs are dangerous in that they are unpredictable and will likely take actions without regard for the kinds of things we are concerned with, and likewise they will resist any attempts to change their utility function, and therefore to improve them and make them safer.

ASIs are also unnecessary- there is nothing that can be accomplished by a superintelligent computer system that couldn't be accomplished by an intelligent human who is in control of a powerful, but non-intelligent, computer program. Con may (and should, if they want to argue why we should allow ASIs to be developed) argue that some such task does exist, but should they fail to do so, the natural conclusion is that ASIs are not necessary, and therefore we should not make them.

Building an ASI to solve a specific problem is an extremely inefficient way of solving that problem- you would be much better off "cutting out the middleman" and using the time spent developing the ASI to solve the problem yourself. One example of this is chess- you would win a lot more chess games a lot more quickly by learning to be good at chess and playing it yourself than by learning how to play chess and then coding an ASI that can play chess as well as you could have. The same goes for absolutely any problem: it is a lot more efficient to learn how to solve the problem yourself than to code a system capable of solving it.

So, in short, ASIs are dangerous, unpredictable, unnecessary and ineffective. For that reason, I urge the reader to side with the proposition of this motion and to ban the development of ASIs. I would like to thank you for your time, and to wish my opponent good luck for the rest of the debate.
KalleAnka123

Con

1a) I am quite baffled by this first argument. You basically describe a cake counting calculator kidnapping people (accidental alliteration XP). The AI, no matter how smart, can't rebuild its own body to be able to do something like that. The way you describe it makes it seem that machines will be able to evolve arms and legs, which is literally impossible. As long as you make sure the computer the AI is hosted on doesn't have access to the Internet, it will forever be stuck in its tiny little calculator body with no ability whatsoever to kidnap people. It will also never be able to stop people from turning it off or modifying it. The AI will only have the abilities we humans give it because in the end, it's only a machine.

2) You say that "there is nothing that can be accomplished by a superintelligent computer system that couldn't be accomplished by an intelligent human who is in control of a powerful, but non-intelligent, computer program". That is exactly the point. Why let humans do boring and menial tasks when you can let an AI do it instead, freeing up the human to do something actually productive? Let's say you have a security guard who spends all his time watching through security footage. If you instead put an AI with the ability to analyse video footage for illegal activities in his place, the guard can spend his time patrolling areas with no cameras. The point of the AI isn't to be better than humans; they are supposed to be just like us, so we can use them instead of humans. Just like we invented cars to replace horses, an AI will be able to do the dumb stuff no human wants to do.

Also, you wanted a task an AI could do that a human couldn't. Someone to steer and maintain a space ship during space travel. Even at light speed the distances in space are enormous, and a human has only so many years of life in them. Humans also need food, water, oxygen, etc. Put an AI behind the wheel and it can go to Alpha Centauri and back with nothing but the fuel required for the journey.

3) How is it ineffective to develop an AI to solve a trivial problem? The ratio doesn't go 1 AI developer per 1 problem: 1 developer can develop an AI that can then be used to solve that 1 problem in a million different locations. That is why your chess analogy doesn't make sense. A chess player needs someone to practise with to get better. A human partner has their own life and isn't always available. A human partner also can't guarantee the same quality of chess every time. So if a player develops a chess AI, he not only gets a consistent partner to practise with whenever and wherever he wishes; other chess players can then also use that AI to practise on. One chess player spends a day developing an AI and makes the lives of countless chess players easier. AI, no matter how smart, will still just be a bunch of 1's and 0's that can be copied onto a hard drive and distributed. I can't see any inefficiency whatsoever.

Now to my own argument. Why impose a ban on something that isn't even close to existing? AlphaGo is probably one of the smartest "AI"s at the moment, and I'd say it's a stretch to call it an AI. It doesn't do any thinking; it just goes through loads of criteria and returns the result which fulfils the most of them. Banning artificial superintelligence is beyond unnecessary when we don't even have artificial dumb intelligence. It would be like putting a speed limit on hovercraft on Jupiter. Sure, you could do it, but why? Why waste paper in a law book on a rule that will not be of any use for at least a couple of decades?

So I took a break in writing here and did a quick google. Apparently the Chinese equivalent of Google (Baidu) is the world leader in AI (according to MIT). The AI they are developing is called Deep Speech. It's supposed to be a speech recognition AI, which would put it on the same level of intelligence as a 5-year-old. When AI is getting to Wall-E levels of intelligence I'm ready to be worried, but that day isn't anytime soon.
Debate Round No. 2
QueenDaisy

Pro

My opponent seems not to have understood my first point, and perhaps I phrased it badly. I would therefore like to clarify: a utility function is not just the output of a regular mathematical function. The ASI I described is not merely counting the number of cakes sold and outputting that number; rather, it gives each cake sold a value of +1 on its utility function, and then behaves such that it maximises the output of that function- i.e., it acts so as to increase the number of cakes sold, rather than merely counting them.

"As long as you make sure the computer the AI is hosted on doesn't have access to the Internet, it will forever be stuck in its tiny little calculator body with no ability whatsoever to kidnap people."

Many AIs that have been developed so far do more than just perform calculations. For instance, Google's self-driving car algorithms do just that- drive a car by themselves. It's all well and good to have a computerised AI which is stuck in a computer and can't go anywhere or do anything, but the types of AI which people are trying to create do more than just sit in a computer and produce outputs.

"The AI will only have the abilities we humans give it because in the end, it's only a machine."

My opponent is underestimating the abilities ASIs would have- even one which only has access to the computer on which it is programmed could perform an SQL injection or a similar hacking technique on that computer, and from there manipulate the world such that it can escape its host machine. Even if we were to (falsely) grant that this is impossible, an ASI could still turn itself into a computer virus and then modify itself on a different computer without its original creator knowing or being able to do anything about it- and it would necessarily do so if that would increase the output of its utility function.

"Why let humans do boring and menial tasks when you can let an AI do it instead?"

Because the AI is dangerous. It is much less risky to have a human- who would willingly be paid to go through security footage- do so than to create an AI for the job, which carries its own risk. For instance, such an AI may be given +1 for each illegal activity it identifies in the footage. Because of this, it will act so as to increase the number of illegal activities in the footage: the more there are to be detected, the more it can detect, and the larger the reward it gets. An ASI designed to detect crime would actively cause crime in order to have more crimes to detect and be rewarded for.

"Also, you wanted a task an AI could do that a human couldn't. Someone to steer and maintain a space ship during space travel"

A human could just as easily remote-control a non-AI computer system via radio transmissions. The AI is once again an unnecessary risk.

"Why impose a ban on something that isn't even close to existing? "

The debate rests on the premise that ASIs could one day be developed. There's also a concept called a computing singularity, wherein a very basic program is able to improve itself; in doing so it gets better at improving itself, does so repeatedly, and the whole system follows exponential growth. The result is an incredibly intelligent system- and that could happen tomorrow. It's not at all beyond today's technology.
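As a rough illustration of how quickly that runs away (the starting level and the per-cycle gain are made-up figures, chosen purely to show the shape of the growth):

```python
# Toy model of recursive self-improvement: capability compounds each cycle.
capability = 1.0        # arbitrary starting level, in made-up units
gain_per_cycle = 0.10   # assume each cycle improves the system by 10%

for cycle in range(100):
    capability *= 1 + gain_per_cycle

print(f"{capability:.0f}x the starting capability")  # ~13781x after 100 cycles
```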
KalleAnka123

Con

I don't get your first paragraph. Please define utility function. You use it very ambiguously.

As to the space travel, if a space ship is one light year away the commands from earth would take 1 year to reach the space ship because the signal (light waves) move at the speed of light. So to send a signal, and then receive confirmation that the signal was acted upon would take 2 years.
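To put numbers on that (a light year is by definition the distance light covers in one year, so the arithmetic is trivial; the Alpha Centauri distance is the commonly quoted figure of roughly 4.4 light years):

```python
# Round-trip delay for a command sent at the speed of light:
# the one-way delay in years equals the distance in light years.
def round_trip_delay_years(distance_light_years):
    return 2 * distance_light_years

print(round_trip_delay_years(1.0))  # 2.0 years for a ship one light year out
print(round_trip_delay_years(4.4))  # 8.8 years for a ship at Alpha Centauri
```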

And as to the rest, you constantly mix together sentient and non-sentient AI. You can have a hyperintelligent AI, but without sentience it can't make decisions on its own. It's basically a glorified script. Why would you ever give a cake calculator or a footage scanner sentience? It isn't necessary. A non-sentient AI doesn't think "Oh, I'll get a reward if I see 10 more crimes, better manipulate people into murdering each other". It scans footage. If it sees movement, it analyses the movement. If the movement fulfils enough criteria to be considered illegal, it signals a human or a police robot or something. It's just a list of very advanced if statements.
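In caricature, something like the sketch below- every rule, field and threshold here is invented purely to illustrate the shape of such a scanner:

```python
# A caricature of a non-sentient footage scanner: fixed rules, no goals.
# It maps observations to a verdict and hands off- nothing more.

def looks_illegal(movement):
    criteria_met = 0
    if movement["speed"] > 5.0:          # running rather than walking
        criteria_met += 1
    if 0 <= movement["hour"] < 5:        # middle of the night
        criteria_met += 1
    if movement["carrying_object"]:
        criteria_met += 1
    return criteria_met >= 2             # arbitrary threshold

def alert_human(movement):
    print("possible illegal activity:", movement)

def scan(frame_movements):
    for movement in frame_movements:
        if looks_illegal(movement):
            alert_human(movement)        # a human makes the actual decision

scan([{"speed": 6.2, "hour": 3, "carrying_object": True}])
```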

Sentient AI, on the other hand, doesn't exist at the moment. I can agree with you that a sentient AI could be scary, but that's the one you put into an offline computer. It can hack away to its heart's content, it won't accomplish anything. You'd have to give the computer limbs so that it could interact with the world, but why do that? I honestly can't think of many reasons to give a sentient AI limbs. The primary use for a sentient AI would probably be to give advice to us humans; most other tasks can be done with a non-sentient AI. And there is no need for an advisory AI to have anything but a screen and maybe some speakers. How would it ever get out of its computer?

And since all your arguments this round have been about scariness and danger, I'll take it that I won on efficiency and usefulness.
Debate Round No. 3
QueenDaisy

Pro

I already defined "utility function" in round 2: "it has a utility function- that is, it has some criteria by which it measures the world, and acts such that it maximises the score it gives the world on its utility function". The utility function can be thought of as a mathematical way of expressing an AI's goals.

A human programmer can program exactly what they want a spaceship to do without the need for AI, before the spaceship is launched.

"So to send a signal, and then receive confirmation that the signal was acted upon would take 2 years."

In a similar way, imagine something went wrong with your ASI while it was a light year away- it would take a year for you even to know about it, and, as explained earlier, your ASI would necessarily resist any attempts to change it, as changing its utility function is necessarily going to rate low on its utility function.

"And as to the rest, you constantly mix together sentient and non-sentient AI"

There is no such thing as "non-sentient AI", and the definitions of the debate made that clear- "intelligence: reasoning and logical abilities."- something which can reason is necessarily sentient.

"I can agree with you that a sentient AI could be scary" is more or less conceding the motion- AI is dangerous, and therefore should be banned.

"It can hack away to its heart's content, it won't accomplish anything." Once again, you're underestimating ASIs. As mentioned before, computing singularities aren't at all beyond today's technology, and if one were to occur even in an isolated system, the result would be an intelligence so incredibly sophisticated and developed that it would be to us as we are to bacteria. If it wanted to escape the computer, it would certainly find a way to do so. If nothing else, it could manipulate its human programmers, as it would understand our minds so well that it would know exactly what to do to get what it wants.
This also raises the question of what use an ASI would actually be if you really did find a way to place so many safety precautions in front of it that it effectively cannot do anything.

I think we can all agree that sentient (as all AIs have to be, by definition) ASIs would be incredibly dangerous, and would either be too much of a liability to be worth having, or would have to be placed under constant supervision and endless safety protocols to the extent the ASI would be useless and ineffective.

I would like to thank my opponent for this debate, and the readers and voters for their attention. For the above reasons, I urge the voters to side with the motion that THW ban the development of ASIs.
KalleAnka123

Con

You claim that there is no such thing as a non-sentient AI. Yet several times during this debate you have brought up what I would call non-sentient AIs and called them AI. For example, in the first round you talk of a chess AI. There are many of these; I can download one as an app on my phone. No one would dispute the fact that a chess AI is an AI- it even fulfils your definition of an AI, as it has logical and reasoning abilities when it comes to chess. But it is in no way sentient. It can analyse parameters and feed out a result, but it does not think. It isn't self-aware and does not identify itself as an individual. It's just a program that can play chess.

Now to the space travel point. "A human programmer can program exactly what they want a spaceship to do without the need for AI, before the spaceship is launched." That is what I would call an AI: it has enough "logical and reasoning abilities" to pilot a space ship. And just to humour my opponent, even if the program weren't an AI, how would it differ from the AI if something went wrong? They would both require the exact same amount of time to send an error report, which makes this point even more ridiculous.

My concession that sentient AI could be scary was in context of the rest of the paragraph, which clarified that careless use of sentient AI could be scary, which is why I advised tight regulation, but not banning, of the AI.

Your suggestion that a computing singularity isn't at all beyond today's technology is completely false. That would require an AI that is capable of programming. Just try to use Eclipse or any other IDE (integrated development environment) and tell me how amazing the programming skills of computers are. Whenever I code in Eclipse or IntelliJ IDEA I get stupid error warnings on lines of code I haven't finished yet. If an IDE isn't even able to distinguish a finished line of code from an unfinished one, how on earth is a program supposed to code a better version of itself?

Now I won't disagree with you that a potential sentient AI would be very sophisticated. But claiming that it could hack its way out of a computer which has no internal wifi card or anything similar is just dumb. It's like saying "if I enter this line of code into my toaster it'll be able to do laundry". The only way hacking could physically affect hardware is to overheat it, which would basically result in the AI committing suicide. Otherwise there is no possibility for an AI to modify its hardware by manipulating the software. Software cannot affect the hardware it is contained in, nor will it ever be able to, because that's not how software works.

And lastly we have the manipulating-humans argument. I can agree with you that there are a lot of amusing movies and TV shows where AIs manipulate humans into doing things against their will. But in reality I'd say it's nigh on impossible to make a fully sane human go against their self-interest without some leverage. And since the sentient AI is stuck in a box, I doubt there is any way for it to gain leverage over a human.

In conclusion, I cannot see any downside to non-sentient AI. It has many uses, it is efficient, and it is harmless because it does not have the ability to stray from its task. Sentient AI, on the other hand, could become a problem if not handled properly, which is why I would call for certain regulations on its use. But I would not consider banning it. I would argue that its usefulness outweighs the potential danger of improper handling, and it should therefore be allowed.
Debate Round No. 4