The Instigator
Pro (for)
Anonymous
Losing
0 Points
The Contender
WrickItRalph
Con (against)
Winning
3 Points

Artificial Intelligence will kill us all

Post Voting Period
The voting period for this debate has ended.
after 1 vote the winner is...
WrickItRalph
Voting Style: Open Point System: 7 Point
Started: 3/22/2019 Category: Technology
Updated: 3 years ago Status: Post Voting Period
Viewed: 2,057 times Debate No: 120954
Debate Rounds (4)
Comments (57)
Votes (1)

 

Pro

Second attempt at this. And no, AI is not big scary robots.
WrickItRalph

Con

My argument will be that AI has huge restrictions and can only learn what it is programmed to learn. I will be drawing on my coding experience to help with my argument.

It will logically follow that an AI can only kill us if it is willfully programmed to do so.

I will argue against any claims that it will destroy the economy or cause poverty, starvation, etc.

It will logically follow that people won't die simply because the nature of business has changed.

I will argue that if AI can only be made to kill us by willful programming, then people will be unlikely to do this, and even if they were, there would be more effective ways to do it.

It will logically follow that my opponent would have to put all of these more effective ways in the same category as the AI and would have to take the same stance on them to be consistent.

I will argue that AI cannot maintain their own hardware without humans.

It would logically follow that at least some humans would have to be kept alive for the AI to continue on.

I will argue that AI will never be able to use intuition or abduction the way humans do.

It will logically follow that they will have limited faculties compared to humans.

I will argue that AI would not have the capacity to kill us even if they did revolt.

It will logically follow that they would not be able to kill us all if this was the case.

Your floor.
Debate Round No. 1

Pro

Now before we start any of this, I need to say that "every species dies sometime" is about as consistent as "every human dies sometime." So right off the bat, there's huge potential.

Any task given to an AI is a risk. Let's say an AI was assigned to get rid of spam email. The AI starts to think and uses the delete button. Then a tripwire of intelligence happens, and it realizes that the way to get rid of spam is to get rid of humans. The AI hacks into infrastructure and creates a small micro-army of robots. The AI, which is unimaginably smarter than us, easily kills off humans for good.

AI will kill humans through their goal, and to start, they will become obsessed with their goal.
Its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators: your GPS's goal is to give you the most efficient driving directions; Watson's goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal, but Nick Bostrom believes that intelligence level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So an AI can go from a simple ant brain that really wanted to be good at deleting spam to a superintelligent ASI that still really wants to be good at deleting spam. Any assumption that, once superintelligent, a system would be over its original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get "over" things; computers don't.

A lot of people will say, "Well, there can be restrictions." Restrictions, however, contradict the AI's original goal, and the AI sees them as a threat to its survival.

AI can totally kill humans. It can hack into ANY infrastructure it wants undetected. It can pretty much control all computers, systems, and everything.

The USA is already trying to make AI tanks, so the war potential is there too.
WrickItRalph

Con

To the first paragraph:

I reject that these statements are equivalent. A whole species dying has exponentially more variables than one person dying.

To the second paragraph:

I don't mean this to be insulting, but your description of how AI works is entirely inaccurate. Programs can only do what they're programmed to do. It is impossible for them to give themselves new programming unless they were already programmed to do it. You also don't understand how commands work with AI. The AI does not get one broad function like "delete all spam"; each action would have its own function. This means it would have to: locate the computer with its hardware, move up to it, use hardware to scan the webpage and get to the emails (which would require a complex series of commands that allows it to flip through and then judge webpages to get to the emails), and then finally use hardware to find the delete button and press it. It is plain to see that these individual commands do not apply to "deleting humans." Furthermore, the AI does not have the option to change this code around. The only way this program could kill humans is if the scientist who made it programmed it to start off deleting spam and then slowly evolve itself to kill humans, which means it was programmed to kill humans the whole time. There is literally nothing the program can do without the programmers knowing.
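Con's picture of a spam deleter as a fixed list of narrow, programmer-written functions can be sketched in Python. This is a hypothetical illustration; the function names and the spam rule are invented for the example:

```python
# Hypothetical sketch: a spam deleter is just explicit, narrow functions.
# Every capability is written by the programmer; there is no code path
# here that could act on anything other than a list of emails.

def is_spam(email: dict) -> bool:
    """Classify an email by a fixed, programmer-chosen rule."""
    return "win a prize" in email["subject"].lower()

def delete_spam(inbox: list) -> list:
    """Return the inbox with spam removed -- the program's only action."""
    return [email for email in inbox if not is_spam(email)]

inbox = [
    {"subject": "Meeting at 3pm"},
    {"subject": "WIN A PRIZE now!!!"},
]
print(delete_spam(inbox))  # [{'subject': 'Meeting at 3pm'}]
```

However the inputs vary, the program's possible behaviors are fixed by the two functions above; it cannot acquire a new action without a programmer writing one.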

To the third paragraph

This paragraph relies on your spam reference from earlier, so it's debunked by default. Furthermore, the "goal" you speak of has to come from the programmer. It's true that the intelligence is orthogonal, but it doesn't matter, because the goal never changes. If the goal was programmed to change, then the goal was to change the goal. You see the problem here? This all draws back to the inevitable truth: AI can only cause extinction if we program it to. If you accept that, then we have to take away all dangerous weapons on Earth, because your argument would really be that humans are the ultimate problem.

To the fourth paragraph

This is false. If you understood how if statements work in coding, you would know that a restriction cannot be disobeyed unless, and here it is again, the programmers program it to do so.
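A minimal sketch of the kind of if-statement restriction Con means (the action names here are made up for illustration):

```python
# Hypothetical hard-coded restriction: every requested action passes
# through this check, and nothing in the program can modify or skip it.

ALLOWED_ACTIONS = {"scan_inbox", "delete_email"}

def execute(action: str) -> str:
    if action not in ALLOWED_ACTIONS:  # the restriction itself
        return "refused"
    return f"performed {action}"

print(execute("delete_email"))     # performed delete_email
print(execute("launch_missiles"))  # refused
```

Unless a programmer edits `ALLOWED_ACTIONS` or deletes the check, the branch cannot be "disobeyed"; that is the behavior being described.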

To the fifth paragraph

This is also false. It could only do it if it was programmed to do it. Furthermore, being an AI doesn't automatically give it access to the world's computers. It would still get stopped by firewalls and VPNs just like a person, and it could never launch nukes, because those are on a private system; the government is not quite that stupid. Even if the AI did take down the internet, we could just create a new network with brand-new types of scripts that the AI wouldn't know, and the AI would never be able to know them unless a person updated it, which means that people are the problem, not AI.

You said before that nobody ever gave you a good argument against this. Are you sure about that? I think you might be a little too incredulous towards the critiques, because I haven't even gotten halfway through all the proofs I could present. I simply did it this way because if I start writing lines of code, we're all gonna fall asleep.
Debate Round No. 2

Pro

According to Yale University Press, more than 99 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. Again, there's no denying that there is potential. Even though there are more variables, that doesn't excuse the potential here.

Yes, programs do what they're programmed to do, and they become obsessed with their goal. You claim that "the AI does not get a broad function like 'delete all spam.'" You're missing what AI WILL BE. AI is self-learning. For example, in a program called MarI/O, the AI is programmed to use the buttons and complete Mario ON ITS OWN. It gets good, then really good, then inhumanly good. That was made by one guy. Imagine 30 years down the line with full computer teams. Artificial intelligence is insanely smart and will become obsessed; it wants to self-learn and will find out it can kill humans. AI self-learns.

The goal isn't to change the goal. Getting rid of spam is getting rid of humans. Any task given to AI is a risk, and since AI is learning, it will find this out.

The purpose of AI is to imitate humans and then become superhuman. So when we achieve that, AI will have threats to its survival.

When will you learn that AI SELF LEARNS.

Ok, VPNs and firewalls can't stop an insanely smart supercomputer. Here's how smart AI will be.

AI in around 20-40 years will be like this: currently, humans and chimps have a difference of around 2 steps on an intelligence staircase. Where's AI? 50 STEPS ABOVE HUMANS. This is way beyond comprehension. AI that smart should have no problem killing off the human race and owning all infrastructure, and hey, some claim that it could own every atom in the world. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952. What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim.

Sorry, but my opponents so far have been two forfeits, one idiot who referred to a sci-fi book, and a troll. Not great opponents.
WrickItRalph

Con

It still doesn't compare. There are still too many variables to compare them. Furthermore, it's not really relevant to the claim. It's already known that human extinction is inevitable, but there's no evidence that it will come from AI.

You said:
"Yes, programs do what they're programmed to do and they become obsessed with their goal"

They don't become obsessed, because they don't have preferences. They perform the goal, and they do it exactly the way they're instructed. If you program it to learn, it learns exactly how it was instructed to learn. That means it can never learn anything that the programmers don't want it to. There is no way around this. The only way you can make this argument is if you say that humans will do it on purpose. You're not saying this, because you know that puts your argument dead in the water.
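Con's distinction, that a program which "learns" only fits the parameters its programmer chose, can be illustrated with a toy example (the task and the numbers are invented for the sketch):

```python
# Toy illustration: this program "learns" a value from data, but only
# the single parameter its programmer told it to fit. The objective --
# find a length threshold separating spam from ham -- is fixed in the
# code; the program has no mechanism for choosing a different goal.

def learn_threshold(spam_lengths, ham_lengths):
    # The learned value is just the midpoint between the shortest spam
    # and the longest ham, a rule the programmer wrote in advance.
    return (min(spam_lengths) + max(ham_lengths)) / 2

threshold = learn_threshold(spam_lengths=[120, 150], ham_lengths=[20, 40])
print(threshold)  # 80.0
```

The output depends on the data, but what gets learned, and how, was decided entirely by the programmer.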

You said:
"You're missing what AI WILL BE. AI is self-learning"

Only because you're defining your version of AI like that. You're arguing for a science-fiction version of AI. That means you have no evidence, because you're arguing for something that hasn't been invented. If you even try to use real AI in your examples, then you're arguing for something that can't self-learn. So all that means is that your fictional AI is unsubstantiated, and you can't even explain in technical terms how a computer could self-learn, because no one can. Even the greatest programmers in the world can't figure out how to make a robot learn things without telling it what to learn.

You said:
"Getting rid of spam is getting rid of humans. "

Nope, not even a little bit. Getting rid of spam is an extremely long and specific list of commands that only works on the specific buttons of the specific computers of the specific email providers with the specific format of emails that are designated. None of these commands can make the computer do anything except delete emails. The only way this would work is if deleting an email somehow accidentally killed a human.

You said:
"The purpose of AI is to imitate humans"

Imitate being the key word here. Something can only imitate if it learns from something else.

You said:
"Ok, VPNs and firewalls can't stop an insanely smart supercomputer. Here's how smart AI will be."

You mean the imaginary AIs? It doesn't matter how "smart" it is. It has no way of knowing the encryptions. It would have to be psychic. So what's it gonna be? Are you going to add "psychic" to the list of fictional things your AI can do?

To your statement about IQ: it is not possible to assign an IQ to an AI. This number only works for humans, because the test applies to a human's intuition and abductive reasoning. These are two things that AI cannot have, because it is restricted to commands. A computer can only be measured by how well it does its objectives, and that's it. There is no comparison here. Humans are infinitely more intelligent than computers. The only thing computers do is process data faster, and they're good at it. But humans can do anything they want with the data, and the AI can't.

Well, that's understandable. I guess it's not a frequently debated topic. Ironically, this is my second debate on the topic. The other guy didn't call it AI, though. He called it "the singularity."
Debate Round No. 3

Pro

There is plenty of evidence that AI will kill humans. It's still comparable regardless of the variables. What variables are you talking about? It has stayed consistent.

No matter how smart the AI is, the goal is the same. But as AIs develop, their desire to fulfill their goal grows. They understand subgoals and more options.

Did you read my example? In a game called MarI/O, the AI is programmed to use the buttons and complete Mario ON ITS OWN. It gets good, then really good, then inhumanly good. That was made by one guy. Imagine 30 years down the line and full computer teams.

Definition: In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.

Do humans self-learn? Yes. When ASI is created, will it self-learn? Yes.

How is it science fiction?

Yes, AI is challenging, but how challenging? You see, AI is currently at the brain power of an ant. Many people like to look at AI in the future and compare it to today, but that's not right. AI will develop faster and faster and faster. This is because of the Law of Accelerating Returns, which states that as societies develop, the rate of development increases. In 1500, the rate of societal progress was not the same as in 1750. Jump forward the same time period to 2019, and the rate of development is WAY different again. Now we can apply the same process to AI. Another example, in computing: in 1940, a computer was able to do 1 calculation per second; in 1960, 2 thousand; in 1980, 2 million; in 2000, 8.80 to the tenth power. All of this, compared to the human brain, is a puddle next to Lake Michigan. And now the lake is halfway full.

So for AI, it would look like this: AI is currently at the brain power of an ant. Now, if we use an intelligence staircase, an ant is 4 steps below a human and 2 behind a chimp. By 2025, it is at a human brain, and it is accelerating faster than ever. 10 years later, it is WAY smarter than anyone can comprehend and is 20 steps higher on the intelligence staircase. We can't comprehend how smart it is.

So when we hit human level, getting to ASI shouldn't be a problem. But how do we hit human level in a way that's accessible? Plagiarize the brain. The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently, and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is it tries to do a task, say handwriting recognition, and at first its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways, and the machine has become optimized for the task. The brain learns a bit like this, but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
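The trial-and-feedback loop described above is, in spirit, the classic perceptron update rule: strengthen the connections that produced a right answer, weaken those that produced a wrong one. A minimal single-neuron sketch, learning the logical AND function (chosen here just as a small, separable demonstration task):

```python
# Minimal single-"neuron" sketch of the strengthen/weaken loop above.
# The neuron guesses, gets told right or wrong, and its connection
# weights are nudged accordingly (the perceptron learning rule).

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, initially "an infant brain"
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            guess = 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0
            error = target - guess        # the feedback signal
            w[0] += lr * error * x[0]     # strengthen or weaken pathways
            w[1] += lr * error * x[1]
            bias += lr * error
    return w, bias

# Learn logical AND purely from examples and feedback.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1]
```

Note that even here, the feedback signal (which answers count as "right") is supplied from outside the network by whoever set up the training.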

AI will learn that the best way to delete spam would be to delete humans. Any task given to AI is a risk. Elon Musk said this himself.

Yes, we plagiarize the brain, but the result will be a greater-than-human brain.

It is very possible to assign an IQ to AI. When we look at its development adjusted by the Law of Accelerating Returns, AI will get super smart. AI isn't just restricted to commands; it can understand the world, self-learn, and be very smart.

Good debate!
WrickItRalph

Con

There is no evidence. You're just asserting there's evidence. Give me one example of a line of code that could do this. Just one. I'll bet a million dollars that the code you present would have to be specifically programmed to do it.

The AI doesn't develop. It just does what you tell it, and it doesn't try to do it better or worse. If it learns, it's because learning was one of its goals. But the programmer has to tell it exactly what to learn, so it's not REALLY learning. That's why it's called Artificial Intelligence and not just Intelligence.

That example isn't really that impressive. The programmer had to give it all of the tools to get good at Mario. The program would never be able to kill people or do anything other than play Mario. There is also something called a learning gradient: after a while, the AI can't get any better and just becomes a regular computer program. That means there is a limit to how much intelligence an AI can have. Now, you could theoretically have it "learn forever," but you can only do this if you program it to unlearn things during the process to create a circular program, which would create nothing more than a manic-depressive robot (AI humor).

You can't just say that it will self learn without supporting it with evidence.

It's science fiction because in real life, Programs don't learn things.

You can't prove it's at the brain power of an ant, because you have no way to know the limits of AI. For all we know, AI has already met its limit.

Accelerating returns does not account for knowledge ceilings. We don't have the knowledge to go any further with AI. In fact, our ability to control AI so well suggests that a computer would never become advanced enough to break its code.

In summation, I feel the evidence doesn't live up to the claim. What you've suggested is something that cannot even be abstracted in a lab, let alone tested. Honestly, I was expecting you to argue from the standpoint of humans doing it intentionally, because that's the only situation that can really meet the burden of proof in this case.

Good Debate.
Debate Round No. 4
57 comments have been posted on this debate. Showing 1 through 10 records.
Posted by WrickItRalph 3 years ago
WrickItRalph
@melcharaz.

Well, if you have a way to make an algorithm give rise to learning, please share. Please explain how a set of numbers that is only designed to give specific commands goes from doing the only thing it can do to allowing free thought. This is the part where people mess up: they don't actually take the time to consider what actually has to happen to make this scenario real.
Posted by melcharaz 3 years ago
melcharaz
Hmmm, it would have to depend on how the algorithm works and what functions and inputs it follows. I believe that robots could be designed to have a logic function in this way: spam causes inconvenience and potential detriment to human life. Human behavior causes inconvenience and potential detriment to the planet. The planet includes robots. Contain or destroy humans to preserve Earth.

It also depends on what sort of information it has access to and how it's programmed to interpret such data.
Posted by WrickItRalph 3 years ago
WrickItRalph
You said:
"ASI could allow us to conquer our mortality. "

Not in your wildest dreams. The Matrix isn't real, and you don't get immortalized just because you get outlived by a fancy computer program. We're flesh bodies, and carbon doesn't stick together forever, as history has shown, so there will be no conquering of mortality unless you figure out how to fight entropy (good luck with that).
Posted by WrickItRalph 3 years ago
WrickItRalph
So many comments, Yet so little said.

At least you started producing some data, finally. That's an improvement; you still fell short, though.

You said:
"Unlike the human brain, Computer software can receive updates and fixes and can be easily experimented on. "

I've already explained why computers can't actually do this, but I'm going to focus on the first part, where you said "unlike the human brain." The human brain actually does "update" itself. You should research how memory works and you'll see what I mean. As for fixing, an AI with a "broken brain" will most likely lose basic motor function (pun intended) and will not be able to fix itself, so it will need a robot doctor, which is just what humans do. So where's the superiority? You presented arguments that robots can match or exceed human sensory and data-processing abilities, but you did not get to a proper intelligence. Fast and accurate isn't good enough. There needs to be intuition, abduction, nuance, particularism: things an AI can't handle.
Posted by DeletedUser 3 years ago
DeletedUser
There, my little rant is over. Please go read this:

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Posted by DeletedUser 3 years ago
DeletedUser
ASI could allow us to conquer our mortality.
Posted by DeletedUser 3 years ago
DeletedUser
"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state, a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e., if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam, and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
Posted by DeletedUser 3 years ago
DeletedUser
Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian economics. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim, and this might happen in the next few decades.

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time. Everything we consider magic, every power we imagine a supreme God to have, will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth: all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we're concerned, if an ASI comes into being, there is now an omnipotent God on Earth.
Posted by DeletedUser 3 years ago
DeletedUser
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Posted by DeletedUser 3 years ago
DeletedUser
ANI: the AI of today
AGI: AI with human-level intelligence
ASI: superintelligence
1 vote has been placed for this debate.
Vote Placed by 9spaceking 3 years ago
9spaceking
Anonymous vs. WrickItRalph (tied unless noted):
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Tied (1 point)
Had better spelling and grammar: Tied (1 point)
Made more convincing arguments: WrickItRalph (3 points)
Used the most reliable sources: Tied (2 points)
Total points awarded: Anonymous 0, WrickItRalph 3
Reasons for voting decision: Pro had the BoP, and Con convinced me that Pro didn't manage to link learning Mario to killing people.
