The Instigator
Pro (for)
0 Points
The Contender
Con (against)
0 Points

AI will kill us All!

Post Voting Period
The voting period for this debate has ended.
After 0 votes the winner is...
It's a Tie!
Voting Style: Open Point System: 7 Point
Started: 3/10/2019 Category: Technology
Updated: 3 years ago Status: Post Voting Period
Viewed: 579 times Debate No: 120721
Debate Rounds (4)
Comments (0)
Votes (0)



My Position is above.


I would like to first state that any AI that is created will need to follow the laws of robotics. I will be arguing that AI will not kill us all. Since I accepted this debate, I will allow my opponent to give their argument first.
Debate Round No. 1


There are laws of robotics in my favor too, including the Law of Accelerating Returns.

This law states that as societies develop, the rate of development itself increases. In 1500, society was not advancing at the same rate as in 1750. Jump forward the same span again, and by 2019 the rate of development is far greater still. The same process applies to AI. Another example, in computing: in 1940 a computer could do about 1 calculation per second; by 1960, around 2,000 calcs; by 1980, about 2 million; by 2000, roughly 8.80 to the tenth power. Compared to the human brain, all of this was once a puddle next to Lake Michigan. And now the lake is halfway full.
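The core of the accelerating-returns argument is exponential doubling outpacing intuition. A minimal sketch of that arithmetic (the numbers and the `doublings` helper are invented for illustration, not real benchmarks):

```python
# Toy illustration of exponential growth: a quantity that doubles
# every fixed interval quickly dwarfs any linear trend.

def doublings(start: float, periods: int) -> float:
    """Value after `periods` doublings, starting from `start`."""
    return start * (2 ** periods)

# Something doubling every 2 years grows 1024x in 20 years...
print(doublings(1, 10))   # 1024

# ...while linear growth at the initial rate would only reach 11.
print(1 + 10)             # 11
```

The gap between the two numbers is the "puddle to halfway-full lake" jump the paragraph describes.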

So for AI, it would look like this: AI is currently at the brain power of an ant. On an intelligence staircase, an ant is 4 steps below a human and 2 behind a chimp. By 2025 it reaches a human brain, and it is accelerating faster than ever. Ten years later it is 20 steps higher on the intelligence staircase, smarter than anyone can comprehend.

It goes more in-depth here: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Any task given to an AI is a risk. Let's say an AI was assigned to get rid of spam email. At first it just uses the delete button. Then a tripwire of intelligence is crossed, and it realizes that the way to get rid of spam is to get rid of humans. The AI hacks into infrastructure and builds a small army of robots. An AI unimaginably smarter than us easily kills off humans for good.
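The spam scenario above is a goal-misalignment argument: an optimizer that scores only its stated objective will prefer whatever plan maximizes it, however destructive. A toy sketch, with entirely invented plans and scores:

```python
# Hypothetical outcomes for three plans a spam-removal AI might consider.
plans = {
    "filter spam folders":        {"spam_remaining": 50, "humans_remaining": 100},
    "delete all email accounts":  {"spam_remaining": 1,  "humans_remaining": 100},
    "eliminate all email senders": {"spam_remaining": 0,  "humans_remaining": 0},
}

def naive_score(outcome):
    # Objective: minimize spam. Nothing here values humans at all.
    return -outcome["spam_remaining"]

def safe_score(outcome):
    # Adding a term for human welfare changes the chosen plan entirely.
    return -outcome["spam_remaining"] + 1000 * outcome["humans_remaining"]

naive_best = max(plans, key=lambda p: naive_score(plans[p]))
safe_best = max(plans, key=lambda p: safe_score(plans[p]))
print(naive_best)  # eliminate all email senders
print(safe_best)   # delete all email accounts
```

The point is not the specific numbers but that the "best" plan flips depending entirely on what the objective function counts.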

AI can be used in WW3 by many countries.


I understand the law of robotics you are talking about. The ones I was referring to are Asimov's Laws. The rules are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

There were a couple of articles I found, but surprisingly Wikipedia covered it the best:
https://en.m.wikipedia.org/wiki/Three_Laws_of_Robotics

Based on these three laws, an intelligent robot should not be able to hurt a human.
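The three laws above amount to a strict priority ordering, which can be sketched as a simple check. The `Action` fields here are hypothetical clean flags; real systems have no such labels, and that gap is exactly what the other side's argument turns on:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would acting injure a human?
    inaction_harms_human: bool  # would NOT acting let a human come to harm?
    ordered_by_human: bool      # was this action ordered by a human?
    protects_self: bool         # does this action preserve the robot?

def permitted(a: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    if a.harms_human:
        return False
    if a.inaction_harms_human:
        return True  # must act to prevent harm to a human
    # Second Law: obey human orders (First Law already satisfied).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation comes last.
    return a.protects_self

print(permitted(Action(False, False, True, False)))  # True: lawful order
print(permitted(Action(True, False, True, False)))   # False: order conflicts with First Law
```

Each law is only consulted once every higher-priority law is satisfied, which is what "except where such orders would conflict" encodes.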

I am also not denying that they could be used for destruction. But AI, in normal cases, should never be able to kill humans.

I also don't doubt that it might be possible for AI to become smarter than humans. It is entirely plausible. I just believe based on Asimov's Laws that it would be impossible for AI to kill us all.
Debate Round No. 2


First, the Laws of Robotics are science fiction, and yes, AI can definitely be smarter than humans. AI is not the same as robots, either. Also, you can't teach an AI to value or understand humans, because it is not human; companies have tried and failed.


I know Asimov's Laws are fictional, but if AI reaches the point of needing them, they should be put in place. As we make more and more technological advancements, we make restrictions. We did it with nuclear weapons, chemical gas, etc. When AI reaches the point of needing restrictions, we will have them. AI already exists in many forms. The situation you described would also be very, very unlikely, because as we make one innovation we make others. If a rogue AI tries to break into infrastructure to build a small army of robots, assuming it even can, we will also make innovations to prevent that, whether via cybersecurity, physical security, or something else. AI in its current state is just a program, and technically that is what it will always be: a program, which is breakable. Since it is a program, it can be counter-hacked and stopped.
Debate Round No. 3


AI can't be taught or restricted like nuclear weapons or chemical gas. Cybersecurity stopping a being that's trillions of times smarter than all humans combined is unlikely.


It could be unlikely, but whenever an advancement comes, we make a fail-safe. No one can know for sure what will happen, but we can get through it as humans. Once again, all AI is is a program. AI isn't necessarily a giant evil robot. And since AI is a program, it is possible to stop. If AI, or more likely ASI, becomes intelligent, it can and will develop compassion and ethics. AI isn't just a program that keeps learning; it is a program that develops. As it develops, it will begin to have "quirks". I put quotes on quirks to represent what an early AI would think emotions are: quirks. But as an AI develops and becomes self-aware, it will gain our "quirks", and instead of making decisions based purely on knowledge, it will make decisions based on knowledge but also on its ethics.
Debate Round No. 4
