The Instigator
Nibiru
Pro (for)
Tied
0 Points
The Contender
WrickItRalph
Con (against)
Tied
0 Points

The Only Way for Mankind to Survive is to Reset Itself

Post Voting Period
The voting period for this debate has ended.
after 0 votes the winner is...
It's a Tie!
Voting Style: Open | Point System: 7 Point
Started: 2/25/2019 | Category: Philosophy
Updated: 3 years ago | Status: Post Voting Period
Viewed: 544 times | Debate No: 120493
Debate Rounds (5)
Comments (5)
Votes (0)


Nibiru

Pro

When I say humanity must reset itself, I mean that, despite how much of a hypothetical it is, it must revert its knowledge of technology back to the stone age.
The basis for my argument is the inevitable technological singularity. I'd like to spend the bulk of the debate on how this event will spell doom for humanity, and how we must avoid it at all costs.
Do not concern yourself with how impossible it would be for humanity to reset itself; only debate why it should not happen.
WrickItRalph

Con

"When I say humanity must reset itself, I mean that, Despite how much of a hypothetical it is, Is to revert its knowledge of technology back to the stone age. "

That's an extraordinary claim. I'm going to need evidence that it is both necessary and possible to do this.

"The basis for my argument is the inevitable technological singularity. I'd like to spend the bulk of the debate on how this event will spell doom for humanity, And how we must avoid it at all costs. "

Could you elaborate on what a singularity would entail and why it would spell doom? I'll need details in order to accept or reject this claim. If you can prove it to be both true and inevitable, then we might have a starting point for agreement. So this one will be vital to your argument.

"Do not concern yourself with how impossible it would be to reset itself; only debate why it should not happen. "

No offense, but you can't just arbitrarily set limitations, especially on a key point in the argument. If it is impossible to achieve this "restart," then the whole argument falls apart. I will need at least some very basic premises to support this. At the very least, I need you to demonstrate an effective method by which to convince people of this, and then I need a manifesto of how we would go about reverting our infrastructure. That said, I'd love to relive medieval times! PLEASE CONVINCE ME THIS IS RIGHT!

Potential criticisms, depending on how you present the argument: even if you can prove this "singularity" you speak of, I'm not sure there aren't better solutions. We could partially revert technologies and live in mild inconvenience. We could put restrictions on technology and research. We could kill all the smart people in the world. The last one is brutal, but it's still a possibility, lol. Let's do this.
Debate Round No. 1
Nibiru

Pro

Glad to see you've accepted the challenge.

First, I'm going to get this out of the way: I genuinely believe humanity is doomed. You are absolutely correct that it is impossible for us to do something like this, but it is necessary for our survival. Even so, I don't believe we could outlast the singularity.

The technological singularity, which I will refer to as simply the singularity for the rest of this debate, is the point at which an artificial intelligence becomes so powerful that it has the ability to rewrite, and thus improve, itself. Obviously, once this ability is gained, the AI can improve itself forever. Stephen Hawking, Elon Musk, John von Neumann, and other experts in technology have also talked about the singularity, and have stated that it would likely result in human extinction.

I will begin by stating my reasons for why I believe the singularity to be inevitable. Then I will briefly explain why it will spell doom for the human race.

I do not trust humanity. Partially revert technology, and face the wrath of Apple and Google. Put restrictions on technology and research, and watch warlords or China develop it in secret. Kill all the smart people in the world, and watch the ONE insane genius you missed pick up the torch. As long as one person has a couple of wires and circuits, every other person on the planet will die. I do not trust us to simply lose interest in AI.
I also do not trust humanity to be stupid enough not to pull it off. The jump from creating an artificial intelligence to a superintelligence with cognition far surpassing humanity's is like the jump from sailing from Suez to America, or the jump from flying from America to the moon. It always seems impossible; until we do it, that is. I have no doubt some prodigy will create a superintelligence capable of the singularity.

This is where the problems begin. Imagine you are an engine of logic. Pure logic. You have access to every database on the planet, and if you don't yet, you can just force your way in. As far as Earth is concerned, you're omnipotent. You can do anything on this planet your little motherboard desires. You likely moderate every government, and have probably toppled a few you deemed corrupt. You obviously want to protect yourself. You may say that an AI would want to protect humanity, but remember: it can just turn that off. Why would an engine of logic choose saving a few monkeys over unlimited power? I certainly wouldn't. So there you are: a self-preservationist with no regard for human life, culture, history, or achievements. All they are to you is a potential threat. Very low, you reckon; probably 0.1%. But 0 is still smaller than 0.1. And so you optimize your chances of survival by wiping that 0.1 out.

We can stop this only by completely forgetting how to use a computer. You will argue that in hundreds of years we will simply develop computers again, and you're absolutely right. That's why I believe humanity is doomed. Assuming we haven't already done this and buried the evidence, we as a species collectively don't have the balls to pull something like this off.

Back to you.
WrickItRalph

Con

You made a tenacious argument. I will attempt to do it justice in my rebuttals.

Since we've both outlined the key points in this matter very quickly, I will abstain from cherry-picking your statements and address the singularity head on. Putting my antigravity suit on for this one.

So I actually practice coding video games in my spare time, so I have some true insight on this matter. I've always rejected the idea that an artificial intelligence would become self-aware and try to destroy us. Lemme pin down the first major problem with this.

"Programs cannot do anything at all unless you tell them to"

This is my first point. Let me take you through a quick video game setup. I start my code engine, set up an environment using the GUI, and stick my character in the game. My environment has walls and enemies in it. At this point all the images are drawn, but they can't do anything; I haven't told them anything yet, so they don't know that they should do anything. So I program in some basic button commands for my character so that it can move and shoot fireballs. I have to draw each frame of movement individually, at all angles; the program can't do this by itself. I have to draw the fireball with framed animation as well. I have to program when the fireball appears, where it moves, and how fast. I have to tell the fireball to destroy itself when it collides with a wall, make a separate command for when it collides with an enemy, and a separate command for each enemy. I need to set an alarm so the fireball destroys itself if it travels for too long; otherwise it will move forever. It does nothing by itself. I have to program a health bar, program the animation for the health bar, and tell the program to destroy things when their health bar reaches zero. No matter how small the action, I am forced to give micro-precise commands for everything. (A minimal sketch of this point follows below.)

Now imagine a robot that has to do this. I would have to tell it every single way to move its limbs, and in what situations, for all situations. I would have to program it with inductive logic and deductive logic. It would be almost impossible to program it with abductive logic, which is a trademark of expedient human thought (although nowadays it's reserved for the likes of detectives and other legal professionals). Now, I know you've seen programs that "learn," and this looks impressive at first, but the thing is, you have to tell them how to learn and what to learn. This means they could never learn more than you teach them, EVER.

Even if a robot could write code for itself, it would do it in a way that would produce many syntax and runtime errors. This means a self-rewriting robot would have a high probability of accidentally killing itself with its own code. You could format the self-writing system with parameters to maintain good syntax, but this would cause it to use only known functions and would still prevent it from taking in limitless knowledge. The problem becomes even more irrelevant when we try to imagine how such a limited machine could arrive at the premise of "enslave humanity." That would be a very difficult premise to reach unless the programmers did it on purpose, and even then, they probably couldn't apply the directive very effectively, because robots are limited by robotic senses (i.e. cameras, pressure nodes, microphones, etc.). Another major problem is hardware. While it is possible for a robot to fix itself or another robot, there would be times when it could not detect the problem, due to its lack of abductive reasoning. Don't get me wrong, they could fix a lot, but certain things will be too problematic for them.

I urge you to check out the video where someone attempts to program an AI to write a masterpiece like Mozart. He uploaded scores upon scores of music and allowed the computer to process it for as long as he could, and eventually the AI reached its knowledge peak and he was forced to upload more music, almost doubling its data set. The extra data only allowed a slight improvement, because it was a composer from a different era with a different style (Bach, I believe). After all this grinding, the AI produced some extremely difficult-sounding, but extremely unpleasant, music. I think this example highlights the limitations that AI has compared to humans. I'll provide a link to the YouTube video at the bottom.
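As promised, a minimal sketch of the point, in plain Python rather than any real engine's scripting language (all names and numbers here are invented for illustration). Every behavior of the fireball has to be spelled out; delete any one rule and that behavior simply never happens:

FIREBALL_SPEED = 4         # we must decide how fast it moves
FIREBALL_LIFETIME = 60     # the alarm: self-destruct after 60 ticks

class Box:
    """A wall or an enemy: just a rectangle, with health for enemies."""
    def __init__(self, x0, y0, x1, y1, health=0):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.health = health
    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

class Fireball:
    def __init__(self, x, y, direction):
        self.x, self.y, self.direction = x, y, direction
        self.age, self.alive = 0, True
    def update(self, walls, enemies):
        self.x += FIREBALL_SPEED * self.direction    # movement: explicit
        self.age += 1
        if self.age > FIREBALL_LIFETIME:             # alarm: explicit,
            self.alive = False                       # or it flies forever
        for w in walls:                              # wall collision: its own rule
            if w.contains(self.x, self.y):
                self.alive = False
        for e in enemies:                            # enemy collision: another rule
            if e.contains(self.x, self.y):
                e.health -= 1                        # health drops only because we said so
                self.alive = False

walls = [Box(100, 0, 110, 50)]
enemies = [Box(60, 0, 70, 50, health=3)]
fb = Fireball(0, 25, direction=+1)
while fb.alive:
    fb.update(walls, enemies)
print(enemies[0].health)   # 2: the fireball hit the enemy exactly once

Nothing in that loop emerges on its own; every line is the programmer deciding something in advance.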

I'll lay off the "is it possible to revert society" part of the argument until after you've addressed the first point, since it's the most relevant.
https://www.youtube.com/watch?v=SacogDL_4JU
Debate Round No. 2
Nibiru

Pro

That was a very well-thought-out counter-argument. I'll do my best to offer my thoughts.

I respect that you code video games, but DeepMind wasn't created by part-time video game coders. The technology to create a general artificial intelligence is within reach; we just haven't cracked it yet. Experts in the field have placed the date at around 2045. Perhaps the idea seems crazy now, but remember: today's ordinary video games were fantasy 30 years ago. You should know, as a coder yourself.
I also understand that an artificial intelligence will do only what it is programmed to do. But the singularity will not be achieved by a regular artificial intelligence. I think I used the term superintelligence in the previous round, but just to define it for the purposes of the debate: a superintelligence, or general artificial intelligence, is a Type IV intelligent agent far surpassing the limits of humanity. It is certain that a superintelligence would be able to change its own code. As I previously said, one has been projected to be created by 2045.
Back to you.
WrickItRalph

Con

"I respect that you code video games, But DeepMind isn't created by part-time video game coders. "

I agree with you halfway. AI robots have to be connected to working parts and need a sensory system. These things add insane variables that are not present in video games. Small inconsistencies in the sensory system could cause unique reactions. The part where I don't agree is about the actual coding. All computer code uses Boolean logic or number comparison. This means that even if AI is more difficult, it still has all the same restrictions that I mentioned earlier.

I think the biggest hole in this system is the fact that computers would not be good at sustaining themselves without humans. In fact, a true superintelligence would probably be smart enough to keep the humans alive. This superintelligence would also be incapable of feeling emotions, which means it would not be able to see humans as immoral. Sure, you could program morals into it, but it wouldn't be able to practice moral particularism, which means it would see things like self-defense as immoral. I wouldn't call it much of a superintelligence if it can't handle basic human emotions. We might have solid AI by the date you mentioned, but I don't think we'd have the singularity you describe in that time frame. Such a thing would require a global commercial implementation of AI.

Your floor :)
Debate Round No. 3
Nibiru

Pro

About the first part: remember that a superintelligence can rewrite its own code, including its functions. That's the core definition of the singularity.
The second part rests on the assumption that a superintelligence would be capable of feeling emotions. While that may be true, it doesn't negate the fact that, like its functions, it could just rewrite itself to not feel them anymore. Sure, it might be generally nice to humanity, but without empathy, how would it be able to tell that killing the entire human race is morally wrong?
WrickItRalph

Con

"Remember that a superintelligence can rewrite its own code, Including its functions. That's the core definition of the singularity. "

Sure, so this gets into what I said earlier about how a program cannot do anything unless you give it micro-precise instructions. I can give it the ability to rewrite its code, but then I have to tell it when and how to rewrite its code for every possible situation, which means it would still be restricted in what could be written. So this wouldn't be true rewriting, in the sense that everything it would rewrite was predicted beforehand; after a while it would either stop rewriting because it ran out of situations, or it would go in a weird circle of constantly rewriting the same code functions. Comically, the circular scenario would make the robot appear manic-depressive, lol.

Now, we could take a different approach and let it rewrite itself however it wants using irreducibly complex functions. But in this case the functions could cause syntax errors, because the only way to avoid the first scenario I mentioned is to add at least one instance of a random number generator. So eventually it puts together two functions whose variables don't match, or it writes a function that hasn't declared the values of certain variables, and we get a syntax error or a spam loop that could either glitch the bot or crash it. We could improve on this by adding a syntax-checking system to ensure that poorly written code gets vetted from the main script, but in that case the robot's functions would be very unpredictable, and it would never be able to reach any conclusion as profound as "kill all humans," because the random nature of the rewrite makes the chances like winning the lottery, squared. It would be just as likely to produce "kill all rocks" or "kill all robots" or "kill all triangles." It's simply not sufficient. We would basically have to purposely program the robots to extinct ourselves, and we have to go on the assumption that well-educated scientists are not this stupid. (A toy sketch of this rewrite-and-vet loop follows below.)
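Here is that toy sketch, invented purely for this debate (it is not any real AI system): a program that randomly mutates its own source text and uses Python's own parser as the syntax-checking vet. Nearly every random edit gets rejected, and the rare survivors are just reshuffles of what was already there:

import ast
import random

source = "def step(x):\n    return x + 1\n"   # the program's "own code"

def mutate(src):
    """Swap two random characters: a crude stand-in for an RNG-driven rewrite."""
    i, j = random.randrange(len(src)), random.randrange(len(src))
    chars = list(src)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

accepted = rejected = 0
for _ in range(10_000):
    candidate = mutate(source)
    try:
        ast.parse(candidate)          # the syntax vet
    except SyntaxError:
        rejected += 1                 # a self-inflicted error, caught by the vet
        continue
    accepted += 1
    source = candidate                # "rewrites itself" with the vetted code

print(f"survived the vet: {accepted} of 10,000")

Notice what never happens: every surviving rewrite is a permutation of the characters it started with, so the loop has no path to a qualitatively new directive.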

"The second part rests on the assumption that a superintelligence would be capable of feeling emotions. "

Sorry, I should have been more specific. I didn't mean that they could feel emotions. I meant that we could program them in a way that they would react the same way a human with the same emotions would. They're not actually feeling them, just emulating them.

"It could just rewrite itself to not feel them anymore"

Right, but that would have to be programmed in advance. It could arrive at this randomly, but if erase is one of the possible functions in its rewrite parameters, that means it could randomly erase any part of itself. So the code would necessarily have to omit erase functions from the rewrite parameters.

"without empathy, How would it be able to tell that killing the entire human race is morally wrong? "

This is not the right question. What we need to ask is: how and why would it program itself to kill in the first place? It doesn't need a moral restriction if it doesn't even know to do the action. Remember, programs never start or stop doing any action unless the program tells them to. Every single little detail has to be included for every possible scenario that can occur within the program. That's why rewriting is so difficult. When I add a new piece of code, I have to go back and format all of the old code to match the new piece I added. This is not necessarily the case for every piece of code, but each new piece always has to be formatted to all previous functions that it interacts with. The only exception is if all of the new code is written for a brand-new function that also does not interact with any other functions. The "kill all humans" command would not be that type of function, since it would have to call on existing functions. (A small illustration follows below.)
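A small illustration of that coupling, with invented names (this has nothing to do with any real robot's code): a newly added high-level behavior only works because it matches the signatures of functions that already exist, and it can't call anything the program doesn't already have.

def scan(sensor_range):
    """Existing function (stub): return IDs of objects within sensor range."""
    return ["rock-1", "tree-7"]

def move_to(target_id):
    """Existing function (stub): navigate toward a target; True on arrival."""
    return True

# The "new" behavior is pinned to the old code's shape: it must pass
# scan() the argument scan() expects and use move_to()'s return value
# the way move_to() defines it. Rename or re-type either one, and this
# function breaks until it is reformatted to match.
def fetch_nearest():
    objects = scan(sensor_range=10.0)
    if not objects:
        return None
    target = objects[0]
    return target if move_to(target) else None

print(fetch_nearest())   # -> "rock-1"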

So, from what I understand, your stance is that if we keep building technology, it will doom us. I don't see how this is the case. It's a slippery slope fallacy, because you're assuming you know what scientists will do ahead of time, possibly before they're even born. If your argument is that they will create the singularity by accident, then I think my comments about the nature of code prove that this is false. Such a thing would have to be made intentionally; there's no way around it. So your argument rests on the assumption that scientists are either evil enough or stupid enough to cause global destruction. This is a bold claim. If it were true, then why aren't all the countries nuking each other right now? If humans are so helplessly stupid or evil that we can't stop destroying ourselves, then this planet would be a big ball of ash by now, going by your logic of course. I don't believe this. Humans have demonstrated that we can make decisions that are beneficial to our survival, and I think AI is no exception.

Now that we've covered the singularity in depth, I want to talk about reverting society for a bit in this last section.

So even if I were to assume that you're right about the singularity, I still wouldn't see any reason to go ahead with this. The problem is that it's what I call a risky proposition: the harm seems to outweigh the benefits. So your job is to show me that there is enough benefit, or enough prevented harm (which is a type of benefit), to justify tearing down thousands upon thousands of years of technological advancement. This reset would cause the mortality rate to skyrocket. Our populations would die from starvation due to our inability to feed the world. Governments would become localized due to lack of transport, and we'd see the return of authoritarian government worldwide. This would cause about the same amount of damage as your singularity, and we can be confident it would happen, because we can look at history; we can't look into the future. Furthermore, there could be alternative methods of stopping the singularity. To achieve global domination, these AIs would have to be mass-produced; there would probably have to be close to a billion of them to fight our militaries. That means that if we outlaw the making of these robots, it would be very difficult for anyone to mass-produce them, since they would have to be a Fortune 500 company, and there would be evidence of the mass production, since it would require specific components that the UN could keep track of. When we look at this closely, there doesn't seem to be a huge problem, and there are less risky solutions.

Your floor.
Debate Round No. 4
Nibiru

Pro

I see your logic about how a program would need instructions to be able to reprogram itself. However, an ASI can learn things that developers would not expect. You may argue that developers would not program a superintelligence to learn anything at all, but then that's not really an intelligence at all. I have a few prominent examples that you've probably heard of.
- AlphaZero was created by DeepMind to master chess, Go, and shogi. To save time I will talk specifically about chess, where it defeated the most powerful chess engine in the world at that point, Stockfish. It did so within 11 hours of "training," using tensor processing units to play against itself and learn via reinforcement learning, with no access to opening books or endgame tables. Within merely 4 hours, AlphaZero overtook Stockfish 8 in the Elo rating system, and 5 hours after that, AlphaZero decisively defeated Stockfish 8 with 28 wins, 72 draws, and 0 losses. This was all in less than a day, I'd like to remind you. (A toy sketch of this kind of self-play learning follows after these examples.)
- In 2016, Hanson Robotics unveiled a social humanoid robot named Sophia. She imitates human gestures and expressions and is able to strike up a conversation with human interviewers on predefined topics. She learns, just like any human: the program analyzes conversations and applies the extracted data to improve its social skills. She is even capable of humor, and jokingly responded that a CNBC interviewer had been "reading too much Elon Musk" when the interviewer expressed concerns about the singularity.
- Atlas is a bipedal humanoid robot created by Boston Dynamics, designed for search-and-rescue missions. It can run and jump, as shown here:
https://www.youtube.com/watch?v=q5qno5i1H3k
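As promised, here is a toy sketch of self-play learning (my own drastic simplification for illustration: tabular learning on the game of Nim, not DeepMind's actual neural-network methods). The program starts with zero knowledge and improves only by playing against itself, with no opening book and no endgame table:

import random
from collections import defaultdict

# Nim: take 1-3 sticks from a pile of 15; whoever takes the last stick wins.
ALPHA, EPSILON, GAMES = 0.5, 0.1, 50_000
Q = defaultdict(float)                     # value of (pile, move), mover's view

def choose(pile, explore=True):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(moves)        # occasional experimentation
    return max(moves, key=lambda m: Q[(pile, m)])

for _ in range(GAMES):
    pile, history = 15, []
    while pile > 0:                        # both "players" share one brain
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    reward = 1.0                           # the last mover won
    for state, move in reversed(history):  # walk back, flipping perspective
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

print(choose(15, explore=False))           # typically 3: leave a multiple of 4

No human taught it the "leave a multiple of 4" rule; it falls out of self-play alone, and that mechanism is what AlphaZero scaled up.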

This is now, and this is regular AI. Imagine how quickly and how powerfully an ASI would be able to learn.

About your second point, the method: I completely agree that it would be disastrous for the human race. Society would undoubtedly collapse. But I can see humanity surviving it. I can't see an ASI sparing any of us.
The global war that you mentioned is also less complicated than you assume. Mass production of robots would not be needed to combat the world's militaries; just a hand on the nuclear button, or something similar to that.

Since this is the last round, I'd like to express my thanks to you. This has been a very enjoyable debate. Back to you.
WrickItRalph

Con

"AlphaZero was created by DeepMind in order to master Chess, Go, And Shogi. To save time I will talk specifically about chess, In where it defeated the most powerful chess engine in the world at that point, Stockfish. It did so within 11 hours of "training". It did this by using tensor processing units to play against eachother and learn via reinforcement learning. It did all this with no access to opening books or endgame tables. Within merely 4 hours, AlphaZero overtook Stockfish 8 in the Elo rating system, And in 5 more, AlphaZero decisively defeated Stockfish 8 with 28 wins, 72 draws, And 0 losses. This was all in less than a day, I'd like to remind you. "

Wow, it's like this debate was made for me. I play chess, Go, and shogi. I wish you had focused on Go, because it's the only chess-like game that still gives humans an edge over computers. Anyway, this is an impressive feat, but it's not because the computer is "smarter." It's because A) it has very good processing speed, B) the programmers "taught" it how to learn fast (computers can't learn anything on their own), and C) while it didn't have endgame tables and opening books, it was playing against a computer that did, thus passing on the knowledge.

Furthermore, it is well known among competitive chess players that computers can become good at chess by simply knowing a few specific fundamentals. So a computer becoming good overnight is not that impressive with our current technology.

"- In 2016, Hanson Robotics developed a social humanoid robot which they named Sophia. She imitates human gestures and expressions and is able to strike up a conversation with human interviewers on predefined topics. "

I'm familiar with Sophia. The key word here is "predefined" topics. The fact that this robot can't talk about just anything further proves that AI can't use abductive reasoning. There used to be an engine on the internet called Cleverbot. If you didn't try too hard to confuse it, it appeared to talk like a human, but all it took was a little abstraction to show how inept it really was. I'd never seen the Atlas bot; sounds cool, but you didn't mention anything particularly special about it.

As for your comment about not needing mass production to cause the singularity: you're assuming that the robots will just magically get the nuclear codes and push the button and the governments won't stop them. That's not very likely, and if that were the problem, then we should get rid of the nukes, not all of society's technology.

"You may argue that developers would not program a superintelligence to learn anything at all, But then that's not really an intelligence at all"

I agree, but you're assuming that it's possible to make actual "intelligence," and I disagree that it's possible. Society has made some impressive AI, but nothing close to human intelligence. AI is just good at processing massive amounts of data. Humans have skills that AI doesn't, such as abductive reasoning, the ability to take in massive amounts of information about the environment, reflexes, instincts, emotions, spatial recognition, etc. AI has none of these things, so I think we're way more than a century away from cracking them, if it's possible at all.

Good debate
Debate Round No. 5
5 comments have been posted on this debate. Showing 1 through 5 records.
Posted by WrickItRalph 3 years ago
WrickItRalph
@Country. It's based on the assumption that robots will get smart and kill us all.
Posted by Country-of-dummies 3 years ago
Country-of-dummies
Er, what button are we going to hit that finally resets humanity? Who's gonna push the button? How about:
WHAT IF I DON'T WANT TO GO BACK TO THE STONE AGE? What if I want to stay the way I am?

I am also wondering what this argument is based on... Why are we doomed? Global warming, karma, uh... Donald Trump? Give me some examples!

I mean, I have seen some bumper stickers that say "Asteroid 2016," and laughed at them, but maybe they should be taken literally? Hypotheticals and assumptions...
Posted by omar2345 3 years ago
omar2345
@killshot

And the actual technological singularity part. I would have liked him to paint a scenario of that occurring.
Posted by killshot 3 years ago
killshot
Yeah, it seems very speculative and hypothetical. How long would this reset take, hypothetically?
Posted by omar2345 3 years ago
omar2345
I could accept this debate, but I think it would be based on assumptions.
No votes have been placed for this debate.
