Self Aware AI is possible
Debate Rounds (4):
1st round: acceptance
2nd round: opening argument, no rebuttals
3rd round: rebuttals only
4th round: closing statement
My basic position is that the AI many foresee, one just like a human or even better, cannot happen. It is hard to ask for much proof in this matter since it is speculative, so don't be too intimidated.
Please ask any questions in the comment section.
First point: Hardware.
The issue is that NOTHING can rival the computing power of the brain. Not even close. A CPU powerful enough would have to be the size of Ohio (where I live) and would need as much power as the whole country uses daily just to turn on. It is nowhere on the horizon. Furthermore, we have to understand what it means to be human. We learn and grow as individuals. Experiments on rats show that stimulation makes brains larger and more complex. MRI and PET scans also show that people who know two languages have a more complex temporal lobe (which deals with words). Our brains physically grow and more connections are made. Hard drives have a finite amount of space and do not grow with information. We could just hook up more hard drives, but the AI could never grow like we do. This same limit applies to everything in a computer. The CPU can only go so fast and does not get better with practice.
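To give a rough sense of the gap, here is a crude back-of-envelope sketch. All the figures below are widely cited ballpark estimates, not exact measurements, and the comparison itself is only illustrative:

```python
# Back-of-envelope comparison of brain vs. CPU throughput.
# These are rough, commonly quoted estimates -- not precise figures.
SYNAPSES = 1e14          # human brain: roughly 100 trillion synapses
FIRES_PER_SEC = 100      # generous average signaling rate per synapse
brain_ops = SYNAPSES * FIRES_PER_SEC   # ~1e16 "operations" per second

CPU_OPS = 1e11           # a fast desktop CPU: on the order of 100 GFLOPS

# By this crude measure, the brain outpaces a single CPU by ~100,000x.
print(brain_ops / CPU_OPS)  # prints 100000.0
```

The point is not the exact ratio, which depends heavily on how you define an "operation," but that it spans several orders of magnitude.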
Second point: Software.
My biggest issue is that most people do not know how computers think and are programmed. Things are done one line at a time. Our brain, however, is filled with connected cells that work together to reconstruct memories or think through a problem. The problem is learning. We learn as people and make connections between what we know and what we do not understand. We do have AI that learns. There is a robot that walks, and if it missteps, it corrects itself and learns not to fall again. We have made computers that become experts at games by losing and learning from it. Here is the problem: we have to program HOW it learns. We have to put in that falling is not wanted and that it should be avoided. We have to tell it that losing is bad. We learn these things from experience and from society; machines do not conform to society. Connections are also not inherent. If I tell you to connect water and "U.S.S.", you could connect U.S.S. with ships, and ships float on water. A computer could be programmed to make that connection, but we would have to program a connection for everything, which is not possible.
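The "we have to program HOW it learns" point can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real robot's code: the states, actions, and reward numbers are all made up, but they show that the machine only "learns" relative to a reward table a human wrote down first:

```python
# Minimal sketch: a machine "learns" only within a reward function a human wrote.
# Everything here (states, actions, rewards) is a hypothetical illustration.
import random

reward = {"stood": +1, "fell": -1}   # WE decide that falling is bad

value = {"careful_step": 0.0, "reckless_step": 0.0}

def try_step(action):
    # Hypothetical world: reckless steps usually end in a fall.
    p_fall = 0.8 if action == "reckless_step" else 0.1
    return "fell" if random.random() < p_fall else "stood"

random.seed(0)
for _ in range(1000):
    action = random.choice(list(value))
    outcome = try_step(action)
    # Simple incremental update toward the human-programmed reward.
    value[action] += 0.1 * (reward[outcome] - value[action])

# The careful step ends up scoring higher -- but only because the reward
# table encoded a human's judgment that falling is undesirable.
print(value["careful_step"] > value["reckless_step"])  # prints True
```

Swap the signs in `reward` and the same loop dutifully "learns" to fall; the machine has no opinion of its own about which outcome is wanted.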
Third point: Being human is more complex than we think.
Lastly, I would like to ask: what does it mean to be human? We have ambition, love, passion, and reasoning. I would argue that there is no true logic and everything is about perspective. Ambition and the idea of working hard are ideals of man. Love and passion for things are inherently illogical. I love to paint, but why do I do it? Because I like it. This AI could have likes and dislikes, but how would it obtain them? Would we write them in? Along with a whole personality? That is not a self-thinking or independent being; it is an imitation of what we can be. Lastly... purpose. What is it? Something different for everyone. We have many purposes, some short term, some long. How will this machine find purpose? It will be like a method with nothing in it. It will be a shell of humanity. Purpose is such a human thing, something that cannot be programmed.
That was a bit long-winded. I look forward to the opening statements of the pro side.
Why would it kill people? Why is that logical? Computers are pure logic, only using numbers to finish tasks. There is no reason a computer would want to kill humanity. Is it scared? We had to program fear. Is it evil? We had to program that. We can try to program it all, but then the result is not self-aware AI; it is just a program that spits out output for a certain input without processing anything.
While nothing currently CAN rival the power of the human brain, that is what the Internet would be for. We have the tech, just not the ability. Besides just killing, I meant that viruses can occur; the question is not whether we could but whether we should.
The whole "it takes years for us to learn to use our bodies" argument I don't quite like, because we aren't just learning during those years. Also, they have made working robots smaller than me (I am 5' tall), so if they made one taller and a little wider, everything could fit; it just needs enough processing power to get the job done.
While this is kind of funny, it shows how close we are to self-aware AI.