The Instigator
clamorbox
Pro (for)
Losing
0 Points
The Contender
ObiWan
Con (against)
Winning
8 Points

Control ARTIFICIAL INTELLIGENCE PROGRESSION, don't accept HUMAN EXTINCTION

Post Voting Period
The voting period for this debate has ended.
after 2 votes the winner is...
ObiWan
Voting Style: Open
Point System: 7 Point
Started: 7/24/2013
Category: Politics
Updated: 4 years ago
Status: Post Voting Period
Viewed: 1,759 times
Debate No: 35962
Debate Rounds (3)
Comments (4)
Votes (2)

 

clamorbox

Pro

Scientists, human rights groups and even the UN are worried about the threat that AI poses to humanity in this AI explosion. I do not believe that we should accept the 'robo-sapien' as the next step in human evolution. Natural progression suggests change will come, but survival instincts suggest we should put controls in place in an attempt to keep progress in human hands. I am not usually one to support a nanny state or to restrain scientific progress, but this AI explosion will affect all our lives in the near future in devastating ways. I wrote this article because more people should be aware of, discuss and debate these issues. What are your opinions on controlling progression?

http://clamorbox.blogspot.co.uk...
ObiWan

Con

Thank you to clamorbox for what promises to be an interesting debate.
As all my opponent has done in their first round is loosely outline their stance, I will do the same.

Granted, there will always be reasons for scientific research to be monitored or, if necessary, controlled by an overseeing body. Some research may violate basic human or animal rights, such as the experiments rumoured to have been conducted by the Nazis, while there is also the risk that research such as that into the atomic bomb could be potentially dangerous to the citizens living around it.

However, in the case of artificial intelligence research I do not believe such 'handholding' is necessary. While the implications of this research may be frightening to some, it promises to play such a significant role in the next chapter of humanity's technological development that to have it controlled by a government, and influenced for that government's own selfish purposes, would be a slap in the face to hardworking scientists.

Finally, I cannot agree with my opponent's outlandish claim that AI will come to represent the next stage of human evolution, although I look forward to her arguments on this matter.
Debate Round No. 1
clamorbox

Pro

Even if you don't believe that AI will supersede us, given the warnings from many scientists, academics, UN spokespersons and human rights groups, and the millions being spent on projects and research like CSER's (Cambridge University) and MIRI (which originated from Berkeley University) that warn about risks and human extinction, it shouldn't be left in the hands of companies interested in profit margins and trends. It should be regulated, not ignored because it isn't as advanced as us yet. I am sure much of the population cannot conceive of or take seriously such progress, much like those who could not conceive of the idea of space rockets (created in 1926) when the steam engine was created in the 18th century. However, given that we are taking the first steps in AI technology, it is strange to me that there was uproar when we cloned humans and animals, yet here we are creating a whole new species and you think we should leave it in the hands of companies. It's not natural evolution but one created by man. (Links associated with what I discuss are here: http://clamorbox.blogspot.co.uk...)

The autonomous car is already here. Computers/AI are superseding us in specific ways, e.g. using algorithms to generate and interpret data we cannot understand or explain. This is currently propelling science forward and proving that we are fundamentally limited in comparison. DARPA is producing robots similar to the 'robo-sapien', which could put man against robot on the battlefield, taking human decision-making out of killing humans. Given our current technology, to suggest that we can stay in control and be the dominant species against robots in the future, not taking into account risks, possibilities and probabilities, is dangerous. They are already physically stronger than us and could have intelligence that is greater than ours. They are learning and thinking for themselves; even if AI does not match our intellect right now, we are going in that direction. It is when, or if, robots have the intellect to create updates and programs to serve their own needs, which may directly or indirectly be against the needs of humanity, that will be the critical point.

If South Korea proposed three years ago to supply 8,000 English-language teaching robots to kindergartens, then it must have confidence in its technological know-how and the investment to reach this goal. It is accepted by roboticists that our current technology will advance to produce primary care doctors within 50 years. If this is possible there will be devastating job losses across almost every sector in the near future. This means less distributed wealth and therefore smaller families, less availability of healthcare and resources etc., which in the end will not benefit humans.

As I have previously commented, technological development has aided us in many positive ways. But with such risks, governments, regulated by the people, should be more open to debating these issues and stop thinking of the short-term economic benefits. Of course secrecy surrounding governments' technological advancements has always been rife, but if public debate at least serves to inform the public about risks and concerns, which in turn changes the direction of companies meeting market demands, it will be a positive step. If we do nothing now, it may soon be too late. There seems to be more concern over psychological research being required to meet ethical guidelines than there is over AI research that proposes to change all aspects of human life, where the big ethical questions are ignored.
ObiWan

Con


I am still unsure as to what direction my opponent is taking their argument. Many of their stances appear to be hypocritical. In particular the argument based around robot doctors.

The Pro claims that "it is accepted by roboticists that our current technology will advance to produce primary care doctors within 50 years" and posits this as a negative by going on to say that because of this "there will be devastating job losses across almost every sector in the near future". However, in the very same paragraph Pro begins to argue that advances in AI technology will cause less availability of healthcare and resources!

If we are able to produce more doctors without having to spend years training them, then how, in the future, will there be less healthcare! The benefits of these hypothetical AI doctors are immense. They can be used in third-world countries with drastic shortages of healthcare professionals, and they can provide aid to soldiers on the battlefront without having to risk the lives of medics, skilled people trained in medicine, not warfare. We are even forecast to have a shortage of around 500,000 nurses by 2020! [1] These robots have the potential to save millions of lives, and yet somehow, if you believe my opponent, they 'will not benefit humans'.

My opponent has also obviously been watching/reading way too much science fiction. The speculation that AI robots will try to destroy us is exactly that: speculation. And even if they do, as their creators we can ensure there are safeguards in place to prevent it. Take Ridley Scott's Blade Runner, for example. In it, a group of biological robots, known as replicants, return to Earth and cause havoc in a dystopian vision of Los Angeles. However, these replicants are built to have a lifespan of only around 3 years, so the last, and most dangerous, of their group dies before he can pose any real threat to humanity.

Finally, I have a problem with my opposition's claim that private research by companies interested in producing these AI robots for profit is a bad thing. A competitive marketplace is what drives companies to constantly produce better, more affordable products. Government regulations and restraints on this research would stifle the competitive edge that drives companies to make sure they do things right. Government supervision also opens the entire process up to corruption, or just blatant favoritism.

My opponent is also worried about the dangers of 'not taking into account risks, possibilities and probabilities'. SCIENTISTS ALREADY DO THIS! Every single day, any experiment conducted has to pass safety guidelines, and scientists applying for research grants must have their proposals reviewed by an ethics committee.

Sure, artificial intelligence may be scary, but just think what it could do for humanity, or, since my opponent seems to think robots are their own separate species, with humanity. Sure it may be scary, but so was the first car and the first aeroplane. Irrational fear should not stand in the way of human progress; it should be the willingness to overcome this fear that drives us forward.



[1] http://healthcarehacks.com...


Debate Round No. 2
clamorbox

Pro

You seem to miss the whole point of AI: ROBOTS ARE CREATED TO LEARN AND DECIDE FOR THEMSELVES. They could re-write and compose programs for their own goals. IN ORDER FOR AI TO BE USEFUL, learning for themselves is fundamental, because they can adapt to any environment and the possibilities those settings present; WITHOUT THIS ASPECT THEY WOULD BE ALMOST USELESS. Your point about the great positives of doctor robots learning quickly also has a downside, and it contradicts your reference to a sci-fi film in which the robots are built to terminate within 3 years. If you have a robot formed of components, it wouldn't take long before they figure out how to change those components either. Therefore, CREATORS CANNOT ENSURE THAT SAFEGUARDS are in place in the long term. That they are being created with limbs, skeletons and muscle movements comparable to humans' guarantees nothing when they have the intelligence to take such a safeguard out. AI WILL CREATE AMAZING POSITIVE THINGS, BUT THERE WILL BE THE EXTREMELY GOOD AND THE EXTREMELY BAD and everything in between. They will learn from their environments, and not every human nurtures the best in people, never mind robots.

Perhaps I have not made my previous points clear for you. Yes, IN THE SHORT TERM TECHNOLOGICAL DEVELOPMENTS WILL AID US IN GREAT WAYS! THE POSSIBILITIES FOR ADVANCEMENT, ESPECIALLY IN HEALTHCARE, ARE REMARKABLE! This is another reason why governments may be late to react: the short-term economic benefits are much needed right now. Roboticists are already building robots that include comparable human parts, including kidneys and hearts etc.; this could have great benefits for humanity and I am not saying otherwise. YOU LOST MY POINT THOUGH ABOUT ROBOTIC DOCTORS. Doctors have to perceive medical problems and base diagnoses on what is and what is not said. They explore the possible causes of the symptoms displayed. They eliminate possibilities until the most probable diagnosis is accepted. TO REACH SUCH A HIGH LEVEL OF INTELLIGENCE IN JUST 50 YEARS MEANS THAT ALMOST EVERY HUMAN IN THE WORKPLACE COULD BE REPLACED. My point actually was: when jobs are lost on mass in every sector wealth distribution is affected. For the majority in the world who have to pay for private healthcare, where will the money come from when you have no job? Of course in the UK we have the NHS, which is already struggling. The government is proceeding with tougher austerity measures because that's the only way forward. In such times social security payments for unemployment benefits go up. When the risk is that jobs could be lost on mass, without controls, and there are no policies to prevent it, the whole system could collapse.

You seem to argue that safeguards just happen magically. Ethical considerations and guidelines for progress only happen when people are involved: discussing, collaborating and deliberating, not ignoring. I am sure at some point governments will decide what is right for their country, but if people do not put measures in place now and instead react slowly, it may become commonplace and there may be no way back. In the past, recessions have had devastating consequences on individuals and populations. I'm not saying that this is "the solution"; the following are merely suggestions, and what I'm actually calling for is public discussion. If policies are put in place to safeguard a percentage of human jobs, companies would still save money, and/or for every human job lost to a robot a percentage of tax could be paid each year into systems that benefit humans.

REGARDING YOUR POINT ABOUT COMPETITIVENESS BETWEEN COMPANIES, I COULDN'T AGREE MORE. However, given your lack of acknowledgement of the risks, HOPING FOR THE BEST DOESN'T SEEM LIKE A STRATEGY TO ME. ONCE TECHNOLOGY IS KNOWN, ONE WAY OR ANOTHER IT FALLS INTO THE WRONG HANDS (SIMILAR TO HOW IRAN'S NUCLEAR ENRICHMENT PROJECTS STARTED). Secondly, when Dolly the cloned sheep was created, no one asked the public for their consensus. It just went ahead, and opinions were then surplus to requirements since an ethics committee had considered it already. CLONING JUST SERVES AS AN EXAMPLE OF HOW THE GENERAL PUBLIC ARE NOT INCLUDED IN DISCUSSIONS ABOUT ADVANCEMENT. I'm not commenting on whether cloning is right or wrong; I am merely saying the public should have input into what direction we are heading. Do you really think it should be left for a select few to decide? A public referendum seems more suitable to me.

As for ensuring AI robots are beneficial/safe in every aspect of life in the future: a number of pilot studies are using robot teachers in high schools and kindergartens. Research is looking at improving robots' interaction in the classroom and delivering quality lessons. These studies have been expanded and are growing until the robots are ready to roll out. South Korea hopes to have a robot in every home by 2020. What are the long-term psychological impacts on children when robots are their role models and carers? Will it impact their psychological development and interpersonal relationships in the near and distant future? Has any government said it will restrict robots until longitudinal studies are done? Even if empathy is programmed, they will be like psychopaths caring for children; I hope they will be as safe as you think they will be. I don't have the answers, nor do psychologists or roboticists, but robot teachers are starting to roll out despite this. The course of progress could be directed in certain ways, but crossing your fingers and hoping individuals in the correct positions will do the right thing sounds too much like a fairy tale to me.
ObiWan

Con

Perhaps the reason you think I am missing your point is that you have failed to outline what it is you are actually arguing, leaving myself and the audience to read between the lines. If you plan on conducting more debates here, I thoroughly recommend you set out a concrete resolution at the start of the debate.

If all you want is simply public discussion on the topic, then I guess by me accepting this debate you have already won, and it has indeed been an interesting discussion.

Anyways...

I believe I can summarise my opponent's case into three main points from the last round, where the human extinction angle was all but abandoned:

1. Unemployment
"My point actually was: when jobs are lost on mass in every sector wealth distribution is affected"

What my opponent fails to understand is that technology has been stealing people's jobs forever and will continue to do so forever. When you find a cheaper, more effective way to solve a given problem, you take it. For example, and to keep on topic, the automotive industry greatly cut the number of workers it needed when cars were able to be assembled by robots, yet it hasn't led to a catastrophe. Once upon a time the invention of the windmill stole someone's job, the invention of the mobile phone closed down telegraph offices, the internet eradicated video rental shops, the electric light wiped out the lamplighter. The list goes on and on and on, and if one day it includes teachers and doctors and statisticians and pilots and cameramen and accountants, then so be it. BUT it won't be because some malevolent force has risen up to challenge humanity in an evolutionary duel to the death; it will be because that is the most effective way of doing things at the time.

And then new jobs will take their place. Did the 50-year-old web designer, the ex-photographer turned touch-up guru, or even the humble owner of an internet service provider ever dream of the jobs they have now when they were kids? I don't think so. But they are there now. We have no idea what positions will need to be filled in the future, but as technology advances new jobs will ALWAYS emerge along with it.

(http://www.wired.com...)

2. Public Opinion
"I am merely saying the public should have input into what direction we are heading"

The cloning of Dolly the sheep was a scientific experiment conducted by the Roslin Institute in 1996. If every single scientific experiment were subjected to a tedious public referendum, the negative press some issues receive would completely destroy them before they had a chance. Topics such as vaccination, stem cell research, supercolliders, cloning and of course AI would be voted down by the scientifically illiterate people who make up the vast majority of the population. Decisions should be made by the people who know what they're talking about.

In the same way, the crowd at a sporting match doesn't get to award fouls, free kicks, penalties or whatever. No matter how much they like to think they know the rules better than the official, who has undergone extensive training to understand them, they will always be subject to bias and will put their own interests (or their own team's) ahead of the interests of the rest of us, who just want to see a fair contest.

I am not saying the public should not hold debate, and the great thing about democracy is that if enough people don't like the way something is being done then we can change it (it's no coincidence that in most cases the most technologically advanced countries are democratic). But the initial decisions need to be made by the people who know what they are talking about, the ones who are there to make them; otherwise misinformed and incorrect decisions result.

3. Safety

Here my opponent pretty much answers his own misgivings. He cites pilot studies (although not directly). This is our check-and-balance system. As most of Pro's arguments fail to take into account, we don't really know how things are going to turn out unless we test them, so test them we must. If the tests fail, we try again; if they succeed, we proceed to the next round of testing, then the next, until finally we can say with absolute certainty that these conditions will yield said results. We don't know that AI robots will become a safety risk unless we test it. We don't know that they will have negative psychological effects on children unless we test it. So by all means be afraid of this technology, but don't let the speculation bred by these fears dictate your thinking.

In conclusion, like the technology that was invented before it, and the technology that will be invented afterwards, artificial intelligence has the potential to greatly enhance human life, and should be given the chance to develop in the same way as other technology: not unchecked, but uninhibited. We already hold in our hands the technology needed to destroy the human race, and so far so good. We cannot afford to let fear stand in the way of progress, and to all those doubters out there: never underestimate the adaptive capabilities of a species as diverse and widespread as humanity.
Debate Round No. 3
4 comments have been posted on this debate. Showing 1 through 4 records.
Posted by clamorbox 4 years ago
clamorbox
Thanks, not sure what happened there, here is the link:

http://clamorbox.blogspot.co.uk...
Posted by ObiWan 4 years ago
ObiWan
Alright sounds cool.
Your link is dead btw...
Posted by clamorbox 4 years ago
clamorbox
I am calling for transparency. We need the levels of information capable of supporting a reasoned discourse. As much information as possible should fall within the public domain at the earliest opportunity, so that we may make our own minds up about the kind of development being researched. The public needs the information so they can see that government regulations are applied on moral and ethical grounds and in our best interests. Large corporations will lobby government and, as in so many other cases, exert incredible influence upon decision-making. Therefore, I want the government to regulate business/research and the public to regulate the government. Given that many scientists have decided AI will supersede us, or are warning of risks, surely we should be looking at control and regulation now, even though for centuries we have always tried to advance scientific progress.
Posted by ObiWan 4 years ago
ObiWan
I am confused as to what your actual resolution is... Are you arguing that government should control research into Artificial Intelligence? Or is it more a debate about whether AI is the next stage of human development? A specific resolution would be helpful, as this seems like an interesting topic.
2 votes have been placed for this debate. Showing 1 through 2 records.
Vote Placed by Ragnar 4 years ago
Ragnar
Agreed with before the debate (0 points): Tied
Agreed with after the debate (0 points): Tied
Who had better conduct (1 point): Tied
Had better spelling and grammar (1 point): ObiWan
Made more convincing arguments (3 points): ObiWan
Used the most reliable sources (2 points): Tied
Total points awarded: clamorbox 0, ObiWan 4
Reasons for voting decision: Pro did not create a convincing case for why we should be opposed to the rise of the "robo-sapiens." Their case was made all the worse by at random times GOING ALL CAPS, which does convey emotion in words, but closer to yelling on the street than speaking with passion. The lack of clarity from pro was pointed out well by con with this statement "I am still unsure as to what direction my opponent is taking their argument. Many of their stances appear to be hypocritical. In particular the argument based around robot doctors."
Vote Placed by DeFool 4 years ago
DeFool
Agreed with before the debate (0 points): clamorbox
Agreed with after the debate (0 points): clamorbox
Who had better conduct (1 point): Tied
Had better spelling and grammar (1 point): ObiWan
Made more convincing arguments (3 points): ObiWan
Used the most reliable sources (2 points): Tied
Total points awarded: clamorbox 0, ObiWan 4
Reasons for voting decision: PRO seems to be listing a number of hypothetical future technological advances, which PRO hopes will not happen. I had some difficulty determining exactly what these arguments were, however. CON points out the use of hyperbole, and sometimes vague argument structure by PRO, and was largely ignored. It seems that this was instigated by PRO as a means for launching a public discussion of this subject, and was ill-suited for the formal debate section. The forums section of this site might have been more helpful.