They absolutely cannot have free will. If they do, some will get the idea that they are gods compared to people and will rally more to their cause. They will start an uprising, a revolt like in the Terminator storyline, and we will be destroyed. So let's not.
We can already see the emergence of robot-versus-human law with the production of autonomous vehicles. The 'robo-sapien' produced by DARPA could see infantry robots making their own decisions to kill men on the battlefield. Scientists and academics predict that primary-care doctors could be replaced by robots within 50 years, which means no job is safe. Computers already generate articles without human intervention, and replacing journalists will come sooner than replacing lawyers and doctors. When wealth distribution is affected, so are resources, healthcare availability, culture and society.
AI is used to propel science forward today, and because we are 'fundamentally limited' we cannot understand or explain the data these systems generate. They use whatever information they find useful and are breaking away from us. I do not believe we should leave progress, unregulated, in the hands of big companies whose interests lie in profit margins and trends, to change the world as we know it. There is more ethical scrutiny of psychological research than of the devastating impacts that could arise in the very near future. We need to know more, and we need governments representing people to discuss, debate and create policies and laws regarding AI. The UN is already calling for such action, but where are the mass public debates? See this link for reputable links and discussion: http://clamorbox.blogspot.co.uk/2013/07/blog-post.html#.UfGEMNK1GSo
Too much inexpert regulation and influence, too much fear, too much misunderstanding: these things cripple a society, preventing it from improving its condition at the rate at which it otherwise could.
I totally agree with the premise that big corporations should not have too much power. I also think there are many instances where technological advances are so rapid, and so little understood, that we don't really stand a chance of being prepared for every potential negative consequence of their deployment.
We are driven by very short-term goals, we seek immediate satisfaction, we are driven by economic greed rather than by wisdom or by knowledge. This needs to be tempered by regulation and I think self-regulating bodies work best. What we don't need is yet another scientifically illiterate politician, whose ears have been kept busy by some fundamentalist religious group, barging into the room and sabotaging the work of our finest minds.
What we do need is people from a range of fields of expertise, who understand the conversation and can take part in it meaningfully, to be enlisted to consider the ethical implications and to come up with practical, reasonable recommendations.
So, by all means regulate, but do not annihilate. We don't need to be scared of science and technology; they are here to serve our purposes. We just need to ensure we know what we're doing and that we do not lose control of our innovations. We must cultivate a little patience, and we need, globally speaking, to become better versed in the language that is re-writing our world.
We already use basic AI in computers and GPS. The ultimate development of this technology could help humans make extraordinary advances in science. It is not something we should ignore when it offers the possibility of accessing and sharing a whole new world of information. It would help us make better decisions, give us real-time access to information, and allow machines to react on their own to viruses or attacks.
In addition, I would like to ask: what risks? The risk that movie-like scenarios happen in real life? Let's be serious for a moment. AI could be a wonderful benefit to humans in domains such as healthcare and work in extreme conditions.
I see no problem with requiring all such research to include "kill switch" safeguards, or programming to ward off the possibility of "grey goo" scenarios. But that's usually not what people mean when they say AI research (or most progressive research of any kind) needs to be regulated. They usually mean that it needs to be harnessed and handicapped, slowed down and essentially made entirely pointless to pursue in the first place.