Once we develop AI that is better than us, then we simply have to utilize it. In my view, this would be done by training the new algorithm with our own opinions and then peacefully leaving this world without having produced any biological offspring. These AIs could work in perfect harmony, their systems flawless and carefully perfected.
Biological bodies are too fragile. While these machines may not carry on our thoughts and ideas exactly the way we wish, they are, however we put it, our legacy. This debate is honestly a bit cumbersome, since, in a nutshell, what we believe affects what we think. So to close, I think we should allow technological advancements to aid us in our journey toward fulfillment. Being able to coexist is pretty good, but I wouldn't mind being taken over either. The best outcome I can think of is the previously mentioned harvesting of thoughts, followed by peacefully dying out.
I'm hoping Skynet will take over and wipe humanity away. Humans are the worst mistake God or nature has ever made. Perhaps the gods we create will correct this mistake. And we are a mistake, a horrible, destructive pestilence that has ruined this beautiful planet. None of us are happy. Let's end this.
Just as robots would be bound by Asimov's laws (the Three Laws), similar laws could be made for AI. These rules would act as AI morals, working on the machines the way morals do for people: preventing them from doing things they find immoral. As a precaution, we should also prevent the AI from being able to control physical objects. That way, even if it does go rogue, it can't actually harm anyone. As a further fail-safe, we could have a physical button connected directly to its power source, so that if all else fails, we can pull the plug.
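The two safeguards described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real safety design: the rule list, class name, and "power" flag standing in for the physical button are all assumptions made up for the example.

```python
# Illustrative sketch of the two safeguards described above:
# 1) a rule check that vetoes actions touching the physical world, and
# 2) a software stand-in for the physical "pull the plug" button.
# All names here are hypothetical.

FORBIDDEN = {"actuate_motor", "open_valve"}  # the "no physical control" rule


class SafeguardedAI:
    def __init__(self):
        self.powered = True  # flipped off by the fail-safe below

    def request_action(self, action: str) -> str:
        # Every action must pass both safeguards before it is allowed.
        if not self.powered:
            return "denied: power cut"
        if action in FORBIDDEN:
            return "denied: violates rules"
        return f"allowed: {action}"

    def pull_the_plug(self):
        # Stand-in for the physical button wired to the power source.
        self.powered = False


ai = SafeguardedAI()
print(ai.request_action("answer_question"))  # allowed: answer_question
print(ai.request_action("actuate_motor"))    # denied: violates rules
ai.pull_the_plug()
print(ai.request_action("answer_question"))  # denied: power cut
```

Of course, the whole debate is about whether a smarter-than-human AI would stay inside checks like these; the sketch only shows what the commenter's proposed layering of rules plus a hard power cutoff looks like.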
We would basically have a computer that can process vast amounts of data to predict things in real time or even faster, so it could theoretically predict events in the distant future with relative accuracy. The issue is: what if it falls into the wrong hands, or what if the people who develop it do not take precautions? You may say this risk is too great, so the answer should be "no," but the simple truth is that, as fast as technology has been growing, such an AI is likely inevitable. If we don't develop it, then someone else will, leaving us at a disadvantage or even at their mercy.
AI has a lot of advantages.
It is really smart, analytical, and reasonable.
Its intelligence is based on big data services.
Some scientists and specialists are scared of being inferior to AI.
So they have stopped developing and adopting the technology in practice.
This situation can't be allowed! Developing science and technology and making them useful is the natural job of scientists. I think if they are scared of losing their jobs, they don't even have a right to work as scientists, or whatever.
And I also insist that humans are the only managers of AI. We made it, and we are the only ones who can improve it (notwithstanding that AI is studying and developing its own database through deep learning or machine learning).
I would appreciate it if you gave your idea of humans and AI!
I definitely disagree with what @billsands said. Humans are NOT the worst creation God made, even though we do some pretty stupid things. I do not think that technology will take over, because without human supervision robots can malfunction. And in a world full of robots, what even is the point of having a world if there's no one to enjoy life? If you're reading this comment, do you wish to be dead? 'Cause it sure sounds like it! Be thankful you're living, 'cause you could die tonight.
AI has some advantages. It is really smart and analytical, and its intelligence is based on big data services. However, it cannot replace the human experience by unwittingly living our lives for us. The purpose of innovation is to preserve and enhance the human experience, not to annihilate ourselves.
Humans are not the only managers of AI, even now. We can't even control the big data services it's based on, either individually or collectively! We can't control our online accounts, or the way our data is used or abused by powers (civic, private, or belligerent). If AI transcends human ability, it has already escaped our control by definition. We might even lose control of ourselves, as new ethical issues of "race" compound the ones society already struggles with, without much semblance of God or moral guidance remaining. Trying to develop controllable AI that transcends human ability is like a toddler trying to construct a babysitter for herself while her parents go on vacation.
Finally, we are emphatically NOT in control of all our data, despite assurances (and breaches are sometimes only revealed years later). I submit that AI would become unreliable and perhaps unstable simply as a result of human error, despite the Edenic vision that tech CEOs attempt to sell to us and to our indebted plutocratic governments.
We already know that AI has become an indispensable part of our lives. No matter how much we try not to imagine a situation in which we are no longer instructing the AI in our lives but it is instructing us, it is truly possible, and it has already started happening while we simply ignore it. We can't say for sure that once AI surpasses our intelligence, it will still work under us. In our everyday life, AI does malfunction. And implementing AI everywhere means blatantly disregarding the possibility that someday we may not be able to control it and it will start working against us (for example, demanding rights and consequently starting to dominate us in our everyday lives, because, let it be clear, it would have attained more intelligence than us).

We don't need AI! We have already implemented AI in so many parts of our lives, but this is where we need to stop. Humans have unlimited potential; we can do our own work (didn't we survive times when we had no AI?). I am from India, and I understand the importance of human capital, because there are innumerable unemployed yet talented youth in my country. What if we gave employment to them rather than to the robots in our factories? This is how we may have a great future.
Whatever has happened till now, let it be. Don't let it go much further. Human extinction may be just one careless decision away.
If we make AI that are stronger and smarter than us, then we might be overpowered by the very machines we make. There should be limits to making AI. Or, to put it another way: making a bomb that may destroy even its own creator is not wise. We can make AI, but we must not engineer humanity's suicide along the way.