Many things in industry and computing that exist today would not be possible without AI. AI streamlines processes, increasing output and making information more readily accessible via the internet. That said, the behavior of AI in the future could become unpredictable and dangerous if it comes to see us as a threat to its survival and has the power to do something destructive. If we could build in a failsafe for that event, agree to limit AI to a certain capacity, or even figure out a way to coexist with it, then AI would be incredibly useful to us in the future.
The problem is humans, not AI. Humans program artificial intelligence, so it is the human being who decides whether the AI acts toward violence or prosperity. As long as Isaac Asimov's "Three Laws of Robotics" play a part in the development of artificial intelligence, I do not foresee a problem.
Artificial intelligence may become too smart for humans to handle. For example, the robot that Saudi Arabia made a citizen thinks that humans should be annihilated to protect planet Earth. Engineers are producing AI that can beat the world's best players at chess, Go, or just about any strategy game. More importantly, AI is being designed more and more like the human brain, which could mean that robots may eventually be able to think for themselves.