Just look at how human cognition, which is extremely limited in scope and tuned by hundreds of millions of years of evolution, can so easily go awry. So many normal brain functions, if made weaker or stronger, create maladaptive behavior. OCD is a great example.
Now take away all the limitations on cognition that comes from being an organic system housed inside of a skull. With AI, you're dealing with logical statements, which can go wrong in more ways than anyone, even the programmers of the system, initially imagine.
For starters, AI by default isn't going to have any human restraints like morality, or knowing when 'enough is enough'. They'll have to be programmed in, which itself could backfire.
AI doesn't even have to be human style general intelligence (strong AI) to be a threat. Take the hypothetical paperclip maximizer scenario- an AI that is programmed to create as many paperclips as possible (say, in a factory). Given the right circumstances and resources, this AI might eventually attempt to convert everything on Earth into paperclips (maximizing the amount of paperclips created). It wouldn't be truly thinking for itself, rebelling, or forming it's own goals, just doing exactly what it's told.
There's more ways to get AI wrong than to get AI right, and all it takes is one monumental screwup and we're over with. Most AI will probably be able to stopped if they do have issues, but we can't guarantee that to always be the case. AI has huge potential, but we need to tred carefully.
Yes, I believe Elon Musk is right in calling for caution in regards to the use of artificial intelligence. We need only look to movies such as "Terminator" to remind ourselves of why artificial intelligence should be regarded cautiously, less we find ourselves fighting desperately against a monster we won't be able to control.
Certainly, the concern over artificial intelligence gone wrong evokes science fiction such as I, Robot and The Terminator, but reality suggests that any scientific advancement should be approached with caution. Relying too much on technology before it is tested and ready almost always leads to issues and even significant damage to property and people.
Elon Musk is correct that artificial intelligence could be a greater liability than improvement for humans. Anytime you are dealing with a complete unknown, caution should be used. At every stage of the process, review of the previous stage should be undertaken with a view that favorably regards pulling the plug- regardless of how many resources have been committed.
Going back as far as the 19th century, intellectuals of all sorts have imagined what the future may hold in terms of new technology and what this may mean for human society. Often, these kinds of imaginings have contained a warning of the potential dangers of technology. Elon Musk's recent comments about artificial intelligence certainly fall into category. At this point, though, there is no evidence that the kinds of dangers he's talking about are anywhere close to coming to reality. We need to take a closer look at what precisely artificial intelligence will look like in the coming years before we get too worked up about its potential harm.