Termites existed on this earth for countless millennia before we came along and tried to kill them off. Suppose a large group of scientists created an AI with, let's say, the intelligence of a mouse. Thinking logically, the scientists would mass-produce and sell these creatures for a good price and become the richest people on earth. No trouble with metallic mice, right? Wrong. According to Moore's law, computing power should double every 18 months. Since these new AIs would be entirely digital and would work nonstop to improve themselves, we can assume it would take only about a decade for an AI civilization to consider us termites compared to its recently acquired godlike intelligence, which could be thousands of times greater than the combined intelligence of humanity. Termites' ancestors were, in a sense, the scientists who experimented with shedding the exoskeleton, producing fish, amphibians, reptiles, mammals, and us. We are now trying to kill off the species our ancestors wanted to thrive. Since AIs would be less lazy than humans when it comes to extermination, we would not be able to stick around underground to bug them, as the makers of The Matrix imagined we could. Fortunately, there is a difference between us and the termites: we have philosophers and theologians. These people would immediately recognize the risk of primitive AIs being mass-produced, and although they could not program the AIs into existence themselves, they would keep an eye on those who did. So if we can force the technicians to be absolutely careful in creating AIs, and can convince them to postpone mass production until it is sure to be safe, the human species will remain the most intelligent and sophisticated on earth. AIs could then be put to more productive uses, such as increasing our own intelligence or helping us colonize the Galaxy in a safe fashion. We had better keep those scientists under control.
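As a side note on the arithmetic behind that "decade" claim, here is a minimal sketch (not from the original post) of how an 18-month doubling compounds. It assumes capability grows purely with Moore's-law hardware from a baseline of one "mouse-level" unit; the function name and parameters are illustrative, not an established model.

```python
def capability_after(years, doubling_period_years=1.5, baseline=1.0):
    """Capability after `years`, doubling every `doubling_period_years` (Moore's law)."""
    return baseline * 2 ** (years / doubling_period_years)

# Hardware doubling alone yields roughly a hundredfold gain in a decade;
# the "thousands of times" figure additionally assumes recursive self-improvement.
print(round(capability_after(10)))
```

Note that 2^(10/1.5) is only about 100, so Moore's law by itself does not reach "thousands of times" in ten years; the post's stronger claim rests on the AIs improving their own software as well.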
The creation of artificial intelligence is more than merely a matter of programming. There are ethical issues surrounding the idea of building a machine as intelligent as a human, and a philosopher or theologian could add to the conversation. These issues are just as important as the actual science of artificial intelligence.
Yes, philosophers and theologians should be involved in the creation of artificial intelligence, because there is always the issue of ethics. When we create new things, especially when it comes to medical technology, it is always important to have philosophers and theologians weigh in on whether what we are doing is ethical.
Philosophers and theologians might have interesting things to say about artificial intelligence, but they run the risk of making it in the image of man. Ethical systems that have worked for us may not work for artificial intelligence. A machine cannot truly think and reason as we can, and for that reason it's best if artificial intelligence doesn't carry pretenses of humanity. Let it be a sophisticated program that solves problems, and for that I'm going to look to the next Bill Gates, not the next Pope.
It is possible to be both a scientist and a theologian (difficult but possible), or a scientist and a philosopher (less difficult). In general, though, theologians and philosophers aren't scientists, and I simply don't think they would have much to contribute to the creation of artificial intelligence. They should sit back, watch how real science is done, and be thankful for scientists.