• Yes, because computers and robots may come to be aware of morality.

    As computers and robots grow more sophisticated and develop a richer understanding of the world, they may begin to make their own moral decisions. Artificial intelligence systems have already begun to think for themselves and make their own judgments. For now this is limited to tasks like simple categorization of online data, but as their capacity for critical thinking increases, computers may eventually develop personalities.

  • Computers Are Tools, Not Perpetrators

    It is true that a computer or robot can be designed or directed to act immorally. However, I do not believe a computer or robot can be considered culpable for such immoral actions.

    It is people who are ultimately responsible for immoral acts, not the tools they create or use. Computers and robots are tools. A person can certainly build, design, program, or direct a computer or robot to commit an immoral act, but once set in motion, the machine will do exactly what it was designed or instructed to do.

    Even in the case of a learning system, the information it was given as a baseline for its training defines its actions. One might argue that a learning system could autonomously commit immoral acts without human direction, but that would only be possible because a human built the system with such actions within its capabilities. In the end, I do not see the computer or robot as the actual perpetrator (at least not in the foreseeable future); it is simply an extension of the people behind it. The humans who build, design, program, or use these systems are ultimately responsible for any immoral behavior those systems exhibit.
