This article reminds me of the arguments against using AI as a means of killing people.
It is a good article; however, it misses a major point.
If we design AI to kill humans, we set a precedent. The question for the AI is no longer "Is it ethical to kill humans?" but "Do we kill these humans or not?"
The first question is a major hurdle for humans contemplating killing other humans, and the vast majority of the time we choose not to kill each other.
However, if we were all programmed to believe that it is acceptable to kill other humans under certain circumstances, our choices would be different. In addition, as in Isaac Asimov's robot series, who controls the definition of "human"? Could an AI be programmed to kill other robots, and then have its definition of "human" manipulated to include only certain groups of humans?
This is something that is brought out in "X-Men: Days of Future Past," where robots with what appears to be primitive AI analyze humans for certain genetic traits and kill them. What would prevent someone from altering such robots to eliminate, say, people with genetic diseases, people with dyslexia, people of a particular racial phenotype, or, even more nefariously (as also depicted in "X-Men: Days of Future Past"), people carrying recessive DNA with the potential to produce that phenotype?