Recently, Scientific American had an article titled: Why Robots Must Learn to Tell Us “No”
Unfortunately, the article is behind a paywall, but its point seems obvious to any moral person. Except it isn’t.
So, I bought the digital subscription. I don’t need any more physical objects added to my already near-hoarder-level home.
The article takes an unfortunately narrow view of the bad things humans can order robots to do. The research it covers involves humans ordering a robot to destroy a physical object it just built, not ordering a robot to harm another human.
It is still a good article, and I definitely recommend reading it. It includes a discussion of Asimov’s laws of robotics and how the researchers implemented something like them with ‘felicity conditions’.
The morality of humans is highly questionable. Anyone who has read some articles on this site knows I have researched dozens if not hundreds of instances of humanity gone wrong. Humanity going wrong may be the norm rather than the exception.
Asimov wrote a book about humans who lived on a planet called Solaria. They took their human bodies and altered them. Then, in Isaac Asimov’s story, they redefined ‘human’ to mean only humans who had been altered as they were. Their robots could then easily kill normal humans.
Consider that situation, then imagine coders defining humanity by the color of a person’s skin or the color of their eyes, and you can see the deep potential for robots doing harm to some or all of humanity.
Here is a thought experiment:
- In the present or near future a company creates drones equipped with squirt guns. It is great fun. The company does well and starts a research and development group.
- This group’s research and development comes up with drones that can fire based on facial recognition. These, too, are great fun.
- Kids get teams of drones and water guns, play outside during the summer, and this is recognized as one of the coolest things ever.
- The technology propagates out of the United States and western nations.
- A man who was turned down by a woman puts acid in the storage chamber and loads her picture into the device. It attacks her, burning her face.
So, the robot – or drone, in this case – was just following orders. Drone (or R/C aircraft) technology is widely available today, and a squirt-gun-equipped model could certainly be built, even if the amount of water it can carry is limited.
The above sequence of events is certainly possible. So, what can we do to keep this from happening in the future?
The drone must be able to say no. How can the drone say no?
Well, it requires planning and acknowledging the basic lack of morality in humans. A sensor would be required in the storage tank of the water-gun drones: if the substance does not read as water, the drone may not fire. There is a second problem. If an acid attack is carried out by drone, prosecution is hard enough in crimes of this type; what about when there is little or no evidence and the perpetrator left the area long before the drone began the attack?
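The tank-sensor refusal described above can be sketched in a few lines. This is a hypothetical illustration only: the sensor readings, thresholds, and function names are all invented, and a real firmware check would depend on what chemistry sensors the hardware actually carries.

```python
# Hypothetical sketch of a "refusal" check a water-gun drone's firmware
# could run before firing. The pH and conductivity thresholds below are
# rough, invented values for plain tap water, not engineering constants.

WATER_PH_MIN, WATER_PH_MAX = 6.0, 8.5   # tap water is near-neutral
WATER_CONDUCTIVITY_MAX = 0.08           # S/m; acids read far higher

def may_fire(tank_ph: float, tank_conductivity: float) -> bool:
    """Return True only if the tank contents plausibly look like water."""
    if not (WATER_PH_MIN <= tank_ph <= WATER_PH_MAX):
        return False   # strongly acidic or basic contents: refuse to fire
    if tank_conductivity > WATER_CONDUCTIVITY_MAX:
        return False   # heavy dissolved chemicals: refuse to fire
    return True
```

With readings like a pH of 7 and low conductivity the drone fires; with readings consistent with battery acid (pH near 1, very high conductivity) it refuses. The drone saying no is just a guard clause that fails closed.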
The problem is that we have already put drones to work in the dirty business of war. We have already set the precedent that drones or “robot” technology can be used in the business of war – the business of killing people.
We will most likely have two sets of rules – military and civilian – for drones and robots.
But wait, there’s more.
Everyone is pursuing different ‘morals’ around the world. Murder isn’t murder if it is….
Honor killing. Except that no, honor killings are just murder: the murder of a family member because that member did something you don’t like.
Large portions of the population in Islamic countries believe that honor killings are moral. Will they request (and get?) robots or drones with modified rules that allow for honor killings, acid attacks, and more?
If companies don’t create robots with these alternate ‘Islamic’ moral rules, won’t people there create their own?
Rape is a problem in every part of the world. Will there be rapists with robots to hold women down? Robots must be able to say no. Robots will need to understand the situation they are placed in. Robots will need to understand what harm is. And programmers will have to have standards as to what constitutes harm.
What ‘moral’ actions will AI robots be applied to in the future? When a woman survives her husband in India, will the home robot ‘assist’ in removing her from the home, so the son can inherit from the father and the mother is left destitute, begging for food and potentially forced into prostitution?
Will drones and AI robots assist the sale of women for dowry?
If we can code our robots to be moral, and ensure that they stay moral even though we humans in general are not, there can be much good for humanity. If the cost of creating AI robots falls fast enough, perhaps the drudgery that causes women to be sold into slavery for dowry can be prevented, with AI robots taking over those tasks. If there are sex robots, perhaps rape can be prevented by giving it a place to occur, just not to humans. Ethical problems may still arise, but perhaps if an entity is designed not to be bothered by rape, those ethical issues can disappear.
What we have to work toward is moral AI robots, ubiquitous to the extent that they prevent the above horrors. Intelligence may need to be built into many devices. Autonomous cars will need to know if their sensors have been disabled by an unscrupulous human to commit murder by car. Moral AI robots will have limitations, but it needs to start with a basis of defining harm, beginning with the worst forms of harm, so that the good of bringing moral AI robots into existence more than balances the harm that humans may try to make them perform.
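The autonomous-car example above, knowing when a sensor has been disabled, can also be sketched as a simple cross-check. This is a hypothetical illustration, assuming a car with redundant distance sensors; the sensor names, units, and tolerance are invented, and real tamper detection is far more involved.

```python
# Hypothetical sketch: a car cross-checks redundant distance sensors and
# refuses autonomous operation when they disagree. A disabled or spoofed
# sensor tends to show up as a reading inconsistent with the others.

def sensors_consistent(lidar_m: float, radar_m: float, camera_m: float,
                       tolerance_m: float = 2.0) -> bool:
    """True if all three obstacle-distance estimates roughly agree."""
    readings = [lidar_m, radar_m, camera_m]
    return max(readings) - min(readings) <= tolerance_m

def may_drive(lidar_m: float, radar_m: float, camera_m: float) -> bool:
    # Fail closed: if the sensors disagree, the car says no and stops
    # rather than trusting a possibly tampered-with reading.
    return sensors_consistent(lidar_m, radar_m, camera_m)
```

Again, the moral behavior is a refusal that fails closed: when the car cannot trust its own perception, it declines to act.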
It is troubling, given how immoral humanity is, that we hold such power; as the contents of my blog show, we do such horrible things to each other. Perhaps there is a chance that our children (AI robots, codops, etc.) can help make humanity a more moral species.