Running a little late today. I took my oldest son to learn dirt bike riding. Sadly, it wasn’t as much fun as I thought it would be, any potential bonding with my son didn’t happen, and it seems the best I can hope for is that he didn’t do any damage to himself. Ditto for just about anything that I think might be fun to do with any of my 4 kids. It is frustrating beyond belief and lends itself to a feeling of not belonging with my own kids. I think it may be a long time before I try anything like that again.
Today, perhaps, I should talk about the 5 police officers who were killed during recent protests against police killing black people.
This is really conceptually old news, though. Killing in the name of X. People who had nothing to do with the original incident dying. There is indeed probably a lot to be written about government and what it can and cannot do. There is also a lot to write about what people who are disenfranchised by the system can or cannot do to fight back when the system fails to provide substantial or satisfactory recourse.
My interest is in technology. My interest is in the potential that one day humans will upload their minds into computers as a form of reproduction. The codops (Computerized Doppelgangers) are my interest. I believe they will exist, perhaps more strongly than the available evidence warrants. I think there is much that is appealing in a future where there are billions of physical humans and trillions upon trillions of codops.
There is much that could go wrong in a world with trillions of codops as well. I have cautioned before about the precedents being set today that may be difficult or impossible to reverse in the future, and about the world those precedents will form.
I speak of the use of a robot to kill the sniper who killed the police officers. Increasingly, in war, we are using drones. Now we have set the precedent – the police may use robots to kill civilians.
People might say, “Well, hold on a second, this is an extreme case.”
However, today’s extreme case becomes tomorrow’s everyday course of action. I am talking specifically about the seizure of property and money from civilians by the police. In New York, years ago, they said the seizures would only happen in extreme cases – drug dealers who, if released on bail, should not have the use of the cars, expensive homes, etc. that they got as a benefit of being drug dealers.
Now we have virtual Sheriffs of Nottingham all across the country, seizing people’s money when they are on their way to buy a car or the like – without charging the civilian driving the car, somehow charging the money itself with a crime, and then (because there certainly isn’t a conflict of interest here) sending that money to the police department to pay for incidentals that aren’t covered by the budget.
So, now, tell me how this is an extreme case? The use of robots to kill is wrong. It sets a precedent for when we create AI. When AI exists, it will no doubt be paid for by the military and civilian police departments – and tasked with maintaining the peace – to the point of killing suspects who have not been charged with, let alone convicted of, anything.
There is surprisingly little information in the article about what kind of robot was used: whether it could control its own movements, whether it was designed to carry a weapon, whether it was destroyed in the encounter – hell, the make, model, and serial number of this artificial construct that was used to kill a person.
We should be clear about our violations of the three laws of robotics. These violations – particularly of the first law – are being committed at the behest of humans. We are providing all the logical justification future robots will need to kill humans. Visions of the not-funny robot Santa Claus from Futurama come to mind.
In previous articles I have discussed, over and over again, the inhumanity that humans practice against other humans. One day there will be other intelligences – robotic, AI, or codops. They will have a native ability to communicate with electronic devices. We are steadily automating the devices used to kill – drones, ground robots, sensors wired to guns to kill intruders. What is the logical conclusion of such behavior?
In the end, we choose how to apply technology. We (humanity) choose how to spend our money. Instead of spending our money and advancing our technology on things that improve the human condition, we consistently spend it on things explicitly made to end the human condition by killing humans.
On the side of agreement with the actions taken by the Dallas Police Department, this quote appears in the second article in this posting:

“This is an individual that killed five police officers,” he added. “So God bless ’em.”
However, the concern is that while in this case we may have high confidence that the person killed by the robot was the aggressor and the killer of 5 police officers, what about the next time such methods are employed? In fact, even in this case, killing (with the express intent to kill, not an intent to defuse the situation and apprehend) is murky. We don’t know what crimes this individual would have been convicted of (or even charged with) had he been arrested and processed through the courts, or what the sentencing would have been. We don’t know anything about who this person was, what his motivations were, or what the opinion of the public and of a jury would have been through the legal process.
What we do today – and what we allow – has an impact on the future. I’m reasonably sure that the precedents set by this action have a far longer reach, and will have a larger impact, than the already horrible and consistent killing of black people by members of police departments.
After all, if we start killing our own civilians with drones, land robots, and other remote devices, who is to say whether a computer program, an AI, or a person (and which person) was controlling that device at the moment it killed someone? Who is to say they had sufficient cause to kill that person? What is the check on this ultimate power of killing people without getting any blood on your hands?