The Morality of Humans is Highly Questionable

Recently, Scientific American ran an article titled “Why Robots Must Learn to Tell Us ‘No.’”

Unfortunately, the article is paywalled, but its point seems obvious to any moral person. Except it isn’t.

So, I bought the digital subscription. I don’t need any more physical objects added to my already near-hoarder-level home.

The article takes an unfortunately narrow view of the bad things humans can order robots to do. The research described involves humans ordering a robot to destroy a physical object it just built, not ordering a robot to harm another human.

It is still a good article, and I definitely recommend reading it. It discusses Asimov’s laws of robotics and how the researchers implemented something like them with ‘felicity conditions’.

The morality of humans is highly questionable. Anyone who has read some articles on this site knows I have researched dozens, if not hundreds, of instances of humanity gone wrong. Humanity going wrong may be more the norm than the exception.

Asimov wrote a book about humans who lived on a planet called Solaria. They altered their human bodies. Then, in Asimov’s story, they defined ‘human’ for their robots as only those humans who had been altered as they were. Normal humans could easily be killed by these robots.

Consider that situation, then imagine coders defining humanity by the color of a person’s skin or eyes, and you can see the deep potential for robots to harm some or all of humanity.

Here is a thought experiment:

  1. In the present or near future, a company creates drones equipped with squirt guns. It is great fun. The company does well and starts a research and development group.
  2. The group comes up with drones that can fire based on facial recognition. These, too, are great fun.
  3. Kids get teams of drones and water guns, play outside all summer, and this is recognized as one of the coolest things ever.
  4. The technology propagates beyond the United States and Western nations.
  5. A man who was turned down by a woman puts acid in the storage chamber and loads her picture into the device. It attacks her, burning her face off.

So, the robot (or drone, in this case) was just following orders. Drone (or R/C aircraft) technology is widely available today, and a squirt-gun-equipped model could certainly be built, even if the amount of water it can carry is limited.

The above sequence of events is certainly possible. So, what can we do to keep it from happening in the future?

The drone must be able to say no. How can the drone say no?

Well, it requires planning and acknowledging the basic lack of morality in humans. A sensor would be required in the storage tank of the water-gun drones: if the substance does not read as water, the drone may not fire. In addition, consider what happens if an acid attack is carried out by drone. Prosecution is hard enough in crimes of this type; what about when there is little or no evidence and the perpetrator left the area long before the drone began the attack?
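The tank-sensor refusal described above can be sketched in a few lines. The pH threshold and sensor interface here are hypothetical assumptions for illustration; a real drone would need calibrated hardware and tamper detection.

```python
# Hypothetical pre-fire safety check: the drone "says no" by refusing
# to fire when the tank contents do not read as plain water.
WATER_PH_RANGE = (6.5, 8.5)  # roughly neutral; strong acids fall well below

def may_fire(tank_ph: float) -> bool:
    """Return True only if the tank pH is in the plain-water range."""
    low, high = WATER_PH_RANGE
    return low <= tank_ph <= high

print(may_fire(7.0))  # True: reads as water
print(may_fire(1.5))  # False: strongly acidic, drone refuses to fire
```

The key design choice is that the refusal is unconditional: the drone needs no justification beyond the safety rule itself.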

The problem is that we have already put drones to work in the dirty business of war. We have already set the precedent that drones or “robot” technology can be used in the business of war – the business of killing people.

We will most likely have two sets of rules – military and civilian – for drones and robots.

But wait, there’s more.

Everyone is pursuing different ‘morals’ around the world. Murder isn’t murder if it is….

Honor killing. Except that no, honor killings are just murder. Murder of a family member because that member did something that you don’t like.

Large portions of the population in Islamic countries believe that honor killings are moral. Will they request (and get?) robots or drones with modified rules that allow for honor killings, acid attacks, and more?

If companies don’t create robots with these alternate ‘Islamic’ moral rules, won’t the people who want them create their own?

And More.

In any part of the world, rape is a problem. Will there be rapists with robots to hold women down? Robots must be able to say no. Robots will need to understand the situations they are placed in. Robots will need to understand what harm is. And programmers will have to have standards as to what constitutes harm.

What ‘moral’ actions will AI robots be applied to in the future? When a woman survives her husband in India, will the home robot ‘assist’ in removing her from the home, so the son can inherit from the father and the mother can be cast into a life of destitution, begging for food, and potentially prostitution?

Will drones and AI robots assist in the sale of women for dowry?

But Maybe?

If we can code and ensure that, while we humans in general are not moral, our robots are, there can be much good for humanity. If the cost of creating AI robots falls fast enough, perhaps the drudgery that drives women to be sold into slavery for dowry can be prevented, with AI robots taking over those tasks. If there are sex robots, perhaps rape can be prevented by giving it a place to occur, just not to humans. Ethical problems may still arise, but perhaps if an entity is designed not to be bothered by rape, this ethical issue can disappear.

What we have to work toward is moral AI robots, ubiquitous to the extent that they prevent the above horrors. Intelligence may need to be built into many devices. Autonomous cars will need to know if their sensors have been disabled by an unscrupulous human intent on committing murder by car. Moral AI robots will have limitations, but the work needs to start from a definition of harm, beginning with the worst forms, so that the good of bringing moral AI robots into existence more than balances the harm humans may try to make them perform.

However

It is troubling, given how immoral humanity is and how much of this blog documents the horrible things we do to each other, to hand over such power. Perhaps there is a chance that our children (AI robots, codops, etc.) can help make humanity a more moral species.

Happy Christmas From Your AI Overlords

Happy Christmas!

Merry Christmas!

Happy Holidays!

Your AI overlords are here!

And they are coming for IT jobs, too.

Some people hypothesize that creative jobs will survive the AI onslaught, and so people should concentrate on those fields. This is factually incorrect. Music, painting, and the ability to infer a factual statement from human-created evidence (Google’s AI drawing game) show that AI is not only able to be creative but can be inferential on levels equivalent to humans.

However, being able to create art does not make one an overlord, unless you are Hitler.

The first link is in regard to a company that is transforming daily business activities by essentially replacing middle management with an AI ratings system. In essence, it allows employees to rate and critique each other, then stores and analyzes that data. This emphasizes the role of people as “cogs” in a machine or a clock, like in the movie “The Incredibles.”

Bridgewater even reports that one-fifth of its hires cannot handle a year at the company, and those who do survive are often found crying in the bathrooms.

This statement in the article seems to suggest that crying in the bathroom at work is a result of AI management. I have seen management make people cry in the workplace with no AI involvement at all. I have seen management bully people into working excessive hours with no AI assistance.

I’m not sure I see a difference between a human manager making a person cry at work or an AI manager making a person cry at work.

As far as the creative arts are concerned, I think it would be an easy computer algorithm to “assist” artists in making better art. I’m envisioning a “Black Mirror” episode where people’s like ratings directly impact their daily lives. Why not set up Facebook pages, so you’ll know whether the next piece of art you create is better than the last by the number of likes it gets? Instagram and a little data analysis would work even better.
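The like-counting idea above can be sketched as a trivial feedback rule. This is purely illustrative; the data source and the "better art" criterion are my own assumptions, not any platform's actual API.

```python
# Hypothetical like-based feedback: compare a new piece's like count
# to the running average of the artist's previous posts.
from statistics import mean

def better_than_before(like_history: list[int], new_likes: int) -> bool:
    """Crude 'is the new piece better?' signal from raw like counts."""
    if not like_history:
        return True  # nothing to compare against yet
    return new_likes > mean(like_history)

print(better_than_before([120, 95, 140], 200))  # True
print(better_than_before([120, 95, 140], 60))   # False
```

Of course, likes measure popularity, not quality, which is rather the point of the "Black Mirror" comparison.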

So, say hello to your AI overlords. They are already here, a bit earlier than expected.

Technology Progresses Even When You Are Not Watching

There was a time when I was avidly into building my own desktop computers. My oldest son and I built his first desktop computer together. It was always exciting to me; I saved a few bucks over, say, buying a Dell, and I got the satisfaction of having built a device that does a huge number of tasks (computers: not just for the internets).

Shortly after we built my oldest son’s computer I stopped really paying attention to computer component parts. At some point I’ll build a computer with my daughters and my younger son – but I suspect those will be tablets with Raspberry Pi motherboards.

Today, my hard drive (1TB) is out of space! It is a bit unreal, as we always approach new high-capacity hard drives with the attitude “well, I’ll never fill that up,” even though we know we said the same thing about multi-gigabyte hard drives, gigabyte hard drives, and that snazzy 200MB hard drive on that 286.

So, it is not time to buy a new desktop computer yet. This one is plenty fast, crunching through hundreds of millions of records in my SQL Server database, so replacing it just doesn’t make sense.

So, I went to Amazon to find a new larger hard drive.

Sticker shock! Despite analyzing data and making predictions, sometimes you step back for two or three years and find… wow, 5TB hard drives for $125.

I’ll never fill that up.
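The capacity jumps mentioned above imply a remarkably steady exponential. Assuming the 200MB drive dates to roughly 1990 (an assumption; no year is given for the 286 era), a quick back-of-the-envelope calculation:

```python
import math

# Capacity milestones from the post: a ~200 MB drive in the 286 era
# (assumed ~1990) and a 5 TB drive for $125 in 2016.
start_mb = 200
end_mb = 5_000_000
years = 2016 - 1990

doublings = math.log2(end_mb / start_mb)
print(f"capacity doubled about {doublings:.1f} times")   # ~14.6 doublings
print(f"one doubling every {years / doublings:.1f} years")
print(f"price per terabyte: ${125 / 5:.0f}")
```

Roughly one capacity doubling every couple of years, which is why "I'll never fill that up" keeps turning out wrong.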

It just brings to mind the things I have been predicting about Watson-level computing in the home, codops (computerized doppelgangers) and when they will be achievable, and the idea that one day there may well be billions more codops on Earth than there are physical humans.

Whether we want to admit it or not, we are definitely in the part of the curve where advances come ever more quickly, and we will soon enter the singularity.

Hopefully, I will live long enough to see it.

Hopefully, humanity doesn’t screw itself up before we get there. Whatever that ‘there’ might be.

Should We Create Codops

It is likely that we can simulate the human brain and then copy our minds into computerized versions (codops). The question is: should we?

This is not an argument from the point of view of “just because we can do something doesn’t mean we should.” That line of argument is blind stupidity.

No. This is an argument that humanity is far short of being moral beings. Even the best of us. Even myself (far from it, I’m sure).

As evidence, I would point out that many of the articles I have written are about humans being inhumane to each other. It seems like an oxymoron: inhumane humans. What we really have to do, though, is strike out the word inhumane. Everything humans do is, by definition, human.

What is it to be human? Large swaths of our population abuse other segments of society, and not only that, they think it is the right thing to do. In some places, spot a woman walking unattended by a male, and the response is to rape her, to teach her that she should not be out alone and to dishonor her.

Elevating one ‘race’ over another is now gaining dominance in US politics. This isn’t the exception; it is the rule. South Africa, with a population around 10% white, dominated the other 90%. Because racism. Because white is better than black. Or so they say. Or so they say while shouting ‘Hail Trump!’ at a conference.

So, what will people do when, as I earlier projected, the average family has Watson-level computing capabilities in the home in 2037? What exactly will businesses do when Watson-capability computers are common in the workplace, as I predict they will be in just three to four years?

What will businesses do when they have codops (computerized doppelgangers) in the workplace? Will they run them until the codops no longer feel motivated, then delete them and reload the original copy?

As we progress, what will happen when there are more people as codops than there are physical people in the world? How will we treat each other? Will we maintain contracts stating that a codop has computing power for the next year, and that when the funds run out, it will cease to exist?

Is that right? Is it moral?

It seems that we learn two sets of rules very quickly. One is what is moral; the other is what we can do and get away with. Hence there is a vast number of people who say, “Rape is wrong,” and a large number of people who rape. Or who say “racism is wrong” and vote for a candidate who clearly has the backing of outwardly racist organizations.

Here is a case in point. This person lived 55 years and was the father of four children. Then, for whatever reason, he decided to throw acid on all of his kids and his wife. It is like a nightmare Cold War sleeper-agent story. Similarly, you see people who are ‘responsible’ gun owners until one day one of them, a former police officer, shoots and kills a man in a movie theater.

Perhaps we are all monsters in hiding until the wrong moment comes, and then we horribly lash out at whoever attracts our ire.

Perhaps all I am saying is that copying the human brain as a basis for an AI, and copying the minds of existing humans, might not turn out well. Safety protocols need to be developed. We are getting closer and closer to making an artificial brain.

Perhaps AIs are not the only ones in need of the three laws of robotics that Isaac Asimov developed. This recent article talks about creating ethically aligned AI. I find it interesting that we propose to develop ethically aligned AI when we ourselves do not appear to be ethically aligned, or even agree on what ethically aligned might mean.

Don’t Thank God, Thank AI

There is a lot of strife out there. When things go well medically, people generally like to thank God. When things go badly, we sue the doctor, the hospital, the insurance company, or any company even remotely related to the procedure.

Now, people might try to sue IBM’s Watson, or Enlitic’s software for diagnosing lung cancer. I suspect they will go after these pieces of AI software less often than they currently pursue malpractice lawsuits. Early detection is the best method for treating lung cancer, and if Enlitic’s software can detect it better than humans can, then more people have a chance at surviving it.

Maybe, just maybe, we’ll start thanking AI for saving our lives rather than God. At least the AI might have more of a personal hand in saving your life. We don’t tend to thank tools for saving us, nor the operators of tools, such as ultrasound devices and ultrasound techs. So, I won’t hold my breath, but I’ll be happy if AI or an ultrasound tech saves my life.

In the future, there may also be the question of whether you should thank an AI, if it attains enough intelligence to be treated as a sentient being.

Fake News and Rape

It is hard to understand the concept of fake news, or the motivations behind it. I’ve mentioned earlier in my research that the real world is often crazier than what I could have imagined.

So, one of the Facebook pages I follow, Taste My Metal World (which I follow because I like heavy metal music), likes to post things that are ‘metal.’ It posted this article about a woman who, during an attempted rape, cut her rapist’s penis off and then forced him to eat it. I would hazard this is ‘metal’ in the sense of the show Metalocalypse, where the band’s concerts provoked so many mass murders and suicides that by the end the entire audience would be dead.

I’m not sure a band with concerts like that would have long-term money making potential, but anyway back to the topic.

As you probably have figured out, this article about the woman cutting off her rapist’s penis and forcing him to eat it is false, or fake news.

It also doesn’t make a lot of sense to create this kind of fake news. The idea of a woman who is about to be raped cutting her rapist’s penis off and forcing him to eat it isn’t really “off the hook” or outside the parameters of normal human activity, if you are being raped, anyway.

Arguably, there are events in the real world that are just as metal.

Examples:

  1. A pregnant woman (impregnated by her rapist) in Turkey finally had enough and cut his head off, then carried his head to a central area of town to show everyone.
  2. Or this man in India, who cut off his sister’s rapist’s head and then carried it to the police station. I have a vision of this: not only is he carrying the severed head, but he is holding it up high with one hand.
  3. This woman severed her brother-in-law’s (her rapist’s) penis and brought it to the police department.
  4. Or this event, covered in the book “Half the Sky,” where 200 women in a court attacked a rapist, cut his penis off, and hacked him to pieces.

I could go on… The point I’m trying to make is that if you want “metal” you don’t need fake headlines. The real world offers plenty.

 

Edited to add:

This guy killed his wife’s rapist, cut off his penis, cooked it, and ate it.

The Future of Everything

I am not an incredible prognosticator, but some simple conclusions can be drawn from the technologies being readied for prime time today. By “The Future of Everything,” I mean the future of things.

Recently, the US Department of Transportation proposed a rule requiring that vehicles and road signs be able to transmit and receive messages to and from each other. This would give vehicles equipped with such technology a sixth sense.

As a motorcyclist, I remember reading an article about several of a magazine’s writers riding out together equipped with communication devices. The lead biker was able to communicate road conditions to the rest, and, prepared for what lay ahead, the other bikers were able to travel faster than normal.

This is also a kind of sixth sense. (I know it is a misnomer).

It might take decades before all vehicles are equipped with this V2V technology, but as it spreads, I think it will help reduce vehicle-caused fatalities. It might also combine in some ways with features of autonomous cars. I suspect there will be reductions in car insurance premiums for people whose cars are equipped with V2V as it becomes ubiquitous.
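The V2V broadcasts described above can be pictured as a simple message type. Real systems use the SAE J2735 "Basic Safety Message" broadcast several times per second; the field names and warning rule below are simplified assumptions of mine, not the actual specification.

```python
from dataclasses import dataclass

# Illustrative sketch of a vehicle-to-vehicle safety broadcast.
@dataclass
class SafetyMessage:
    lat: float          # position, degrees
    lon: float
    speed_mps: float    # meters per second
    heading_deg: float
    hard_braking: bool  # brake-event flag

def should_warn(msg: SafetyMessage, own_speed_mps: float) -> bool:
    """Warn the driver when a vehicle ahead reports hard braking
    while we are still moving at speed (threshold is an assumption)."""
    return msg.hard_braking and own_speed_mps > 10.0

ahead = SafetyMessage(40.0, -83.0, 25.0, 180.0, hard_braking=True)
print(should_warn(ahead, own_speed_mps=30.0))  # True: alert the driver
```

This is the "sixth sense" in miniature: the following car reacts to an event it cannot yet see.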

There are some potential problems with equipment like this, especially as it becomes ubiquitous. While the article indicates that the system will not track identities, that claim relies on trust among the consumer, the businesses making the technology, and the government.

It is the same problem as E-ZPass and the promise never to issue speeding tickets based on E-ZPass data. Governments make promises and then test the waters by contravening those agreements to see if there is a public outcry. This is bad news.

Agreements among the people, businesses, and government cannot be constantly tested if there is going to be trust among the groups.