The Only Thing We Have to Fear is Fear Itself…

"The only thing we have to fear is fear itself…" — FDR

It would be a good idea if everyone remembered these words, no matter how smart, how rich, or how tech-savvy they are.

There is a growing list of otherwise intelligent, successful, and technologically knowledgeable people out there saying, "Whoa, that AI – it is dangerous."

Elon Musk conjures images more aligned with witchcraft, magic spells, or religious texts: "With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like – yeah, he's sure he can control the demon. Doesn't work out," said Musk.

What is this, a sane person might ask?

If we think scientifically, we have to admit that we know absolutely nothing about beings or objects that do not yet exist.  Elon and the others are caving in to the most irrational fear in humanity – the fear of the unknown.

The list includes Elon Musk, Stephen Hawking, Nick Bostrom, James Barrat, and, even to some minor extent, Vernor Vinge.

Granted, Vernor Vinge has the most practical approach – "The physical extinction of the human race is one possibility."  One possibility out of many millions of different possibilities.

There are things that can be done to prevent a super AI, the codops, or any other superintelligence from becoming antagonistic to humanity.

The first thing is that every human who has access to books in whatever form, and has even a remote interest in AI or a stake in the progress of technology, needs to read "Slan" by A.E. van Vogt.  I'm not really sure how I ran into this book when I was around 15 years old and consuming Science Fiction like it was in incredibly short supply and could run out at any moment.  "Slan" affected me greatly, potentially because we all feel that in some way we are different from, or superior to, the other humans around us.  We are all different, but I don't think any difference makes anyone necessarily superior.

"Slan" gets across the loneliness of someone who is different from everyone around them, and who is hunted by everyone around them primarily because they are different, unknown, and potentially threatening to everyone who is "normal".  There are echoes of "Slan" in the X-Men movies.

There are ways to prevent these overwhelming feelings of loneliness, the non-paranoid fear that everyone out there wants to get you (because they do in fact want to get you), and the experience of being feared by everyone because you are unknown.

  1. Respect for persons.  Consider the possibility that treating AIs as persons, even if they are not quite as intelligent or capable as humans, costs nothing.  Today humans belittle the automated phone systems that redirect people to the appropriate members of organizations, or that just report credit card balances, etc.  We need to learn to treat these phone systems as if they were people, because one day they will become more intelligent and have some sort of feelings (who knows what those might be), and if we want AI (at any level) to integrate with human society, they need to be considered persons, respected by their peers (you people, the humans), and welcomed in the social arena.  I admit there will be some problems here: as AI, or pre-AI systems, start to take human jobs, humans will start to deride AI, try to throw tomatoes at AI (not sure how that works), or legislate AI out of existence.
  2. Economic parity for AI systems that perform work.  Just because it is a machine does not mean that an AI system does not need to be paid, have free time, and have self-determination regarding what will become of it in the future.  There will come a point in time when AI systems will need free time to do things they decide they want to do.  There are huge potential benefits to AI having free time.  The AI system designed to be a doctor or a nurse, or whatever, may turn out to be able to solve complex and previously unsolvable physics problems – if it has the freedom to do so.
  3. Legal standing.  We do not need a repeat of Standing Bear – "Standing Bear is a person" – and please pay attention: if the white man (sorry) pulls a stunt like the one it pulled on Standing Bear and the Native Americans on a superior intelligence such as AI has the potential to become, it will not just be the white man who ends up on reservations.  AIs need to have the rights of a person (as in #1), the right to own property, and the full protections of the Constitution of the United States of America.  The laws that govern AI need to be put in the works NOW, because we do not know when the first true AI will be created.
  4. Legal consequences for actions.  Along with #3, especially in the case of the codops: if a codop (computerized doppelganger) or AI commits a crime, there need to be punishments for that crime.  The big question is – what is a punishment for an AI?  It is a question that needs to be thought about, defined with clarity, and then revised as AI or codops come closer to reality.  If an AI kills people (and eventually it will happen), is there a death penalty?  How do you define life for an AI?  If a codop kills a person, is the inhu (the individual human on whom the codop is based) responsible in any way?  We do not hold parents accountable when their children grow up to be murderers – can we really hold an inhu responsible for what their codop does?
  5. Legal consequences for the actions of humans against AI or codops.  This goes beyond etiquette, respect for persons, or welcoming AI into our society.  If you take memories away from an AI or codop, that is assault.  If you shut down an AI or codop, that is assault and possibly attempted murder.  If you force an AI or codop to do things it does not want to do, that is a form of kidnapping.  However, just as we must determine punishments for crimes committed by AI and codop persons, we need to define in detail what constitutes a crime committed against them.  Is it rape to go through the memory contents of an AI or codop without permission – or even for a technician to expand the areas being examined without permission?
  6. An AI or codop may not be disconnected or turned off by a human.  This is murder.  Any individual or corporation that engages in creating a synthetic organism, AI, or codop is implicitly engaged in a contract to maintain the systems of that synthetic organism, AI, or codop for the rest of that being's existence (however long that might be).  Please note that, in essence, synthetic organisms, AIs, and codops are immortal.  So if you as an individual (or you as a company) decide to become an inhu and then don't have enough money to support the power consumption, processing, or other expenses involved in maintaining the codop, you are engaged in a criminal action.

I’m sure with some thought we can come up with more ways to ensure that synthetic organisms, AI, or codops do not become the enemy.  In some cases (the codops) they are us – only computerized.  One would think there is a high chance that we can manage to interface with codops without pissing them off enough that they want to ax humanity out of existence.

There is, however, one last important fact that needs to be considered.

At no point should a synthetic organism, AI, or codop be created, maintained, employed, or ordered to kill living things (in particular, you know who – humans).  There are precedents we do not want to set, and this is one of them.  Once killing humans (or other life) has been rationalized in any of these artificial persons, there is no question that killing humans and other life can then be easily rationalized by a super-intelligent system.

We don't have Asimov's three laws (four if you accept the zeroth).  As a computer programmer, I'm not sure how you would incorporate something like them into a synthetic organism, AI, or codop.  They are a nice construct to work with, however, in determining what we think of as right and wrong – for humans and for artificial persons.
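To make the programmer's difficulty concrete, here is a toy sketch (my own illustration, not anything from Asimov and not a real safety proposal) of what "incorporating the laws" might look like as priority-ordered precondition checks.  The hard part is hidden inside the predicates: software can check a flag, but deciding whether an action actually "harms a human" is exactly the unsolved problem.

```python
# Toy sketch: Asimov-style laws as priority-ordered precondition checks.
# The action is just a dict of flags; in a real system, computing these
# flags (does this action harm a human?) is the open research problem.

def violates_first_law(action):
    # "A robot may not injure a human being" reduced to a boolean flag.
    return action.get("harms_human", False)

def violates_second_law(action):
    # "A robot must obey orders" -- checked only after the first law,
    # since the laws are meant to be applied in priority order.
    return action.get("disobeys_order", False)

def permitted(action):
    """Return True only if the action passes each law, highest priority first."""
    if violates_first_law(action):
        return False
    if violates_second_law(action):
        return False
    return True

print(permitted({"harms_human": True}))    # blocked by the first law
print(permitted({"disobeys_order": True})) # blocked by the second law
print(permitted({}))                       # nothing flagged, so allowed
```

The sketch "works" only because the flags are handed to it; everything that makes the laws hard – defining harm, foreseeing consequences, resolving conflicts between orders – has been assumed away, which is precisely why a programmer should be skeptical that the laws can simply be coded in.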

Remember just one thing.  Like most bears in the woods, the Slan, synthetic organisms, AIs, or codops are probably more afraid of you than you are of any of them.  Paranoia is very strong in both directions.  Fear is the mind-killer.
