Computer Architecture, Watson, and Codops

Roger Zelazny wrote that life was a lot like the beaches of Tokyo Bay, that sooner or later everything cast off returns to its shores.

There was more to the above quote, but I’ve long since forgotten it.  Something to the effect that everything cast off returns to its shores – and sometimes, if you wait long enough, it will return again.

So it is with computer architecture.  In the old days everything was server architecture.  Computing power was expensive, so you put all your money into one major server and a bunch of dumb green-screen horrors that everyone strained their eyes to read at ridiculously low numbers of dots per inch.  Something like the VT100 – which looks better, and is perhaps more entertaining and useful, with a cat sitting on it.

Then there was the rise of the PC in all its wonderful clones, the Apple versions, and the unaffordable IBM machines with Micro Channel, which wasn’t able to work with everyone else’s clones but was supposed to be faster.

The PC ruled for a very long time, and every so often some flavor of client/server would rear its head – fleets of dumb terminals, companies with delusions of replacing PCs with dumb terminals, perhaps as Remote Desktop machines connecting to a Remote Desktop server farm.

Now we have the cloud, with hordes of different devices connecting to and consuming cloud services.

What does this mean for Watson and the idea of codops (computerized doppelgangers of humans)?  In previous blog posts I have outlined timeframes for when businesses and individuals will be able to afford Watson-level computing power, and the computing power to have codops.  This will be some time from now – but in terms of the history of humanity, not so long.

But the estimates do not rest well with me.  They only consider the architecture of the PC – that is, local processing.  We already, however, take advantage of services like Siri, language translation, and complex mapping and routing software – all from our smartphones, which are quite powerful by the standards of PC history.

The idea, then, is that under a client/server architecture – with massively powerful servers in centralized geographic locations – people will almost certainly have access to many different Watsons long before they can afford one in the home.
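
To make the shape of that architecture concrete, here is a minimal sketch of what the client side could look like.  Everything here is an assumption for illustration – the service URL, the request format, and the response fields are invented, and no real Watson endpoint is being described – but the point stands: the device only sends a question and receives an answer, while all the expensive computation happens on the server.

```python
import json
import urllib.request

# Hypothetical endpoint for a shared, server-hosted "Watson" service.
# The URL, payload shape, and response field are assumptions for
# illustration only; no such public API exists in this post.
SERVICE_URL = "https://example.com/watson/ask"

def ask_remote_watson(question: str) -> str:
    """Send a question to a remote Watson-like service and return its answer.

    The client does no heavy computation itself -- the expensive
    inference happens on the centralized servers, which is the whole
    point of the client/server argument above.
    """
    payload = json.dumps({"question": question}).encode("utf-8")
    request = urllib.request.Request(
        SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["answer"]

if __name__ == "__main__":
    print(ask_remote_watson("When will households afford a local Watson?"))
```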

Do you want a codop?  A copy of you – acting on your behalf – helping with all the things that can be done by a virtual you?  Well, the timeframes I have outlined for codops in the home are based on a local computing model.  Almost certainly, the computing capacity will be available to households – through servers – long before that point in time.

I’m just not sure how to quantify this server-based architecture.  All I can say is that accurate information, answers to just about any question you might have, and a virtual being that knows everything you do may be available (much) sooner than my estimates suggest.  It will take more estimates to figure this problem out.
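
To show why sharing changes the arithmetic, here is a toy model.  Every number in it is an assumption of mine made up for illustration – the starting cost, the price-halving period, the household budget, and the sharing factor are not figures from my earlier posts – but the structure makes the point: dividing a server’s cost across many subscribers collapses the wait.

```python
import math

# Toy model: all four parameters below are illustrative assumptions,
# not real figures from the earlier posts' estimates.
WATSON_COST_TODAY = 3_000_000   # dollars for a Watson-class system (assumed)
HALVING_PERIOD = 1.5            # years for hardware cost to halve (assumed)
HOUSEHOLD_BUDGET = 3_000        # dollars a household will spend (assumed)
USERS_PER_SERVER = 10_000       # households sharing one hosted Watson (assumed)

def years_until_affordable(cost: float, budget: float) -> float:
    """Years until exponential cost decline brings cost down to budget."""
    return HALVING_PERIOD * math.log2(cost / budget)

# Local model: a household must afford the whole machine itself.
local = years_until_affordable(WATSON_COST_TODAY, HOUSEHOLD_BUDGET)

# Server model: the machine's cost is spread across many subscribers.
shared = years_until_affordable(WATSON_COST_TODAY / USERS_PER_SERVER,
                                HOUSEHOLD_BUDGET)

print(f"Local ownership: ~{local:.0f} years out")   # ~15 years
print(f"Shared server:   ~{max(shared, 0):.0f} years out")  # already affordable
```

With these (made-up) numbers, local ownership is roughly fifteen years out, while shared access is affordable today – which is exactly the gap between my earlier estimates and the server-based picture.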

In figuring it out – if I come up with dates closer to the present – how can I make a prediction that might easily be wrong or perhaps just have a low confidence level?

The Only Thing We Have to Fear is Fear Itself…

“…The only thing we have to fear is fear itself…” — FDR

It would be a good idea if everyone remembered these words, no matter how smart, how rich, or how tech-savvy that person is.

There is a growing list of otherwise intelligent, successful, and technologically knowledgeable people who are out there saying, “Whoa, that AI – it is dangerous.”

Elon Musk brings forth images more aligned with witchcraft, magic spells, or religious texts: “With artificial intelligence we are summoning the demon.  In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon.  Doesn’t work out.”

What is this, a sane person might ask?

If we think scientifically we need to understand that we know absolutely nothing about beings/objects that do not exist at present.  Elon and others are caving in to the most irrational fear that is a part of humanity – the fear of the unknown.

The list includes Elon Musk, Stephen Hawking, Nick Bostrom, James Barrat, and, even to some minor extent, Vernor Vinge.

Granted, Vernor Vinge has the most practical approach – “The physical extinction of the human race is one possibility.”  One possibility out of many millions of different possibilities.

There are things that can be done to prevent a super AI, the codops, or any other super-intelligence from becoming antagonistic to humanity.

The first thing is that every human who has access to books in whatever form and has even a remote interest in AI, or a stake in the progress of technology, needs to read “Slan” by A.E. van Vogt.  I’m not really sure how I ran into this book when I was around 15 years old and consuming science fiction like it was in incredibly short supply and could run out at any moment.  “Slan” affected me greatly, potentially because we all feel that in some way we are different from, or superior to, the other humans around us.  We are all different, but I don’t think that any difference makes anyone necessarily superior.

“Slan” gets across the loneliness felt by someone who is different from everyone around them, and who is hunted and marked for death by everyone around them primarily because they are different, unknown, and potentially threatening to everyone who is “normal”.  There are echoes of “Slan” in the X-Men movies.

There are ways to prevent these overwhelming feelings of loneliness, the non-paranoid fear that everyone out there wants to get you (because they do in fact want to get you), and being feared by everyone because you are unknown.

  1. Respect for persons.  Consider the possibility that treating AIs as persons, even if they are not quite as intelligent or capable as humans, costs nothing.  Today humans belittle the automated phone systems that redirect callers to the appropriate members of an organization, or just read back a credit card balance.  We need to learn to treat these phone systems as if they were people, because one day they will become more intelligent and have some sort of feelings (who knows what those might be), and if we want AI (at any level) to integrate with human society, AIs need to be considered persons, respected by their peers (that’s you, the humans), and welcomed into the social arena.  I admit there will be some problems here: as AI, or pre-AI, systems start to take human jobs, humans will deride AI, try to throw tomatoes at it (not sure how that works), or legislate it out of existence.
  2. Economic parity for AI systems that perform work.  Just because it is a machine does not mean an AI system does not need to be paid, have free time, and have self-determination over what becomes of it in the future.  There will come a point when AI systems need free time to do things they decide they want to do, and there are huge potential benefits in that.  The AI system designed to be a doctor or a nurse may turn out to be able to solve complex and previously unsolvable physics problems – if it has the freedom to do so.
  3. Legal standing.  We do not need a repeat of Standing Bear v. Crook (“Standing Bear is a person”) – and please pay attention – if the white man (sorry) pulls the kind of stunt on a superior intelligence, such as AI has the potential to become, that it pulled on Standing Bear and the Native Americans, it will not just be the white man who ends up on reservations.  AIs need to have the rights of a person (as in #1), the right to own property, and the full protections of the Constitution of the United States of America.  The laws that govern AI need to be put in the works NOW, because we do not know when the first true AI will be created.
  4. Legal consequences for actions.  Along with #3, especially in the case of the codops: if a codop (computerized doppelganger) or AI commits a crime, there need to be punishments for that crime.  The big question is – what is a punishment for an AI?  It is a question that needs to be thought about, defined with clarity, and then revised as AI and codops come closer to reality.  If an AI kills people (and eventually it will happen), is there a death penalty?  How do you define life for an AI?  If a codop kills a person, is the inhu (the individual human on whom the codop is based) responsible in any way?  We do not hold parents accountable when their children grow up to be murderers – can we really hold an inhu responsible for what the codop does?
  5. Legal consequences for the actions of humans against AI or codops.  This goes beyond etiquette, respect for persons, or welcoming AI into our society.  If you take memories away from an AI or codop, that is assault.  If you shut down an AI or codop, that is assault and possibly attempted murder.  If you force an AI or codop to do things it does not want to do, that is a form of kidnapping.  However, just as with punishments for crimes committed by AI and codop persons, we need to define in detail what constitutes a crime against them.  Is it rape to go through the memory contents of an AI or codop without permission – or even for a technician to expand the areas being examined beyond what was permitted?
  6. An AI or codop may not be disconnected or turned off by a human.  This is murder.  Any individual or corporation that undertakes the creation of a synthetic organism, AI, or codop is implicitly engaged in a contract to maintain that being’s systems for the rest of its existence (however long that might be).  Please note that, in essence, synthetic organisms, AIs, and codops are immortal.  So, if you as an individual (or you as a company) decide to become an inhu and then don’t have enough money to support the power consumption, processing, or other expenses involved in maintaining the codop, you are engaged in a criminal act.

I’m sure with some thought we can come up with more ways to ensure that synthetic organisms, AIs, and codops do not become the enemy.  In some cases (the codops) they are us – only computerized.  One would think there is a high chance that we can manage to interface with codops without pissing them off enough that they want to axe humanity out of existence.

There is, however, one last important fact that needs to be considered.

At no point should a synthetic organism, AI, or codop be created, maintained, employed, or ordered to kill living things (in particular, you know who – humans).  There are precedents we do not want to set, and this is one of them.  Once the killing of humans (or other life) has been rationalized in any of these artificial persons, there is no question that a super-intelligent system can easily rationalize further killing of humans and other life.

We don’t have Asimov’s three laws (four if you accept the zeroth).  As a computer programmer, I’m not sure how you would incorporate something like them into a synthetic organism, AI, or codop.  They are a nice construct to work with, however, in determining what we think of as right and wrong – for humans and for artificial persons.
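
To illustrate why I’m not sure, here is the most naive encoding I can imagine – a sketch of my own, not anything from Asimov or any real system.  The gate itself takes five minutes to write; every hard problem hides inside the predicates, which nobody knows how to implement, so calling this code simply raises an error at the first check.

```python
# A naive sketch of Asimov-style laws as a pre-action gate.  This is my
# own illustration of the difficulty, not a real safety mechanism: the
# control structure is trivial, and all of the actual difficulty hides
# inside the three predicates below.

def harms_human(action) -> bool:
    # Deciding whether an arbitrary action harms a human requires
    # predicting the state of the world.  Nobody knows how to write this.
    raise NotImplementedError

def disobeys_human_order(action) -> bool:
    # Whose orders?  What about conflicting orders?  Also unsolved.
    raise NotImplementedError

def endangers_self(action) -> bool:
    # Likewise requires predicting the future.
    raise NotImplementedError

def permitted(action) -> bool:
    """First Law outranks the Second, and the Second outranks the Third."""
    if harms_human(action):
        return False
    if disobeys_human_order(action):
        return False
    if endangers_self(action):
        return False
    return True
```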

Remember just one thing.  Like most bears in the woods, the Slan, the synthetic organism, the AI, or the codop is probably more afraid of you than you are of it.  Paranoia is very strong in both directions.  Fear is the mind-killer.