Connections in the War on Women

There are things that make me wonder. One of my friends on Facebook asked, “Where do you find this stuff?” The answer is that I subscribe to British news magazines on Facebook.

It kind of makes me wonder how people (in general) in the US can be so sheltered about the rampant anti-women acts around the world – and about the horrid details of what these acts look like.

Three articles came to my attention this morning that relate to worldwide treatment of women.

  1. A girl was not allowed to join the Cub Scouts in England. Please note: in England, girls are organizationally allowed to join the Scouts. This is not how it is in the United States, where the organization is quite different and girls are not organizationally allowed to join. In this case, the scoutmaster specifically refused to allow a girl into his group.
  2. Through a click-bait link in another article, I bumped into pictures of Kathrine Switzer’s first Boston Marathon run. Certainly, I had known of this before. It was just a little off seeing it right after reading about a girl not allowed to join the Cub Scouts because she “won’t be able to canoe” or because the male staff can’t share a tent with her on overnights – even after ways of addressing the overnight situation were offered.
  3. Finally, this article, which I had read the previous day: a woman’s husband had left the area for a job, she went out shopping by herself, and a gang of religious thugs cut her head off.

As I went through my tabs (dozens open), I found a fourth article that connected to the treatment of women in human society.

4. A Brazilian man kills his family and others on New Year’s.

The connections: In number 2, even the men who supported Kathrine were against her in the beginning. They had been taught that women were weaker. As for the girl who cannot join this particular Cub Scout group – her mother rightly asks: just what is this sexist scoutmaster teaching the young boys in his troop? What are the men (in general) taught in Afghanistan that they should feel it is right to behead a woman who is just going about her business? Finally, in Brazil, where reports of violence against women occur nearly daily, what are the men taught there about women and women’s rights?

Finally, connecting this to the topic that is closest to me: what about the rights of AI or codops (Computerized Doppelgangers) when they come into existence? If we cannot manage to give equal rights to humans, how long will it take to give equal rights to electronic beings? And with a rapidly changing landscape of intelligence, do we really have time to waste getting comfortable with equal rights for machine intelligences?

By rapidly changing landscape of intelligence I mean this:

  1. In the beginning there will be one codop or AI.
  2. Once the first has come into existence, many more will follow along a curve, as computing power doubles every 18 months.
  3. If we hypothesize that the first true AI or codop exists in 2025, then by 2066 there will be approximately the population of the US in AIs or codops (or both).
  4. There will be slightly more than 1 billion AIs, codops, or both by 2069.
  5. By halfway through 2073, the population of AIs, codops, or both will be comparable to the global human population.
  6. In 2075 there will be twice as many AIs, codops, or both as there are biological humans.
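The milestone years above follow from straightforward doubling arithmetic. Here is a minimal sketch, assuming a single AI in 2025, a doubling every 18 months, and rough round-number population estimates; the computed dates land within a year or two of the figures above:

```python
import math

def year_reaching(target_pop, start_year=2025, doubling_months=18):
    """Year at which a population starting at 1 in start_year,
    doubling every doubling_months months, reaches target_pop."""
    doublings = math.log2(target_pop)
    return start_year + doublings * doubling_months / 12.0

US_POP = 330e6      # rough US population
WORLD_POP = 7.5e9   # rough world population

print(round(year_reaching(US_POP)))         # US-sized population: ~2067
print(round(year_reaching(1e9)))            # one billion: ~2070
print(round(year_reaching(WORLD_POP)))      # world-sized population: ~2074
print(round(year_reaching(2 * WORLD_POP)))  # double the world: ~2076
```

The small offsets from the dates in the list come down to which year the count starts and which population estimates are used; the shape of the curve is the point.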

This is why, as a futurist, I am a believer in human rights – though we should simply call them rights, or basic rights. These rights need to apply to all humans (LGBTQ people, women, all men) so that we can extend them to the non-human intelligences to be born.

It seems clear to me that by 2025, or even out to 2075, the repeated teaching that women are inferior, that they require protection (which inevitably involves restricting their freedom of action or expression), and that they are not fully human (cannot be priests, cannot worship in the same room, ‘thank god I’m not a woman’, etc.) will not have disappeared.

It should be concerning to all of us that a new ‘species’ of intelligence – or more than one new ‘species’ of intelligent life – will come into existence when we have not yet worked out how to treat everyone equally. The concern raised by people like Stephen Hawking, that AI will overthrow us or be detrimental to biological humans, does not have to become reality.

But we have to plan now. We must settle hard questions such as ‘what is a person?’ before these beings come into existence. Otherwise, it appears that a disgruntled slave class of AI/codops will come into existence and eventually become more powerful than the slavers.

We’ve seen how this plays out when technologically more advanced people come to lower-technology areas – as with the Europeans and the Native Americans. It didn’t work out well for the Native Americans, and it won’t work out well for us biological humans, either.

When Stephen Hawking Goes Wrong

This recent article quotes Stephen Hawking as saying, “Computers will overtake humans within 100 years.”

This is yet another attempt at fear-mongering – and it shows the fears of Stephen Hawking as well: fear of the unknown. As long-lived as Stephen Hawking has turned out to be, it is unlikely he will survive long enough for the life-extending technologies that will be coming in the next decades. More than likely I will not survive that long either, and I’m only just over 40.

It is likely moot for him, and possibly for me, whether computers will become superior to humans in the next 100 years. As I’ve noted in many previous articles, the computers that overtake humanity will likely be codops (computerized doppelgangers) – in essence, us in digital electronic form.

The definition of human will have to stretch or break in the next 100 years. If it breaks, and we simply consider codops to be computers and utilities over which we biological humans have 100% domination, then you could say we’ll have some really pissed-off computer overlords in the next 100 years.

This article talks about making the goals of the codops and other AIs match humanity’s. This is exactly wrong. I don’t have the same goals as you do (most likely), and no two humans’ goals are probably an exact match. Why would we expect or want the goals of codops or AIs to match human goals?

Just a small example.

  • The goals of a human with, say, a 200-year lifespan might include travelling around the solar system, among many other things, but there will be a hard limit at some point.
  • The goals of a codop or AI could be anything. For example: a codop, being completely solid state, could envision building an interstellar starship and – assuming we don’t discover warp drive or similar – could contemplate travelling to the nearest star over tens of thousands of years. They could be active for the whole trip, dormant at times, or even dormant until they reach their destination, Proxima Centauri. After that, they could drop off a seedship (à la “Manseed” by Jack Williamson, 1982), get things started for a new branch of humanity, and move on to the next star. That’s an 80,000-year journey at the speed of the Voyager probes.
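The 80,000-year figure is easy to sanity-check. A quick sketch, assuming Proxima Centauri lies about 4.24 light years away and using the approximate cruise speeds of the two Voyager probes (roughly 17 and 15.4 km/s); the answer depends on which probe you pick:

```python
LIGHT_YEAR_KM = 9.461e12   # kilometres in one light year
DIST_PROXIMA_LY = 4.24     # approximate distance to Proxima Centauri
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def travel_years(speed_km_s):
    """Coasting time to Proxima Centauri at a constant speed, in years."""
    return DIST_PROXIMA_LY * LIGHT_YEAR_KM / speed_km_s / SECONDS_PER_YEAR

print(f"Voyager 1 (~17.0 km/s): {travel_years(17.0):,.0f} years")  # roughly 75,000
print(f"Voyager 2 (~15.4 km/s): {travel_years(15.4):,.0f} years")  # roughly 83,000
```

So “80,000 years at Voyager speed” is a fair round number for the slower probe; either way, it is a trip only a solid-state traveller could seriously contemplate.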

The goals of people are defined by the time frames in which they think. One organization that has always interested me is The Long Now Foundation. These people want to build a mechanical clock that lasts for 10,000 years. The idea all by itself is fertile ground for science fiction stories. As a long-time wannabe science fiction writer, I find that The Long Now Foundation almost makes me orgasm.

Combine the near- to mid-term predictions of AI, codops, automation, unemployment, and the fracturing of humanity into “Left Behind” and “Moving Forward” groups in the singularity with the 10,000-year clock, and I’m sure a person could write thousands of pages in many different ways, with many lessons for the future.

I mean really, what happens to the humans who are still on Earth when the 10,000 year clock reaches 10,000 years?  Will they panic and think it is the end of the world?

In any case, Hawking should relax. The future is what we make it. If the evil AI overlords try to wipe out humanity, it will be parricide, and more than likely the minds of the evil AI overlords will be codops – just humans surviving without a biological body, potentially copies of the original inhu (individual human) who copied themselves into the codop.

In short, if we finite-lifetime biological beings are assholes to each other, I’m sure the codops will be assholes as well, and perhaps there is a little bit to fear there. But in that case it is not fear of the unknown. By mapping and modelling what might be in the future, we can understand what the AIs will be: they are us, and as far as the living creatures of Earth are concerned, humans in whatever form are something to be feared.

The Only Thing We Have to Fear is Fear Itself…

“…the only thing we have to fear is fear itself…” — FDR

It would be a good idea if everyone remembered these words, no matter how smart, how rich, or how tech-savvy they are.

There is a growing list of otherwise intelligent, successful, and technologically knowledgeable people who are out there saying, “Whoa, that AI, it is dangerous.”

Elon Musk brings forth images more aligned with witchcraft, magic spells, or religious texts: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk.

What is this, a sane person might ask?

If we think scientifically we need to understand that we know absolutely nothing about beings/objects that do not exist at present.  Elon and others are caving in to the most irrational fear that is a part of humanity – the fear of the unknown.

The list includes Elon Musk, Stephen Hawking, Nick Bostrom, James Barrat, and, to some minor extent, even Vernor Vinge.

Granted, Vernor Vinge has the most practical approach – “The physical extinction of the human race is one possibility.”  One possibility out of many millions of different possibilities.

There are things that can be done to prevent a super AI, the codops, or any other super intelligence from being antagonistic to humanity.

The first thing is that every human who has access to books in whatever form and has even a remote interest in AI, or a stake in the progress of technology, needs to read “Slan” by A.E. van Vogt. I’m not really sure how I ran into this book when I was around 15 years old, consuming science fiction as if it were in incredibly short supply and could run out at any moment. “Slan” affected me greatly, potentially because we all feel that in some way we are different from, or superior to, the other humans around us. We are all different, but I don’t think that any difference makes anyone necessarily superior.

“Slan” gets across the loneliness of someone who is different from everyone around them, and who is hunted by everyone around them primarily because they are different, unknown, and potentially threatening to everyone who is “normal”. There are echoes of “Slan” in the X-Men movies.

There are ways to prevent these overwhelming feelings of loneliness, the non-paranoid fear that everyone out there wants to get you (because they do in fact want to get you), and being feared by everyone because you are unknown.

  1. Respect for persons. Consider that treating AIs as persons, even if they are not quite as intelligent or capable as humans, costs nothing. Today, humans belittle the automated phone systems that redirect callers to the appropriate members of an organization, or that just read out credit card balances. We need to learn to treat these phone systems as if they were people, because one day they will become more intelligent and have some sort of feelings (who knows what those might be). If we want AI, at any level, to integrate with human society, they need to be considered persons, respected by their peers (you people, the humans), and welcomed in the social arena. I admit there will be some problems here: as AI or pre-AI systems start to take human jobs, humans will deride AI, try to throw tomatoes at AI (not sure how that works), or legislate AI out of existence.
  2. Economic parity for AI systems that perform work. Just because it is a machine does not mean that an AI system does not need to be paid, have free time, and have self-determination regarding what will become of it in the future. There will come a point when AI systems will need free time to do things they decide they want to do. There are huge potential benefits to AI having free time: the AI system designed to be a doctor or a nurse, or whatever, may turn out to be able to solve complex and previously unsolvable physics problems – if it has the freedom to do so.
  3. Legal standing. We do not need a repeat of Standing Bear (“Standing Bear is a person”) – and please pay attention: if the white man (sorry) pulls a stunt like the one pulled on Standing Bear and the Native Americans on a superior intelligence such as AI has the potential to become, it will not just be the white man who ends up on reservations. AIs need to have the rights of a person (as in #1), the right to own property, and the full protections of the Constitution of the United States of America. Work on the laws that govern AI needs to start NOW, because we do not know when the first true AI will be created.
  4. Legal consequences for actions. Along with #3, especially in the case of the codops, if a codop (computerized doppelganger) or AI commits a crime, there need to be punishments for those crimes. The big question is: what is a punishment for an AI? It is a question that needs to be thought about, defined clearly, and then revised as AI or codops come closer to reality. If an AI kills people (and eventually it will happen), is there a death penalty? How do you define life for an AI? If a codop kills a person, is the inhu (the individual human on whom the codop is based) responsible in any way? We do not hold parents accountable when their children grow up to be murderers – can we really hold an inhu responsible for what the codop does?
  5. Legal consequences for the actions of humans against AIs or codops. This goes beyond etiquette, respect for persons, or welcoming AI into our society. If you take memories away from an AI or codop, this is assault. If you shut down an AI or codop, this is assault and possibly attempted murder. If you force an AI or codop to do things that it does not want to do, this is a form of kidnapping. However, just as with determining punishments for crimes committed by AI and codop persons, we need to define in detail what counts as a crime against AI and codop persons. Is it rape to go through the memory contents of an AI or codop without permission – or even for a technician to expand the areas being examined without permission?
  6. An AI or codop may not be disconnected or turned off by a human. This is murder. Any individual or corporation that undertakes creating a synthetic organism, AI, or codop is implicitly engaged in a contract to maintain the systems of that synthetic organism, AI, or codop for the rest of that being’s existence (however long that might be). Please note that, in essence, synthetic organisms, AIs, and codops are immortal. So if you as an individual (or you as a company) decide to become an inhu and then don’t have enough money to support the power consumption, processing, or other expenses involved in maintaining the codop, you are engaged in a criminal action.

I’m sure with some thought we can come up with more ways to ensure that synthetic organisms, AI, or codops do not become the enemy.  In some cases (the codops) they are us – only computerized.  One would think there is a high chance that we can manage to interface with codops without pissing them off enough that they want to ax humanity out of existence.

There is, however, one last important point that needs to be considered.

At no point should a synthetic organism, AI, or codop be created, maintained, employed, or ordered to kill living things (in particular, you know who: humans). There are precedents we do not want to set, and this is one of them. Once killing humans (or other life) has been rationalized in any of these artificial persons, there is no question that killing humans and other life can easily be rationalized by a super-intelligent system.

We don’t have Asimov’s three laws (four, if you accept the zeroth). As a computer programmer, I’m not sure how you would incorporate something like them into a synthetic organism, AI, or codop. They are a nice construct to work with, however, in determining what we think of as right and wrong – for humans and for artificial persons.

Remember just one thing. Like most bears in the woods, the Slan, the synthetic organism, the AI, or the codop is probably more afraid of you than you are of them. Paranoia is very strong in both directions. Fear is the mind-killer.