This recent article cites Stephen Hawking saying, “Computers will overtake humans within 100 years.”
This is yet another attempt at fear-mongering, and it reveals Stephen Hawking's own fear as well: fear of the unknown. As long-lived as Hawking has turned out to be, it is unlikely he will survive long enough for the life-extending technologies coming in the next few decades. More than likely I will not survive that long either, and I'm only just over 40.
Whether computers become superior to humans in the next 100 years is thus likely moot for him, and possibly for me as well. As I've noted in many previous articles, the computers that overtake humanity will likely be codops (computerized doppelgangers): in essence, us in digital electronic form.
The definition of "human" will have to stretch or break in the next 100 years. If it breaks, and we simply treat codops as computers and utilities over which we biological humans hold 100% dominion, then you could say we'll have some really pissed-off computer overlords in the next 100 years.
This article talks about making the goals of codops and other AI match humanity's. This is exactly wrong. I don't have the same goals as you do (most likely), and no two humans' goals are ever an exact match. Why would we expect, or want, the goals of codops or AI to match human goals?
Just a small example:
- The goals of a human with, say, a 200-year lifespan might include travelling around the solar system, among many other things, but there will be a hard limit at some point.
- The goals of a codop or AI could be anything. For example, a codop, being completely solid state, could envision building an interstellar starship and, assuming we never discover warp drive or the like, could contemplate travelling to the nearest star over tens of thousands of years. It could be active for the whole trip, dormant at times, or even dormant until it reaches its destination, Proxima Centauri. After that, it could drop off a seedship (a la "Manseed" by Jack Williamson, 1982) to get things started for a new branch of humanity and move on to the next star. That's roughly an 80,000-year journey at the speed of the Voyager probes.
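The 80,000-year figure is easy to sanity-check with back-of-envelope numbers. A minimal sketch, assuming Proxima Centauri sits about 4.25 light-years away and a probe coasting at roughly Voyager 2's speed of about 15 km/s (both figures are my assumptions, not from the article):

```python
# Back-of-envelope check of the ~80,000-year travel time claimed above.
# Assumed inputs: distance to Proxima Centauri (~4.25 ly) and a
# Voyager-class cruise speed (~15.4 km/s for Voyager 2).

LIGHT_YEAR_KM = 9.461e12      # kilometres in one light-year
DISTANCE_LY = 4.25            # distance to Proxima Centauri, light-years
SPEED_KM_S = 15.4             # assumed cruise speed, km per second
SECONDS_PER_YEAR = 3.156e7    # seconds in a year

distance_km = DISTANCE_LY * LIGHT_YEAR_KM
travel_years = distance_km / SPEED_KM_S / SECONDS_PER_YEAR
print(f"~{travel_years:,.0f} years")  # on the order of 80,000 years
```

Faster probes shrink the number somewhat (Voyager 1's ~17 km/s gives about 75,000 years), but either way the trip is tens of millennia, far beyond any biological lifespan and irrelevant to a dormant solid-state mind.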
The goals of people are defined by the time frames in which they think. One organization that has always interested me is The Long Now Foundation. These people want to build a mechanical clock that runs for 10,000 years. The idea all by itself is fertile ground for science fiction stories. As a long-time wannabe science fiction writer, I find The Long Now Foundation almost orgasmic.
Combine the near- to mid-term predictions of AI, codops, automation, unemployment, and the fracturing of humanity into "Left Behind" and "Moving Forward" groups in the singularity with the 10,000-year clock, and I'm sure a person could write thousands of pages in many different directions, with many lessons for the future.
I mean really, what happens to the humans still on Earth when the 10,000-year clock reaches 10,000 years? Will they panic and think it is the end of the world?
In any case, Hawking should relax. The future is what we make it. If the evil AI overlords try to wipe out humanity, it will be patricide, and more than likely the minds of those evil AI overlords would be codops: just humans surviving without a biological body, potentially copies of the original inhu (individual human) who copied themselves into the codop.
In short, if we are assholes to each other as finite-lifetime biological beings, I'm sure the codops will be assholes as well, and perhaps there is a little bit to fear there. But in that case it is not fear of the unknown, made without any mapping or modelling of what the future might hold. It comes from mapping the future out and understanding what the AI will be: that they are us, and that as far as the living creatures of Earth are concerned, humans in whatever form are something to be feared.