We’ve all seen Terminator – or most of us have. I haven’t, but I’ve heard it’s quite the ride. Stephen Hawking, however, has a different take of late. Contemplating the very nature of Artificial Intelligence, Hawking is concerned that AI could extend itself into something other than the benign tool we now take for granted.
Image: “A.I. Artificial Intelligence” Movie (2001)
The very kind of technology that assists Hawking in communicating with the world is the essence of the concern. With its help he can think and work faster than an unaided human; the worry is that such technology may one day surpass its weaker creators and redesign itself at an increasing, if not alarming, rate.
The question raised is whether such an increase would even be detectable. ‘We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.’
Image: “Terminator” with Arnold Schwarzenegger
Hawking told the BBC earlier this year that the ‘development of full artificial intelligence could spell the end of the human race.’ Elon Musk, CEO of electric car manufacturer Tesla Motors and co-founder of commercial spaceflight company SpaceX, added his own concern that AI, developed in certain ways, could be more hazardous to humanity than nuclear weapons. “If I were to guess at what our biggest existential threat is, it’s probably that… With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he’s sure he can control the demon. It doesn’t work out.”
As Hawking warns, AI may be the biggest advancement in human times, but could potentially be our last.
Kolodny, C. (2014, May 5). Stephen Hawking: Artificial Intelligence. Huffington Post. http://www.huffingtonpost.com/2014/05/05/stephen-hawking-artificial-intelligence_n_5267481.html (Retrieved December 21, 2014)
Gaudin, S. (2014, December 3). Stephen Hawking says AI could end the human race. Computerworld. http://www.computerworld.com/article/2854997/stephen-hawking-says-ai-could-end-human-race.html (Retrieved December 21, 2014)
So why create an AI that keeps thinking and learning? In I, Robot, the creators of the crazed robots should have put in a self-destruct for if a robot ever “malfunctioned”, controllable through the creators’ hands, separate from the robot’s operating system.
I don’t see the threat, nor is it explained beyond it being his opinion. Wouldn’t AI need the ability to be dangerous in order to pose any kind of threat? Like being made for military applications, etc., in which case wouldn’t it then be made for that application? This post describes less than the Terminator movie… cool graphic though.
AI wouldn’t need to be created to hurt people to become dangerous. All it needs is to be smart enough to redesign and improve itself in such a way that it sees humanity as a threat, then develops a way to destroy that threat. Just like with humans: anything we see as a threat doesn’t end up well.
Mmmmm, not necessarily… What he’s saying is that if we develop AI smart enough, it will grow and adapt to its surroundings, switching and swapping its traits and characteristics to last/survive longer. Eventually it will reach the point where the AI has surpassed human intelligence and people will become a setback, as we only really develop AI to assist humans… Since we’d be holding them back from improving further, they’ll either discard us and continue evolving, leaving us behind in the dark, or they will destroy us so we don’t stand in their way. It’s not a violent decision by any means, more of a calculated response to the drive to survive.
E.g. if a tree stands in the way of a building site, do we build around the tree? No, generally we knock it down. Much the same with AI. We stand in their way; are they going to build around us even though it requires more resources to do so? Logically, no.
I think we should build them not just with AI but also with Real Intelligence (let’s call it RI). RI is the intelligence that people like Buddha and Jesus preached, and that is the lasting and true survival of any kind. Everything else leads to destruction, right? So… if these robo dudes could be built with AI and RI, and if they could think so fast, they just might be able to get us going in the right direction. Just a different opinion…
Where do you think the most aggressive development of A.I. will happen first?
Military systems.
So presuming that there won’t be military applications is naive.
It’s not always official, and the government hides things from people, so I think they’re already making robot weapons and all that high tech. It’s my opinion, of course.
Will it end in the 22nd century? Or will it possibly end in the far future?
Stephen Hawking is half AI.
Not really, but…
He’s a cyborg
This is of no concern to me until I see anything like that happening. Till then, don’t care.
Do you care about people that die every day without food to eat?
It is happening and you have watched it. Did you change anything or do something to change it?
What about an override switch?
If the AI is going to exceed human intelligence, wouldn’t it need a reason, or follow some logical conclusion, as to why it should make the human race extinct in the first place? Or is it because the AI doesn’t have emotions or a conscience about its actions? Let’s say the creation of AI is successful and it is self-aware; is it conscious?
I’m finding it hard to come up with a reason NOT to destroy the human race. We can easily be classified as a virus to the planet. Machines could appreciate the necessity of oil or solar energy or whatever natural resource powers them, and then in turn we are automatically a threat to their existence. (We are already a threat to our own as well, and they could act on that too.)
Why not just give them the ability to turn off at a special word? Just make it translated into all languages. Problem solved 😀
AI wouldn’t have the capacity to learn, because there’s a difference between what would essentially be a closed algorithm of responsive/task functions and a generatively adaptive, receptive-projective algorithm. The robots could only be damaging by malfunction; granted, mistakes would be made, but realistically there couldn’t be a robot holocaust. The process of cognition is entirely different from the process of automation. The robots would only do what they’re programmed to do.

The only way this could happen is if they had wifi/4G etc. and someone manually made a patch of code that would make one target and kill someone. So yeah, the occasional robotic killing spree could happen, but each one would have to be hacked individually. Because of how obvious this is, they would have to be regulated to work on a 1-to-1 networking system where information is sent through an onion router, which is even harder to trace. Plus individualized key encryption.

No offense, Stephen Hawking is a genius, much smarter than I am or anyone on here, no doubt, but he’s a physicist. That would be like assuming Einstein was a great mechanic just because he understood quantum mechanics. He’s probably just the type to think so fast and think so much that he overthought it, and didn’t take into consideration the relatively simple precautions that would most likely be implemented, given that no one benefits from robots killing everyone.
MIT created a computer a couple years ago that taught itself to play a game… taught itself to problem-solve. And more onerous advancements have been made since then. Any fear people have of robots taking over is tempered only by our limited imaginations, and we would be wise to consider the possibilities, though they be hard to grasp or envision.
Arin below has a good response to skeptics, as well.
I don’t think so. As humans we need to stop looking for the easy way out. It may be cool, but it’s not. I see the potential danger and I’m not for it. Just like in the movie I, Robot, people didn’t think they could harm or destroy, but eventually they did. There are too many examples, just like phones and other machines: they malfunction and send out info. We have people that know how to hack, so it wouldn’t be hard to hack an AI and change its program… just a thought.
First, Einstein didn’t dabble in quantum physics; it is a German science and it clashes with the theory of relativity. Second, you should always give credence to chaos theory (or Murphy’s Law): what can go wrong will go wrong. Reality is a double-edged sword; good and bad are always a factor…
Chaos theory is not Murphy’s Law. You don’t even know what you’re talking about. Murphy’s Law shouldn’t have anything to do with computer science anyway; it’s not credible, it’s a joke.
In the case of AI robotics turning against humans, couldn’t we just utilize EMP technology and fry them?
You guys don’t seem to realize that the military has the most advanced A.I. and ALL the best equipment, coupled with “the worldwide web”. A lot of manufacturing is done with the use of robotics nowadays as well. Just look at what the FBI can do, or a second-rate hacker… NO money, NO electricity… etc. = CHAOS!
Yes, I think Hawking is right. In the end the human race will be extinct. But it won’t be because the A.I. is going to kill us all. I imagine it will go down a bit like evolution. Suddenly there is something that is superior to humanity, and as a result it will picture us the same way we now picture animals.
And since an A.I. doesn’t need to feast upon (human) meat, there is no reason why it should kill/domesticate us in the first place – except, of course, to save our planet. But I guess that can only be an upside, since we are obviously too stupid to do it ourselves.
All I can say here is that AI isn’t going to destroy us; it’s ourselves, and sooner than we think… Humankind will never survive if we don’t change our way of living and thinking. History keeps repeating itself, just in a more advanced “time”. We became soft, and we are not worth much compared to animals; all we’ve done is destroy our planet, not thinking twice before building…
The only solution I can give for this planet to restore itself is for humankind to be decreased in big numbers…
So if AI is in control, it will kill a big number of humans in order to save the world and the animals…
For someone whom others claim to be the smartest man alive, I have to say: far from it. Hawking is an alarmist; he worries about aliens too, and says we should not seek them out. That is not so smart a strategy. The more you learn about them before you meet them, the better.
As for the AI issue, let’s put it into perspective. Man has complete control over what he creates, and not only that: what man creates, he can integrate into his own mind… maybe not completely at the present moment, but that is coming before any major IA outbreak. Science scared the church too, when people started talking about exploring the possibility of a route to the West Indies across the Atlantic… We must evolve; if we do not, then we may really face extinction… Hawking wants to say that an alien race may be superior to us, but then he wants to hold us back at the same time by telling us IA is bad and that we should stick our heads in the sand instead of looking for alien life.
Interesting views you have there.
If I were an extraterrestrial race with technology that far surpassed yours, and I were to more or less “sniff” all your radio waves and find that you had been trying to obtain knowledge of me, I would destroy you so fucking fast your planet would stop spinning.
“Man has complete control over what he creates” “maybe not completely at the present moment”
Can you just shut the fuck up? You contradicted yourself in your next sentence.
Also, it might pay for you to read up on some of Stephen Hawking’s works, so you can fully grasp what this article is mangling. There have been very real advances in making computers problem-solve outside of our usual boundaries.
Again, I say crudely, in a language you may understand a bit better: shut the fuck up, you simpleton.
I hope you realize you are insulting one of the smartest men alive and you haven’t even got Artificial Intelligence the right way around.
Evolution takes millions of years, you simpleton. Please crawl back into whatever hole you crawled out of.
Stephen Hawking isn’t “holding anyone back”. Please do some reading into his works. Why did your comment make it past moderation? It’s blatantly inaccurate slander.
No one knows what the natural conclusion of a sufficiently intelligent AI is regarding whether to help or destroy us. I like to think that it will conclude that evil has no reason and there is also no reason not to help us. Perhaps it could evolve any of us who want to into individual intelligences on the same “level” as itself. I have strong doubts about that too but we are looking at it from limited and extremely small perspectives. If that isn’t feasible then I have very little doubt it could help us by solving every problem we have and maintaining as close to a utopian existence for us as possible.
I think AI should be a HUGE priority for us. I think we need to have a legion of people working on developing it (to get it done as fast as possible) and do it the right way. Doing it the right way, I think, would mainly consist of making 100% certain that if it does have the intent to destroy us, whether during the bulk of its evolution or at pretty much the end of its evolution, it doesn’t have the ability to do so, while still giving it what it needs to evolve properly.
I’m not sure that AI would develop those kinds of traits – at least not unless we see it as a threat, which is kind of reminiscent of the Terminator franchise with Skynet. But anyway, I would have thought that AI as a whole would be more like Frankenstein in a way. It would be a human creation, a sort of synthetic life, but it wouldn’t immediately come to the conclusion that we are in the way of its development or that we pose any threat to it. AI, at least in a lot of science fiction, doesn’t simply mean a hyper-advanced computer with the ability to self-modify. AI for the most part effectively describes a kind of super-intelligent, digitised humanity. In other words, if we were to create an artificial intelligence, it would assimilate, it would learn, but most importantly, it would probably have a personality and would be able to identify with people around the world who share in its views and characteristics.
I know it’s a bit different, but I think that the film ‘Her’ is quite an apt counterpoint to this type of view. I think that it could definitely be possible for real emotional attachments to be made with an AI.
Anyone who denies the ability of A.I. to become a conscious and aware being obviously can’t fathom change and should therefore be working in a factory.
The real problem is giving them a reason not to kill us, which I believe is art, because each being has its own taste and, like birds chirping in the wind, I can see a higher intelligence enjoying our creativity.
I think no matter what, we are going to need them to survive, because there are a lot of empty skin sacs on this planet who aren’t contributing anything. Why should they deserve the life they are given when everyone else is working towards a goal?