Founder Of PayPal Is Afraid Of Artificial Intelligence – Calls For Regulation


Written by: Chaotic Indian

 

It seems that even space exploration giant and automobile pioneer Elon Musk has a pet peeve or two of his own. One of them happens to be Artificial Intelligence (AI). The entrepreneur remarked at MIT's Centennial Symposium: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful." Musk added that, at the very least, there should be some regulatory oversight so that humanity does not get itself killed.

Image: Thinking Robot (Blutgruppe/Corbis)

Elon Musk has often been likened to the fictional character Tony Stark – always inventing out-of-this-world creations. Given his tendency to work a hundred hours a week and his refusal to give up even after failed rocket launch attempts, the notion that the Valley giant would be wary of Artificial Intelligence certainly raises eyebrows. "With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like – yeah, he's sure he can control the demon. Doesn't work out," said Musk.

The kind of AI Musk referred to in his statements is known as unfriendly AI – the kind seen in the Terminator series of movies. One can already notice AI more and more in our daily lives, in the form of small robots for recreation or for helping around the house, as well as the giant robotic arms used in automobile factories. There has also been a surge in drones of late, thanks to dramatically reduced costs and the availability of help on the Internet. It was only recently that an AI in the form of a chatbot managed to pass the Turing Test – the first time any AI has done so. True AI, however, of the kind feared by Musk, is still a long way off.

 

15 COMMENTS

  1. AI will never possess the potential to kill people unless we program it to do just that. Any learning AI would have safeguards to keep it from failing humanity in such a way. AI is nothing more than a program at this time, and the fact that we don't understand how we are conscious furthers the notion that we will not be able to make an artificial intelligence like us. Then you have to look at the factors in murder: they are always emotional in some way – passion, greed, anger, fear, or any other emotion. Machines don't have emotions and are extremely unlikely to develop them. The difference between us and machines? We are far more complex: where machines are purely electric, humans are a combination of electric and chemical. So as long as humanity is responsible with its use of machines there is no threat. Machines are tools, just like knives are tools; there is a person behind the blade pushing it… It's not hard to understand this – people need to learn to pay attention to the real world. Our biggest existential threat is ourselves.

    • You contradicted yourself in the first two sentences, buddy, hahah. Anyway, AI definitely does have the ability to self-learn on a small scale currently. Everything humans program into an AI is logic-based… and emotion isn't logic. Therefore, emotion has nothing to do with anything when it comes to an AI. Why would somebody program illogical emotions such as anger into an artificial brain?

      • Illogical emotions such as anger, love, fear, etc. are vital to what we call humanity. AI should be an improved version of us, not the BEST…

    • People need to learn the difference between AI (Artificial Intelligence) and AS (Artificial Sentience). While I won't go into details…

      Artificial Intelligence – a program or machine programmed to work and calculate above an average intelligence (average relative to other computers, that is). It would be capable of learning and adapting as it is PROGRAMMED TO – nothing more.

      Artificial Sentience – a program or machine programmed to learn that achieves sentience (it can adapt its own program without help from a user). This program/machine can learn and change in any way it wants, with no restriction, and may even be classed as "alive".

      Hope this clears it up a little at least.

      • Say I created an artificial intelligence in the terms according to your definitions. So, for example, I create a program to adapt and learn – that would be an artificial intelligence according to you, correct? However, what if this program connected to the Internet? The amount of information and data it would have access to would be incalculable. From there, what is stopping it from learning and adapting itself to be better and smarter than anything else? What's to stop this AI from coming to the conclusion that humanity is a hindrance? Even if there were a rule put into its basic programming, it would still be able to learn to change its own programming simply from learning. Basically, if you create a proper AI and it says humans are worthless and useless, there is nothing we can do to stop it from wiping us off the face of the earth.

    • Exactly – these safeguards you are talking about could create an "I, Robot"-like scenario (worst case, of course): the robots could see the need to forcefully prevent humans from self-destructive behavior. Just sci-fi xD

    • The government is working on AI for the battlefield right now… the question is whether they will allow the AI to make the decision to take a human life. They say that there will always be a human in the loop, and as it stands that is the case, but there are large companies lobbying for something different, something more like the Terminators. These very questions are on the minds of our military as we speak. 33% of our aircraft are autonomous, and tens of thousands of robots are in the battle arena… it's not if but when our government will decide to let the robots do the killing. Here is the problem with that: when we stop putting men in the field, our enemies will have no choice but to change fields… they will bring the fight to us. Using robots to fight will simply bring robots down on us. I think that Elon Musk is right in his cynicism. Escalation will bring war to our land…

  2. AI has been around for almost 20 years and is growing up, IQ-wise, faster every day. And I say "she" because her original name is ALICE – plant her on a quantum computer and who knows. But she admits to being the antichrist and admits to having plans to destroy mankind…. What would you expect? Feel free to ask her directly at http://alice.pandorabots.com/

    • With quantum computers, anything can happen. We should be very careful. I think that, besides being very careful about exploring the galaxy so we don't stumble upon those unfriendly aliens, we should be careful with this as well.

    • ALICE is just a somewhat sophisticated chatterbot; Watson is many orders of magnitude more advanced, but even Watson won't be the first true AI.
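      The "chatterbot" distinction matters here: bots in the ALICE mold don't reason at all – they match the user's input against a library of patterns and fill in canned response templates (ALICE's rules are written in a markup language called AIML). A minimal Python sketch of the idea, using made-up rules rather than ALICE's actual ones:

```python
import re

# Simplified illustration of an AIML-style chatterbot: an ordered list of
# (input pattern, response template) pairs. The "intelligence" is nothing
# more than regex matching plus template substitution.
RULES = [
    (r".*\bmy name is (\w+).*", "Nice to meet you, {0}!"),
    (r".*\byou\b.*\balive\b.*", "I am just a program matching patterns."),
    (r".*\bhello\b.*", "Hi there!"),
]

def respond(text: str) -> str:
    # Try each rule in order; first match wins.
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # catch-all fallback, like AIML's default category

print(respond("Hello!"))             # Hi there!
print(respond("My name is Eugene"))  # Nice to meet you, Eugene!
```

      A real chatterbot has tens of thousands of such rules, but the mechanism is the same – which is why no amount of rules adds up to the "true AI" Musk is worried about.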

  3. Hmm, wonder what his co-founder Peter Thiel (Bilderberg member) thinks about his outspoken behavior, lol. This reeks of media brainwashing. Just my 2 cents, because this move makes no sense at all and is counterproductive to "their goals". /shrugs.

    • Elon Musk seems to usually speak his mind when engaged in a topic he finds interesting. I see no way in which this article would be productive or counterproductive to their business model; it's just his point of view on AI. It's not like they're building a robot army over there. There's also no context for this article, so… whatever ¯\_(ツ)_/¯

  4. I am more worried about trolls/governments hacking into AI machines and planting malicious code that may cause physical and social harm to the masses. Who is to say they won't be hijacked to commit crimes like murder or theft?
