Artificial Intelligence: Robots Really Could Take Over the World

Okay, before I begin this one, I’m going to need y’all to forget about movies like The Terminator franchise (despite the pic above, lols), I, Robot, or perhaps the latest Alien releases, featuring that freaky ass robot, David. Put them to the side just for now. They are works of fiction, whereas I’m going to talk about the possibility of machines taking over the world in real life. This is a concept I’ve always been curious about, probably because I was introduced to The Terminator movies at a very young age, and they’ve always stuck with me. The second movie, Judgment Day, was always my favourite, as I have a special childhood bond with Arnold Schwarzenegger’s robot. To this day, I still cry at the end of that movie. He was a hero, man.
But I digress.
The recently deceased Stephen Hawking, like many other well-known figures in science and technology, repeatedly expressed his worries about artificial intelligence reaching a point where it can no longer be controlled by humans, even warning that AI could one day ‘spell the end of the human race.’ Obviously this seems a bit dramatic, even coming from experts, but that doesn’t remove the possibility of them being right. A couple of days ago I visited Google’s headquarters in London and had the opportunity to ask some questions about the future of AI. Until now, technology has advanced steadily forward, but according to the crew at Google, it is now advancing upwards, unlike ever before. Put simply, we are on the cusp of a changing world thanks to AI.
The AI we have achieved so far is properly known as ‘narrow’ or ‘weak’ AI, as it is designed to perform a single, narrow task (e.g. smart cars, video games and virtual assistants like Apple’s Siri). But the ultimate goal of many AI researchers is to create the more complex form of ‘strong’ AI (aka AGI, or Artificial General Intelligence), which would surpass humans in cognitive abilities. An AI that goes beyond human intellect could then design an even more intelligent AI, which could design a smarter one still, resulting in an intelligence explosion. Such superintelligence could potentially be hugely beneficial to humanity, helping us to achieve, for example, the eradication of disease and poverty. However, as many have pointed out, to avoid any negative consequences of this superintelligence, we must ensure that the AI’s objectives match our own before it becomes superintelligent.
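To see why people call it an ‘explosion’, here’s a toy sketch in Python. It’s purely illustrative (the 10% improvement rate and the numbers are made up, and nothing here models real AI): it just shows how growth compounds once each generation can build a slightly smarter successor.

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assume each generation of AI can design a successor slightly
# smarter than itself; the improvement compounds, so the curve
# starts flat and then takes off.

def intelligence_explosion(start=1.0, improvement=1.1, generations=50):
    """Return the intelligence level at each generation."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * improvement)  # successor is 10% smarter
    return levels

levels = intelligence_explosion()
print(f"Generation  0: {levels[0]:>8.1f}")
print(f"Generation 25: {levels[25]:>8.1f}")   # ~11x the starting level
print(f"Generation 50: {levels[50]:>8.1f}")   # ~117x the starting level
```

The point isn’t the specific numbers; it’s that the step from ‘slightly smarter than us’ to ‘unrecognisably smarter than us’ could happen fast, which is why the objectives need to be right before the takeoff, not after.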
Earlier I mentioned the android, David, from the more recent Alien movies. Let’s go back to him. David is arguably one of the most human-like androids you will ever see in film, and I’m not just talking about his physical appearance. David exhibits a range of emotions in the movies, such as fear, disappointment and, in his own weirdo way, love. He thinks and acts independently of humans, selfishly serving his own interests. The major uh-oh for humans is that David does not like or respect humanity, so much so that he actively tries to stop the spread of the human race, believing humans to be inferior and labelling them ‘a dying species.’ But could this really happen with AI in reality? The short answer is no. Most experts agree that AI won’t be capable of experiencing human emotions, which means there’s little reason to worry about AI becoming malevolent and turning into a real-life David, so to speak.
However, there are other, likelier ways that AI could pose a threat to us. Autonomous weapons are one application of AI, and in the wrong hands they could be programmed to do a great deal of harm. Even then, this would still require human involvement, so the danger really lies with whoever programmed the weapon rather than with the AI itself.
Another way AI could pose a threat has to do with its competence in executing tasks. Since AI doesn’t share our values, it could cause unintentional damage while single-mindedly pursuing whatever task it has been given. I’ll use self-driving cars as an example, because they’ve been a hot topic for a while now. If you tell an (intelligent) self-driving car to get you to your destination as quickly as possible, it may interpret this literally and do exactly what you asked for by speeding off, without any consideration for your well-being. This is why we need to ensure that the AI’s objectives match our own, as the sketch below tries to show.
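Here’s a tiny, hypothetical Python sketch of that idea. Everything in it is made up for illustration (the candidate plans, the minutes, the crash-risk numbers and the penalty weight): the same planner chooses a different plan depending on whether its objective encodes only what you literally said, or what you actually meant.

```python
# Hypothetical sketch of objective misalignment (all names and
# numbers invented for illustration, not real driving data).

# Candidate driving plans: (name, minutes to destination, crash risk)
plans = [
    ("floor it through town",  8, 0.30),
    ("normal driving",        12, 0.01),
    ("cautious driving",      15, 0.001),
]

# Objective 1: the literal instruction -- "as quickly as possible".
fastest = min(plans, key=lambda p: p[1])

# Objective 2: what we actually meant -- fast, but with safety in the
# objective. The penalty weight is an assumption picked for the demo.
RISK_PENALTY = 100  # minutes of "cost" per unit of crash risk
aligned = min(plans, key=lambda p: p[1] + RISK_PENALTY * p[2])

print("Literal objective picks:", fastest[0])  # floor it through town
print("Aligned objective picks:", aligned[0])  # normal driving
```

Notice that neither optimizer is ‘evil’; both do exactly what they were told. The whole problem is in what they were told to care about.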
In the worst-case scenario, something like what happens with VIKI, the AI in I, Robot, could take place: an AI tasked with the safety and security of humans could end up carrying out that task in a destructive way. This is at least a far more likely scenario than AI becoming self-aware and posing an existential threat to humanity, as happens in The Terminator movies.
So, where does all this leave us humans? Personally, I’ve always believed that, theoretically, the world could indeed be taken over by AI one day, based on this logic: the world has always been run by humans because our intelligence is superior to that of every other animal. Now, we’re moving towards creating and developing AI that can surpass human intelligence, pushing us down to second place. If intelligence is what determines which life form dominates the world, then the possibility of AI replacing us as the dominant life form becomes very real.
Since AI has the potential to become more intelligent than humans, we can’t know with any certainty what the consequences will be; we have never before created anything with the capacity to outsmart us. We need to be certain that superintelligent AI can be controlled by us before it is created. If we give up our position as the smartest beings in the world, we may unintentionally give up control of the world too.
Given humanity's propensity for mass destruction, this may not necessarily be a bad thing...