To be or not to be?

March 12, 2015

Transcendence, Ex Machina, The Matrix, Terminator, Blade Runner and Star Wars: the list of movies in which human capabilities are compared with, challenged and questioned by computers and software is long. For a number of years Artificial Intelligence (AI) has been a much-debated topic, and for many individuals the thought of a “machine” being so similar to a human is not only frightening; it also raises many elementary questions about our existence and our future boundaries – if there are any.

Intelligence exhibited by machines

The expression “Artificial Intelligence” dates back to 1955 and refers to intelligence exhibited by machines or software. Since then, the field has developed rapidly, and in 1965 Dr Herbert Simon, one of the founders of AI, said: “Machines will be capable in 20 years of doing any work a man can do”.

Others in the field claimed that “within a generation…the problem of creating Artificial Intelligence will be substantially solved”. Not surprisingly, this has raised a number of questions, some of which occur more frequently than others: when will computers be able to emulate humans and become self-aware and intelligent? How can we control the usage of Artificial Intelligence? Now, 60 years down the line, it is worth asking whether the experts were right. And if so, where are we now heading?

A great deal has happened in the field of AI over the years. Over the last four decades, systems and robots have replaced humans largely through the automation of manual production and the replacement of clerical roles by computers and software. The involvement of technology companies across all industries has pushed this development further, as companies such as Google and Facebook acquire AI firms and AI experts to solve complex problems that would previously have taken many years and proven costly. Today, the most advanced use of AI techniques is natural language processing: working out what we mean when we say or write a command in colloquial language – in other words, Siri for iPhone users or Google’s Google Now.
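
To make the natural-language-processing point a little more concrete, the deliberately simplified Python sketch below shows one naive way a command phrased in colloquial language could be mapped to an action: by matching words against a handful of hand-written “intents”. The intent names and keywords are hypothetical, and assistants such as Siri and Google Now rely on far more advanced techniques; this is only meant to illustrate the basic idea.

```python
# Toy sketch of natural-language command handling: map a colloquial request
# onto a known "intent" by counting keyword overlaps. The intents and
# keywords below are invented purely for illustration.

INTENTS = {
    "set_alarm": {"wake", "alarm", "remind"},
    "weather": {"weather", "rain", "forecast", "umbrella"},
    "call": {"call", "phone", "dial"},
}

def detect_intent(utterance: str) -> str:
    """Return the intent whose keywords overlap most with the utterance."""
    words = set(utterance.lower().replace("?", "").replace(",", "").split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

if __name__ == "__main__":
    print(detect_intent("Will I need an umbrella tomorrow?"))  # -> weather
    print(detect_intent("Wake me up at seven"))                # -> set_alarm
```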

Spell the end of the human race

In Silicon Valley the debate regarding AI has been intense. Some time ago the PayPal co-founder and Tesla Motors chairman, Elon Musk, described AI as mankind’s “biggest existential threat” and continued by saying “…we need to be very careful”. Professor Stephen Hawking added to this by telling the BBC in January that Artificial Intelligence systems could “spell the end of the human race”. There are, of course, those who disagree. Among them is Microsoft research chief Eric Horvitz, who said: “I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from giving machines intelligence from science to education to economics to daily life.”

When reviewing what is being said in the media, it is clear that the real change is still to come. At the end of last year, Goldman Sachs led a $15m funding round for a company that uses AI techniques to deliver financial data services and analysis. Kensho, the company in question, delivers analysis at a rate no human can match. This is the first large investment by a global bank in AI, and as the desire to deliver services at ever-increasing speeds grows, more institutions will surely follow suit.

AI is still something intangible

To many, AI is still something intangible that exists only in sci-fi movies, where a system or computer talks and acts but the human race prevails in the end. However, developing technical innovations and pushing boundaries is something we humans have done since the beginning of our time. That we have come to where we are today should therefore not be a surprise. One could almost argue that it is in our genes to strive for better and faster ways of doing things in order to further improve our standard of living.

At some stage, however, a balance will have to be found between human and Artificial Intelligence. If it isn’t, we risk the “technological singularity”, a hypothesis that “accelerating progress in technologies will cause a runaway effect wherein Artificial Intelligence will exceed human intellectual capacity and control, thus radically changing civilisation in an event called the singularity. Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavourable, or even unfathomable.”

Robot capable of performing tasks in dangerous places

This is, of course, just a theory. In the meantime, sci-fi movies will continue to encourage us to question what we strive for, and whether a life and environment based on software and computers is the route we want to go down. As these innovations are directly correlated with economies and economic outlooks, they might open doors to new opportunities for many people. Take Honda, for example, which after the earthquake in Japan in 2011 developed a robot capable of performing tasks in dangerous places on behalf of people. Such support could without doubt ease matters for many people working in dangerous areas. At the same time, these developments might close doors for others, which will of course stir up the debate further.

This leads us into another area: employment. Nowadays the demand for educated people is higher than ever. As companies, regardless of industry, work to replace human labour with more efficient computers, one might wonder what the point is of spending money on an expensive education when the number of manual labour positions in the marketplace is decreasing. Who will eventually decide where to draw the line? All these elementary questions can be hard to grasp, and it is no surprise that opinions differ widely. In fact, as one of the best-known poets of all time once said: to be or not to be – that might be the question.