Nicholas Mitsakos:

The Robot Armies are not Coming 

Artificial General Intelligence (AGI) has become the latest hot topic in AI. It is defined as the intelligence of a machine that can understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research, and it receives serious intellectual debate from the likes of Kai-Fu Lee in his book, “AI Superpowers.” Lee argues that AGI represents unprecedented developments in AI, and that dramatic changes will therefore happen much sooner than many of us expected.

I disagree with this assessment. There are more limitations and barriers than such projections allow, rooted in the availability of data, the cost of processing power, and the general constraint that machine learning programs require narrowly defined problems. The breadth of human intellect and reasoning cannot be so narrowly defined, so it is difficult to draw a straight line from current developments and achievements (impressive and notable, for sure, such as DeepMind and other well-documented accomplishments) to a future dominated and controlled by AGI. Even though AGI is a common topic and is pursued vigorously by some of the brightest minds working today, many of the projections still belong more to science fiction and blue-sky futures studies.

Flexible and Reasonable?

One of the big questions in artificial intelligence is to what extent machines can become intelligent and mirror the capabilities of the human brain. Some debates begin with the Turing test, proposed by Alan Turing in 1950, which simply asks whether a computer can fool a human into thinking they are communicating with another human when, in fact, they are communicating with a machine. Yet when asked to reason or think flexibly, machines fail, regardless of the depth of data and the strength of processing power available to them.

Some companies, for instance DeepMind and OpenAI, state that their objective is to develop AGI and match human intelligence. That may be a worthy objective, but we are still very far from it. More importantly, I believe the concept of AGI is not really that meaningful or interesting.

Intelligence isn’t Intelligence

It is assumed that AGI is human intelligence. But treating AGI as a surrogate for human intelligence misses an important point: human intelligence is not general. Individuals possess specific knowledge with varying degrees of depth across a broad range of topics, but that depth is hardly uniform from person to person, and no common baseline of knowledge can be assumed among humans. Machines can never replicate this variety of depth, experience, and perspective. General intelligence is not general.

Another assumption is that AGI can produce a singularity; that is, AGI will generate intelligence that is ever-improving, making itself better cycle after cycle: developing more processing power, enhancing its capabilities, generating still more intelligence, and so on. But this is the stuff of myth, because no such capability exists in the real world. Humans cannot compound their intelligence endlessly; there are natural physical limits to what a human being can do. Likewise, a network of processors does not have limitless potential for improvement.

Deep learning and AI have significant limitations:

  1. We are very far from human intelligence. AI cannot reason or deal effectively with truly dynamic situations – in other words, the real world.
  2. AI also propagates human bias. It builds on the data it is fed, and far too often that data is biased. That data ultimately reflects the choices of imperfect human beings, and the bias is then amplified throughout an AI system.
  3. AI, Deep Learning, and, ultimately, AGI are not easy to explain. These systems do not have common sense. They perform pattern matching and do not represent robust semantic understanding.

To be fair, we are making progress in addressing some of these limitations. The field is still moving quickly, and some of the world’s top researchers, backed by nearly unlimited budgets at the major technology companies, are working feverishly on these problems and making substantial improvements. There are also specific and powerful applications. To name a few, Deep Learning can be applied to fiendishly difficult mathematical problems, potentially unlocking insights in quantum physics, string theory, and our general understanding of the universe.

It can also advance our understanding of challenging biological problems, such as how chains of amino acids fold to form proteins and how those proteins signal biological responses in the human body. This knowledge can lead to dramatic improvements in quality of life and in our understanding of how the world works. These are laudable goals, and AI and Deep Learning are bringing us closer to achieving them.

Science is not Random

AI is ultimately a system run by computers, so its results should be subject to the scientific method: we should be able to “see, test, and verify.” In other words, any result should be reproducible. Currently, many results are not. This may be one of AI’s most substantial limitations, because the field’s future depends on its results being reproducible. Different outcomes from the same data and the same code are unacceptable for any system that claims to be scientific.
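
As a concrete illustration of what reproducibility means in practice, here is a minimal sketch, assuming a Python/NumPy workflow, of how fixing random seeds turns an otherwise non-deterministic “training run” into one that can be verified by rerunning it. The function and numbers are toy stand-ins, not anyone’s actual experiment; real deep learning frameworks add further sources of non-determinism (GPU kernels, data loading order) that also have to be pinned down.

```python
import numpy as np

def tiny_training_run(seed=None):
    """Toy stand-in for a training run: random init plus noisy updates."""
    rng = np.random.default_rng(seed)         # a seeded generator yields a fixed stream
    weights = rng.normal(size=5)               # "random initialization"
    for _ in range(100):
        weights -= 0.01 * rng.normal(size=5)   # "noisy gradient updates"
    return weights

# Unseeded runs generally differ, so the result cannot be checked by rerunning it.
print(np.allclose(tiny_training_run(), tiny_training_run()))              # almost surely False

# Seeded runs are identical, so anyone can "see, test, and verify" the outcome.
print(np.allclose(tiny_training_run(seed=0), tiny_training_run(seed=0)))  # True
```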

We’ve got no Power, Captain

OpenAI has noted that the computing power used in the largest AI training runs is doubling roughly every three and a half months. Deep Learning requires substantial scale: better results and the ability to solve a broader range of problems require massive computing power, along with access to and sorting of ever more data, which demands still more compute. Scaling is what lets Deep Learning work more effectively and tackle broader tasks, so scaling is essential, but the rate of progress is clearly not sustainable. The cost of top experiments is going up roughly 10-fold each year (source: Facebook). On that trajectory, an experiment costing several million dollars today scales to several hundred million dollars within a couple of years. That is simply not affordable, and therefore not possible, and it places a natural limit on AI progress.
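
The arithmetic behind that claim is easy to check. The short sketch below uses only the growth rates quoted above (compute doubling every 3.5 months, cost rising 10-fold per year) and a placeholder starting cost of $3 million, which is illustrative rather than a figure from the cited sources.

```python
# Illustrative arithmetic only; the $3M starting cost is a placeholder, not sourced data.
doubling_period_months = 3.5
compute_growth_per_year = 2 ** (12 / doubling_period_months)   # ~10.8x per year
print(f"Compute growth per year: {compute_growth_per_year:.1f}x")

cost_growth_per_year = 10            # "10-fold each year"
cost = 3e6                           # hypothetical $3M experiment today
for year in range(1, 4):
    cost *= cost_growth_per_year
    print(f"After {year} year(s): ${cost:,.0f}")
# After two years, a several-million-dollar experiment already costs several hundred million.
```

Note that a doubling every 3.5 months compounds to roughly an 11x increase per year, so the two quoted growth rates are consistent with each other.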

The Wall

This means that at some point we are going to hit a wall, and in many ways we already have. Not every area has reached the limits of scaling, but in most places we are getting to a point where we need to think in terms of optimization and cost-benefit, and look hard at how to get the most out of the computing power we already have. This is the real AI world we are going into.
