This article was written by Nicholas Mitsakos, Chairman and CEO at Arcadia Capital Group.

The Hype Machine

The history of AI shows that attempts to build human understanding into computers rarely work. Instead, most of the field’s progress has come from the combination of ever-increasing computing power and exponential growth in available data. Essentially, the ability to bring ever more brute computational force to bear on a problem, trained on ever-larger data sets, has made these systems increasingly useful. But their limitations are also thrown into sharper relief than ever. The bitter lesson is that the actual contents of human minds are tremendously, irredeemably complex… They are not what should be built into machines.

Machine learning doesn’t live up to the hype. It isn’t true artificial intelligence, like some C-3PO. It’s a sophisticated pattern-matching tool.

Life and death exist at the edges. That is, in situations that are unique and out of the ordinary: the “fat tail” events that determine success or failure, and sometimes, life or death. Humans are better able to cope with such oddities because they can use “top-down” reasoning about the way the world works to guide them in situations where “bottom-up” signals from their senses are ambiguous or incomplete. AI systems mostly lack that capacity and are, in a sense, working with only half a brain. Though they are competent in their comfort zone, even trivial changes can be problematic. In the absence of the capacity to reason and generalize, computers are imprisoned by the same data that make them work in the first place. These systems are fundamentally brittle, and they break down at the edges, where performance is essential and the consequences far more dire.

Biological brains learn from far richer data sets than machines. Artificial language models are trained solely on large quantities of text or speech. But a baby can rely on sounds, tone of voice, or tracking what its parents are looking at, as well as a rich physical environment, to help it anchor abstract concepts in the real world. This shades into an old idea in AI research called “embodied cognition”, which holds that if minds are to understand the world properly, they need to be fully embodied in it, not confined to an abstracted existence as pulses of electricity in a data center.

Biology offers other ideas, too. The current generation of AI models begins as blank slates, with no hand-crafted hints built in by their creators. But all animals are born with structure in their brains. That structure is instinct, and it drives development, judgment, and effective decision-making with an efficiency that remains beyond today’s artificial intelligence and machine learning systems.

The problem is that deep-learning approaches are fundamentally statistical, linking inputs to outputs in ways specified by their training data. That leaves them unable to cope with edge cases, the unusual circumstances described above that are rare in those training data. Many real-life applications are full of such circumstances, especially driving, where strange occurrences range from an individual suddenly appearing to odd weather conditions blurring vision. Such oddities are easily handled by human beings but can be insurmountable challenges for machine learning systems. There are many potential applications that can be effective and useful tools. They are simply much less ambitious than the current hype would indicate, but they are also far more realistic.
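The brittleness described above can be sketched with a toy example. This is not from the article and is deliberately simplistic: a minimal nearest-neighbor “pattern matcher” (the data, labels, and scenario are invented for illustration) that answers confidently inside its training distribution but gives no warning when a query lies far outside anything it has seen.

```python
def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    A pure pattern matcher: it can only echo back a label
    already present in its training data.
    """
    return min(train, key=lambda point: abs(point[0] - query))[1]

# Invented training data: scene brightness -> label.
train = [(0.1, "day"), (0.2, "day"), (0.8, "night"), (0.9, "night")]

# Interpolation inside the training distribution works fine.
print(nearest_neighbor(train, 0.15))  # "day"

# An edge case far outside the training data still gets forced into a
# known category, with no signal that the input is unlike anything seen.
print(nearest_neighbor(train, 5.0))   # "night"
```

The point of the sketch is the second call: a human would notice that an input of 5.0 is nothing like the examples, but the matcher has no mechanism for saying “I don’t know,” which is the essence of brittleness at the edges.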
