Artificial General Intelligence Is a False Promise
Artificial intelligence is disrupting software and services when applied within narrow, specific parameters. It performs useful tasks and provides meaningful information for decision-makers, but only within well-defined data sets.
AI still has significant limitations, and large language models like ChatGPT may not be the big leap toward artificial general intelligence (AGI) that many imagine. They are not the vanguard of a new era permeating every aspect of our professional, academic, and personal lives.
AGI’s usefulness is overstated, and it is not going to happen. Intelligence is not a lumbering statistical engine searching for patterns to generate a useful response; today’s AI programs are exactly that.
The predictions of AGI are superficial and dubious. True intelligence is the ability to think and express improbable but insightful ideas, as Einstein did with the theory of relativity and Newton with his laws of motion. Machine learning cannot do this.
ChatGPT is a lumbering statistical engine for pattern matching: it feeds on incomprehensibly large volumes of data and extrapolates the most probable answer.
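To make "extrapolating the most probable answer" concrete, here is a minimal, hypothetical sketch of that kind of statistical extrapolation: a toy bigram model that, given a word, returns whichever word most often followed it in its tiny training text. Real large language models are vastly larger neural networks, but the basic move, picking the continuation with the highest estimated probability, is the same.

```python
from collections import Counter, defaultdict

# Toy training text (a real model ingests terabytes of it).
corpus = "the apple falls . the apple falls when dropped . the ball falls .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_probable_next(word):
    """Return the statistically most likely continuation; nothing more."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("apple"))  # 'falls': the most frequent observed continuation
print(most_probable_next("the"))    # 'apple': seen more often after 'the' than 'ball'
```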
By contrast, the human mind is an efficient, elegant system operating on small amounts of information. Human intelligence creates explanations. It does not infer conclusions by brute force and spurious correlations.
AI models lack the components of an explanation and therefore lack true intelligence. They cannot conclude what is not the case, or what could or could not be the case. They can only conclude, spuriously, what is the case, what was the case, and what will be the case.
In other words, there is a meaningful distinction between descriptions, predictions, explanations, causal explanations, and thinking.
For example, suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct.
But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall because of gravity” (or if you prefer, the curvature of space-time). That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
Machine learning is description and prediction, not cause and effect.
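To illustrate the distinction in code, here is a hypothetical sketch (the scenario and numbers are illustrative only): the first function "predicts" that the apple falls simply because every recorded apple fell, while the second encodes the causal explanation, constant gravitational acceleration, and therefore applies to any such object and supports the counterfactual that without gravity nothing falls.

```python
import math

# Description/prediction by pattern: every observed apple fell, so predict "falls".
observations = ["apple fell", "apple fell", "apple fell"]

def pattern_prediction():
    # Says nothing about why; it merely extrapolates the recorded pattern.
    return "falls" if all("fell" in o for o in observations) else "unknown"

# Causal explanation: near Earth's surface, any released object accelerates at g.
G_EARTH = 9.8  # m/s^2 (standard approximate value)

def fall_time(height_m, gravity=G_EARTH):
    """Time for any object to fall from a given height under constant acceleration.

    Setting gravity=0 expresses the counterfactual: with no gravity,
    the object never falls.
    """
    if gravity == 0:
        return math.inf
    return math.sqrt(2 * height_m / gravity)

print(pattern_prediction())         # 'falls', but only for the pattern already seen
print(f"{fall_time(1.5):.2f} s")    # ~0.55 s, for an apple, a ball, any such object
print(fall_time(1.5, gravity=0))    # inf: 'would not have fallen but for gravity'
```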
Of course, human thinking can be wrong, but this is part of what it means to think: To be right, it must be possible to be wrong.
Intelligence consists not only of creative conjectures but also of creative criticism. True thinking is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)
But ChatGPT is unlimited in what it can “learn” (which is to say, memorize); it is incapable of distinguishing the possible from the impossible. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
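As a toy illustration of trading "merely in probabilities," the hypothetical sketch below derives a belief that the earth is flat purely from whichever statements dominate its data; nothing in the calculation marks either hypothesis as possible or impossible.

```python
# Toy illustration: "belief" tracks the data's relative frequencies, not logic.
def belief_earth_is_flat(statements):
    """Fraction of training statements asserting a flat earth."""
    flat = sum(1 for s in statements if s == "earth is flat")
    return flat / len(statements)

early_corpus = ["earth is flat"] * 7 + ["earth is round"] * 3
later_corpus = ["earth is flat"] * 1 + ["earth is round"] * 9

print(belief_earth_is_flat(early_corpus))  # 0.7: the system 'learns' a flat earth
print(belief_earth_is_flat(later_corpus))  # 0.1: the probability merely shifts
```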
For this reason, the predictions of machine learning systems will always be superficial and dubious. These programs cannot explain or learn just by digesting more and more data.
True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought to be and subjecting those principles to criticism.
To be useful, ChatGPT must be empowered to generate novel-looking output while effectively managing the morality of that output. Striking this balance is a profound struggle even for human intelligence, and it is beyond anything ChatGPT can achieve, most likely ever.
In short, ChatGPT is constitutionally unable to balance creativity with constraint. It either produces both truths and falsehoods, endorsing ethical and unethical decisions alike, or is uncommitted to any decision and indifferent to consequences.
ChatGPT is amoral, incompetent faux science. Perhaps that is what makes it popular.
The Robot Armies are not Coming
In spite of these issues, AGI has become the latest hot topic in AI. It is defined as the intelligence of a machine that can understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research, and it receives serious intellectual debate from the likes of Kai-Fu Lee in his book "AI Superpowers." Lee argues that AI represents unprecedented developments, and that dramatic changes will therefore arrive much sooner than many of us expected.
I disagree.
As discussed above, there are limitations and barriers: available data, processing power, and expense, along with the general constraint that machine learning programs require narrowly defined problems. The breadth of human intellect and reasoning cannot be so narrowly defined, so it is difficult to draw a straight line from current achievements (impressive and notable, to be sure, such as DeepMind's well-documented accomplishments) to a future dominated and controlled by AGI. Even though AGI is a common topic pursued vigorously by some of the brightest minds working today, many of the projections still belong more to science fiction and blue-sky futures studies.
Flexible and Reasonable?
One of the big questions in artificial intelligence is to what extent machines can become intelligent, and to what extent they can mirror the capabilities of the human brain. Some debates on artificial intelligence begin with the Turing test, proposed by Alan Turing in 1950, which simply asks whether a computer can fool a human into thinking they are communicating with another human when they are in fact communicating with a machine. When asked to reason or think flexibly, machines fail, regardless of the depth of data or the strength of processing power.
Some companies, DeepMind and OpenAI for instance, claim their objective is to develop AGI and match human intelligence. This may be a grand objective, but we are still extremely far from it. More importantly, I believe the concept of AGI is not really that meaningful or interesting.
Intelligence isn’t Intelligence
It is assumed that AGI is human intelligence. But treating AGI as a surrogate for human intelligence misses an important point: human intelligence is not general. Individuals possess specific knowledge, with varying degrees of depth, across a broad range of topics, but that depth is rarely comparable from one person to the next, and no common body of knowledge can be assumed across humans. Machines can never replicate this variety of depth, range of experience, and perspective. General intelligence is not general.
Another assumption is that AGI can produce a singularity: AGI itself will produce intelligence that is ever-improving, making itself better cycle after cycle, developing ever greater processing power, enhancing its capabilities, generating more intelligence, and so on.
But this is the stuff of myth, because there is no such capability in the real world. Humans cannot ratchet up their own intelligence endlessly; there are natural physical limits to what a human being can do. Nor does a network of processing power have limitless potential for improvement.
Deep learning and AI have many limitations.
- We are very far from human intelligence. AI cannot reason or deal effectively with truly dynamic situations; in other words, the real world.
- It also propagates human bias. It builds on the data it is fed, and far too often that data is biased. The source data is ultimately shaped by imperfect human beings, and the bias is then amplified within an AI system (see the sketch after this list).
- AI, Deep Learning, and, ultimately, AGI are not easy to explain. These systems have no common sense; they match patterns without robust semantic understanding.
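As a minimal, hypothetical illustration of the bias point above, the sketch below "trains" on invented, skewed historical decisions and simply replays the skew as policy; the groups, labels, and numbers are made up for illustration.

```python
from collections import Counter

# Invented, skewed "historical" decisions: group A was approved far more often.
training_data = ([("A", "approve")] * 90 + [("A", "reject")] * 10
                 + [("B", "approve")] * 30 + [("B", "reject")] * 70)

# "Training": for each group, memorize the majority historical decision.
learned_policy = {}
for group in sorted({g for g, _ in training_data}):
    decisions = Counter(d for g, d in training_data if g == group)
    learned_policy[group] = decisions.most_common(1)[0][0]

print(learned_policy)  # {'A': 'approve', 'B': 'reject'}: the bias in the data, replayed as policy
```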
We are making progress on some of these limitations. The field is still moving fast, and some of the world's top researchers, backed by almost unlimited budgets from the major technology companies, are working feverishly on these problems and making substantial improvements.
There are also specific and powerful applications. To name a few, Deep Learning can be applied to fiendishly difficult mathematical problems, potentially unlocking great insights in quantum physics, string theory, and our general understanding of the universe.
It can also advance our understanding of challenging biological problems, such as how chains of amino acids fold into proteins and how those proteins signal biological responses in the human body. This knowledge can lead to dramatic improvements in quality of life and in our understanding of how the world works. These are laudable goals, and AI and Deep Learning are bringing us closer to them.
Science is not Random
AI is ultimately a system run by computers, so its results should be subject to the scientific method. That is, we should be able to "see, test, and verify"; in other words, any result should be reproducible. Currently, many are not. This may be one of AI's most substantial limitations, and AI's future depends on its results being reproducible. Outcomes that vary randomly from the same data inputs are unacceptable for any system.
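A small, hypothetical sketch of the reproducibility problem: two runs of the same stochastic "training" step on the same data give different results unless every source of randomness is explicitly fixed, and large AI systems contain many more such sources (parallelism, hardware nondeterminism) than this toy example.

```python
import random

data = [1.0, 2.0, 3.0, 4.0, 5.0]

def noisy_fit(values, seed=None):
    """Stand-in for one stochastic training step: shuffling plus a random draw."""
    rng = random.Random(seed)          # unseeded: different randomness every run
    sample = values[:]
    rng.shuffle(sample)
    weight = rng.uniform(0.0, 1.0)
    return weight * sum(sample[:3])    # result depends on the shuffle and the draw

print(noisy_fit(data), noisy_fit(data))                      # two runs, two different answers
print(noisy_fit(data, seed=42) == noisy_fit(data, seed=42))  # True: fixed seed, reproducible
```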
We’ve got no Power, Captain
OpenAI has noted that the computing power used in the largest AI training runs is doubling roughly every 3.4 months. Deep Learning requires substantial scale: better results and the ability to solve a broader range of problems require massive computing power. They also require substantial data, along with access to and sorting of that data, which demands even more computing power.
Scaling Deep Learning lets it work more effectively and solve a broader set of tasks. Scaling is essential, but the rate of spending is clearly not sustainable. The cost of top experiments is going up roughly tenfold each year (source: Facebook). On this trajectory, an experiment that costs several million dollars today would soon cost several hundred million dollars. That is simply not affordable, and therefore not possible, and it places a natural limit on AI progress.
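The arithmetic behind that trajectory can be made explicit; the dollar figures in this sketch are illustrative assumptions, not reported costs.

```python
# Doubling every 3.4 months means 12 / 3.4, about 3.5 doublings per year,
# i.e. roughly a tenfold cost increase per year, matching the figure above.
doublings_per_year = 12 / 3.4
yearly_growth = 2 ** doublings_per_year
print(f"growth per year: {yearly_growth:.1f}x")   # ~11.6x

# Illustrative projection only: a hypothetical $3M experiment on this trajectory.
cost = 3_000_000
for year in (1, 2):
    cost *= yearly_growth
    print(f"after year {year}: ${cost:,.0f}")
# Reaches several hundred million dollars within about two years.
```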
The Wall
This means that at some point we are going to hit a wall; in many ways we already have. Not every area has reached the limits of scaling, but in most places we are getting to a point where we need to think in terms of optimization and cost-benefit, and to look hard at how we get the most out of the computing power we have. This is the real AI world we are heading into.