“It’s vector analysis. Get the trajectory right, and you land on the moon. Get it wrong, and you hit the hillside.”
AGI is heading for the hillside.
Many experts are exaggerating artificial intelligence’s power and peril. Much public commentary, from politicians to prominent technologists, claims that AI is close to surpassing human intelligence and that the great risk is that it will supplant human beings. This speculation even comes from Geoffrey Hinton, a Nobel prize-winning AI pioneer. The result is continued frenzy and alarm over AI’s supposed existential threats.
Existential Threats and Nonsense
AI models are useful, but as Yann LeCun, Meta’s chief AI scientist, says, they are far from rivaling even a house cat’s intelligence, let alone a human’s. The talk that AI will become so powerful that it poses an existential threat to human beings is nonsense.
AI is not superhuman and is not a looming danger to humanity. A superhuman artificial intelligence that humans could wield maliciously, or that could develop malicious intent of its own, is unrealistic.
AI is a powerful tool that is becoming enormously important across the economy. It is integral to almost every technological development, and its impact will continue to permeate business, the economy, and our personal lives, offering promising solutions and advances.
AI Has No Intent
Today’s artificial intelligence is not intelligent in any meaningful sense. Recent advances are being extrapolated to absurd levels, producing far-fetched predictions. Tens of billions of dollars are being poured into large language model-based AI, and the companies behind it claim to be creating “artificial general intelligence” (AGI) that will broadly exceed human-level intelligence; some of AI’s leading proponents predict this will happen within just a few years. These systems, however, fall far short of that, and they are not on the trajectory their proponents claim.
AGI would require a mental model of the physical world, persistent memory, the ability to reason, and the capability to plan. None of these exist in today’s systems, and regardless of the publicity and alarmist pronouncements, there is no near-term prospect of them. A yawning gap, not easily crossed, separates today’s AI from AGI’s goals.
The Right Approach to AI
Large language models have inherent limitations that cannot be overcome simply by adding more data and processing power. AGI may be a worthy goal, but AI systems with human-level characteristics, including common sense, that could serve as humanlike assistants are decades away. Today’s approach won’t get us there.
Generative AI models, powered by LLMs, are trained on enormous amounts of data to mimic human expression. The logic behind the massive investment in AI is to assemble ever-larger pools of specialized chips and data and combine them to train new models. But while these models are becoming more powerful, pouring in more chips and data will not make them fundamentally more capable. They are not on a trajectory to match or exceed human intelligence.
It will fail.
These AI systems lack an effective design. Their enormous scale is impressive but misleading. No matter how many processors and data centers are built, current AI models will not develop artificial general intelligence, because the fundamental design underlying the LLM approach does not scale to human-level intelligence.
A Different Path
Learning is not done through text. It is done through interaction with the three-dimensional physical world and by building a model of that world from visual information. Large language models and “AI agents” might someday play a small role in AI systems, but those systems must be built on a different foundation of techniques and algorithms.
Inadequate Models
Today’s models predict the next word in a text. They are very good at this, good enough to fool us into thinking there is reasoning and common sense behind it. But their enormous memory capacity and fluent word choice are misleading. They are simply regurgitating information they’ve been trained on.
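To make that mechanism concrete, here is a minimal sketch of next-word prediction: a toy bigram model in Python whose probability table is entirely made up for illustration. Real LLMs condition on long contexts with billions of parameters, but the underlying objective, guessing a plausible next token, is the same.

```python
import random

# Toy bigram "language model": for each word, the possible next words and
# their probabilities, as if counted from training text. (Hypothetical
# miniature table for illustration only.)
NEXT_WORD = {
    "the": [("cat", 0.5), ("moon", 0.3), ("hillside", 0.2)],
    "cat": [("sat", 0.6), ("slept", 0.4)],
    "sat": [("on", 0.9), ("quietly", 0.1)],
    "on":  [("the", 1.0)],
}

def sample_next(word: str) -> str:
    """Pick a next word in proportion to its learned probability."""
    candidates, weights = zip(*NEXT_WORD[word])
    return random.choices(candidates, weights=weights)[0]

def generate(start: str, max_words: int = 8) -> str:
    """Chain next-word guesses into a fluent-looking sentence."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        words.append(sample_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the moon"
```

The output can look like language, but nothing in the table encodes what a cat or a moon is; the model only knows which words tend to follow which.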
The Chinese Room
A familiar example is someone in a closed room with no knowledge of Mandarin, equipped only with books of rules that map Mandarin symbols to other Mandarin symbols: inputs to outputs. People who do speak Mandarin stand outside the room, sliding in strings of symbols as inputs. The person in the room looks up the symbols and hands back the output the books prescribe, and that output is coherent to any Mandarin speaker. Yet it’s clear that the person inside the room has no knowledge or understanding of Mandarin and cannot communicate in the language. They are simply regurgitating information the books contain. This is what large language models do.
LLMs regurgitate information but have no understanding.
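The room itself can be sketched as a few lines of code: a lookup table from input symbols to prescribed outputs. The entries below are made-up illustrative exchanges, not a claim about how any real system is implemented, but they show how fluent replies can come from pure lookup.

```python
# The Chinese Room as a lookup table. The two entries are hypothetical
# exchanges for illustration; the person (or program) applying them
# understands none of the symbols involved.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很好。",  # "Nice weather today." -> "Yes, very nice."
}

def room(input_symbols: str) -> str:
    """Hand back whatever output the rulebook prescribes; understand nothing."""
    return RULEBOOK.get(input_symbols, "对不起。")  # fallback: "Sorry."

print(room("你好吗？"))  # 我很好，谢谢。
```

Scale the rulebook up by billions of entries and learn it statistically instead of writing it by hand, and the principle does not change: fluency without understanding.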
The false narrative is that any entity, whether a person or a computer, that can express and manipulate language must be intelligent. That’s not true. Language can be manipulated without fundamental understanding or knowledge; manipulating it is not, in itself, intelligence. Current large language models demonstrate this. New models built on three-dimensional visual data and real-world interaction are required, and they are not coming anytime soon.
AGI is not in our near-term future.