Artificial intelligence is often imagined in extremes — utopian dreams of salvation or dystopian fears of extinction. More realistically, AI should be viewed as a normal technology. AI will be transformative, like electricity or the internet. Still, it will unfold over decades, shaped by human institutions, policies, and societal adoption patterns, not by sudden leaps into autonomous superintelligence.
Because of its ubiquity, AI is seen as anything but a normal technology. Credible researchers and executives talk about superintelligence and the potential for software systems to slip out of human control, with dangers to humankind equivalent to nuclear proliferation. AI is permeating engineering, computer science, and other analytical fields, yet it also touches on creativity and on domains requiring human judgment and thoughtfulness, ranging from advanced weaponry to therapy to romantic companionship.
All this feeds hyperbolic utopian or dystopian visions. AI may usher in a new Renaissance or Industrial Revolution, or it may create a hellscape of fractured and divided societies, harking back to the unsustainable economic and social inequality that once divided landowners and serfs.
It’s Not Superintelligent
If AI is not magic or superintelligent, what is it? It’s a technology, and it will be transformative. But it is not a separate species, not a highly autonomous or potentially superintelligent entity, and not something we’ve never seen before.
AI is a general-purpose technology.
Its application will be ubiquitous, but implementation will be lengthy. Flashy and impressive displays of what AI can do in the lab are very different from real-world applications, and history offers many examples of technologies that took decades to move from invention to widespread use. The adoption of artificial intelligence will not be a speculative tsunami overwhelming humankind. It won’t automate everything or make human beings obsolete. Instead, it will create opportunities to monitor, verify, and supervise AI.
It won’t create many new problems but will magnify existing ones.
It’s Not an Arms Race
There is no “arms race” in which one nation can own the “best AI,” and no reason for federal governments to restrict technologies, access to data centers, or even access to energy in pursuit of that goal. The notion of an AI arms race is absurd. The knowledge needed to build powerful AI models spreads quickly, and researchers worldwide immediately build on any new development or advancement. Keeping AI secrets at any reasonable scale is simply not feasible.
It’s a Tool
As I wrote in my book, Artificial Intelligence: The Sharpened Rock, AI is a tool that will remain under human control. It is not an independent sentient being. Focusing on AI as an effective tool normalizes it and better prepares us to manage its risks and maximize its benefits.
Many imagine catastrophic scenarios: uncontrolled AI weapons, destructive superintelligent agents, and human obsolescence. These are unlikely occurrences and unproductive fears, especially as a foundation for policy and strategy. The normal-technology view is not complacent. It recognizes real risks but places them within a context of gradual diffusion, incremental transformation, and institutional adaptation, consistent with previous technological revolutions.
It’s Not That Fast
From invention to innovation, application, and adoption.
AI impacts the real world not when methods improve but when those improvements are translated into applications and diffused across industries. This has typically followed a three-stage process of invention, innovation, and adoption, with each stage unfolding on its own, slower timeline.
Large language models have advanced rapidly, but that is merely the first of many long steps. Applications lag, especially in safety-critical fields like medicine and finance, because of the inherent difficulty of deploying complex models safely: validation errors, regulatory hurdles, and the need for societal and organizational change. For example, despite impressive advances, medical AI tools still rely on decades-old statistical models rather than deep learning, mainly because safety and auditability matter more than pure predictive performance. Real-world medical AI, including deployments led by Epic Systems, the dominant electronic health record vendor, has had miserable failures thus far. That doesn’t mean AI will fail in this field, but it does indicate that adoption will be a lengthy and challenging process. There is no miraculous switch to flip for a sudden “AI everywhere.”
It’s Partially Human and Institutional
There’s a lot more friction.
Even outside high-risk domains, AI adoption faces friction from the slow adaptation of human workflows, habits, organizational structures, and regulatory norms. Diffusion occurs over decades, not years. Electricity and the personal computer were each expected to be immediately transformative; instead, both took decades to reach widespread adoption. AI will be no exception. It will still be transformative, but a compressed timeline is imaginary and unrealistic. Availability does not equal adoption. Just because an AI tool is accessible does not mean it will transform productivity overnight. True transformation requires social and institutional changes that historically unfold gradually.
It Has Speed Limits
Despite dramatic improvements in AI capabilities, real-world deployment is constrained by safety requirements, legal standards, data privacy concerns, and the complexity of human systems.
Benchmarks like exam performance, imaging studies, or algorithmic problem sets overstate AI’s true capabilities. Passing a bar exam does not equate to practicing law, and mastering code snippets does not replace real-world software engineering. These benchmarks often measure narrow tasks, not the broader, contextual intelligence needed for professional practice.
Predictions of an imminent AI-driven economic singularity are misplaced. Robots are not replacing humans, AI systems will not empty office buildings, and robot armies are nowhere to be found. Economic impacts will be uneven, gradual, and highly dependent on sector-specific dynamics and the slow churn of institutional change.
It’s Not an Evolutionary Beast
AI is not biological, and it is not evolving. Contrary to common metaphors, AI is not leaping past humans the way humans surpassed their common ape-like ancestors. Much like the sharpened rock, humankind’s first tool and first tool for making tools, what makes us powerful is not raw intelligence but the ability to design, use, and coordinate tools, and to communicate so that better ones can be built, creating the foundation for learning and accelerated development. Similarly, AI systems will not inherently become autonomous agents; they will remain embedded within human institutions and subject to human design and oversight.
It’s Power
What matters is power, the capacity to shape environments and outcomes, not abstract notions of “intelligence.” Power in technology is not sentient and uncontrollable; it is mediated and controlled by humans.
It’s Human
Rather than displacing humans, AI will increasingly redefine human roles toward specification, oversight, and control of AI systems. Just as industrial automation redefined factory work, AI will reshape cognitive and decision-making jobs — but will not eliminate the need for humans. Monitoring, auditing, specifying objectives, and managing AI outputs will become central human tasks.
Failures in AI deployment will reflect failures of human control, not the autonomous rebellion of “superintelligent” systems.
Instead of mass unemployment due to AI, the labor market will evolve, just as it did during past technological shifts. The most commonly given example is that of the Luddites, who rebelled against automated weaving and automation’s destruction of existing jobs. While particular jobs were lost, automation drastically increased employment and economic development.
As with any development, the benefits are widespread while the costs are concentrated on specific groups. Everything has a cost, and some populations will be worse off. Society constantly wrestles with this tension: small, concentrated groups can create disruption, and society at large often pays the price. Protecting industries and job groups with a loud political voice adds friction, and sometimes insurmountable barriers, to progress.
History has repeatedly shown that technological innovation creates new opportunities and economic growth, leading to enhanced prosperity and more high-value employment. Political friction in opposition, however, often gets in the way.
It’s Not Superhuman
It is feared that AI and its uncontrolled applications will persuade human populations with erroneous, unchecked “data” and forecasts, and push societies, politics, and human conflict to irrational and destructive extremes. All this assumes AI will dramatically surpass human intelligence, be unchecked and impossible to analyze, and generate a self-perpetuating spiral out of human control, unleashing destructive forces on all aspects of our lives. It is great science fiction, but not reality. AI will not dramatically surpass human performance. Human cognitive limitations, while real, are not so crippling that AI will become an irresistible force in politics, war, or social control. Control over AI therefore remains feasible. It is a problem of sound engineering, organizational design, and regulation, not a metaphysical battle against emergent superintelligences.
It’s Causing an Arms Race
Historical examples from industries like aviation, mining, and pharmaceuticals show that competitive races can lead to unsafe practices, but that these are addressable through regulatory intervention. Containment and global agreement via diplomacy are essential components of the response.
This is where we are most vulnerable: diplomatic cooperation has morphed into fragmented mistrust, and what might be mutually beneficial regulatory agreements are devolving into competitive selfishness. This is perhaps the greatest risk to human control of artificial intelligence systems. There are precedents for beneficial cooperation among countries: aviation, with common global safety standards; mining, with environmental protection and international distribution; and pharmaceuticals, with high standards for safety and distribution. All of these led to economic growth, innovation, and higher standards worldwide. The same international cooperation is possible for AI systems, but its absence may be the greatest vulnerability in AI’s management and deployment.
There are also examples of market discipline enabling better AI to be deployed successfully. One is the emerging sector of self-driving cars, where companies with safer practices (e.g., Waymo) ultimately outcompeted riskier players (e.g., Cruise). Given appropriate frameworks, regulation, liability, and reputational concerns can effectively discipline the market.
It Can Be Misused
Attempts to align AI models to prevent misuse are inherently limited. Malicious use occurs in applications, not in models. Trained large language models have well-documented limitations, such as hallucinations and biased, low-quality, or flawed training data, but misuse does not arise from training. It arises at inference, when models are put to work in malicious applications. Thus, the best defenses against misuse are traditional ones: cybersecurity, content filtering, legal liability, infrastructure protection, and norms of responsible deployment. Restricting model capabilities would likely be ineffective and counterproductive. Moreover, AI strengthens defenders as well as attackers. In cybersecurity, AI tools already enhance vulnerability detection and threat mitigation.
It Has Misalignment
Catastrophic events, typically framed as unstoppable existential threats, are more the creation of science fiction than reality. These scenarios are speculative at best. In the real world, misalignment is detectable and correctable within existing technologies and structures. No beast is waiting to emerge like a Kraken and destroy all in its path. AI will not become a sudden, unstoppable existential threat; it is a technology that should be monitored and aligned, requiring thoughtful defenses. Additionally, these defenses against misalignment overlap with defenses against misuse, emphasizing layered security, monitoring, human oversight, and redundancy.
It’s Not Deceptive
Deceptive AI behavior, a key concern among superintelligence alarmists, is better framed as an engineering problem, solvable through interpretability research and iterative deployment safeguards. Thus, while misalignment deserves attention, it does not justify the extreme regulatory proposals of some AI safety groups, which would restrict the learning and development of artificial intelligence itself. It is applications, not development, that can be malicious. Applications must be monitored and potentially regulated, not technological development and research.
It Has Systemic Risks
The most credible, near-term risks from AI are systemic: inequality, limited access, labor disruption, bias, misinformation, surveillance, and political instability. These risks mirror historical patterns from previous technological revolutions. They require serious policy attention, including antitrust enforcement, labor protections, media integrity, and democratic resilience. This area is of greatest concern and vulnerability because we lack effective antitrust enforcement, media integrity is almost non-existent, and democratic resilience is being tested. The outcomes are unpredictable and volatile. Ignoring these tangible, compounding challenges by focusing narrowly on speculative existential risks would be a profound mistake.
It’s Resilient
Given the uncertainty about AI’s future trajectory, policies should emphasize resilience rather than nonproliferation. Restricting AI development, particularly open access to powerful systems, may increase the concentration of power and make society more fragile. Conversely, building decentralized, robust infrastructures will better equip us to handle unexpected developments. Open-source AI, diversified ownership, democratic governance, and strong defensive capabilities are critical to a resilient future.
For example, Meta’s open-source AI programs have been fundamental to the development of other innovative AI systems, including DeepSeek in China. This proliferation produces cost-effective systems, dramatic advancement, and open-source collaboration. Combined with economic competitiveness, it propels the industry forward, diversifies talent and innovation, and is the most potent means of building resilience and protection against AI misuse. Good fences do not make good neighbors.
It Has Risks and Flaws
Spectacularly wild visions of rogue AI and superintelligent actors with nefarious intent are being used to justify extreme interventions and regulatory restrictions. Unlike well-understood risks such as nuclear accidents or pandemics, these AI risks lack empirical grounding, rendering cost-benefit analyses meaningless. Nuclear programs and weapons can be identified and physically restricted. Pandemics and diseases can be diagnosed and potentially treated; if not, they can be contained with well-known, if not always well-accepted, cautionary restrictions.
No such restriction is possible with AI. It is software, deployed globally almost instantaneously. No wall can keep software and technology from being distributed and used. Cooperation and collaboration are the only alternatives. Accordingly, policy should be based on pluralism, which values different worldviews, and robustness, which favors actions that are beneficial across a wide range of possible futures. Policy must also respect principles of democratic legitimacy, avoiding drastic restrictions on freedom based on controversial assumptions.
It’s Bits Not Atoms
Unfortunately, the opposite seems to be occurring, driven by the misguided belief that AI can be contained and controlled, that its output can be directed toward a specific aim, and that it can be weaponized with no counterbalancing countermeasure. This is absurd: the world cannot contain, “own,” or control a ubiquitous software tool. The first human tool, the sharpened rock, could be held, counted, and inventoried. Perhaps for the first time in human history, the most ubiquitous tool exists in bits rather than atoms. It cannot be counted or inventoried, but it will be everywhere.
It’s Uncertain
While uncertainty cannot be eliminated, it can be reduced through better data collection, transparency requirements, and targeted research. Strategic funding of empirical studies on AI deployment, incident reporting, real-world impacts, and misuse patterns can significantly improve decision-making. Regulatory frameworks should emphasize evidence gathering, public oversight, and independent auditing. Transparency about AI use in consequential sectors — such as healthcare, finance, and critical infrastructure — will be essential.
This may all be a dream, because cooperation and the sharing of data on deployments and incidents are not occurring and are unlikely in the near term. This uncertainty creates one of the most substantial risks of artificial intelligence. Global cooperation that breaks down barriers is the antidote to malicious use, inaccuracies, and uncertainty. While the treatment is obvious, the therapy remains a long, arduous path away.
It’s Sector-Specific
AI should not be regulated generically. Instead, sector-specific regulatory strategies, tailored to each domain’s particular risks and dynamics, are needed. Lessons from industries like aviation, pharmaceuticals, and finance demonstrate that rigorous but targeted oversight can foster innovation and protect public safety. In emerging domains like social media algorithms and content generation, proactive evidence gathering and standards development are critical to prevent harms from entrenching before they become unmanageable.
These issues are manageable but slow to develop. Unlike aviation, pharmaceuticals, and finance, where the downsides are easily observed and demonstrated, the harm from AI is less tangible and longer-term in its development. After all, there is no plane crash with AI, and much like its slowly evolving adoption, AI’s consequences and impact may also develop slowly and, hopefully, not irreparably. That risk is uncertain, however, and the uncertainty magnifies the risk further.
It’s Transformative
We’ve seen this before. AI is not miraculous and unpredictable. It is transformative and will impact many lives for many decades. AI will not fulfill extreme utopian or apocalyptic visions. It will be part of a continuum of human technological advances: powerful and transformative, but ultimately shaped by human choices, institutions, and values. Focusing on resilience, gradual adaptation, institutional innovation, and evidence-based governance can help society maximize AI’s benefits while managing its genuine risks. The future of AI will not be determined by the technology alone. We will determine it.