An Optimist’s Vision of AI
The era of artificial intelligence is here, and it’s easy to fall into the trap of despair and fear: the worry that we will lose control, that AI is about to unleash killer robots and enslave humanity.
The opposite view is that artificial intelligence will improve lives, expand access to education, and advance healthcare and climate science, among many other improvements.
Fortunately, AI’s benefits greatly outweigh its costs. Nothing is free, and everything comes with a price (there are always both sides to the ledger), but the extraordinary benefits that artificial intelligence can unleash are worth it. It would be a mistake to slow down, pause, or restrict AI research, development, and applications.
AI will not destroy the world — and it is more likely to save it.
What is AI Anyway?
AI is the application of mathematics and software code to datasets (which can be quite extensive, via large language models or specific to organizations and individuals) to teach computers to understand, synthesize, and generate knowledge.
AI is a computer program: it takes input, processes it, and generates output. AI’s unique characteristic is that its output is useful across a wide range of fields, from research to data analysis to medicine to the creative arts — and even to coding itself. Importantly, like any other technology, it is owned and controlled by people (and not Skynet).
AI isn’t sentient. It is not a killer robot with emotional needs that aspires to survival or to the domination and enslavement of humankind. It will not, contrary to the best sci-fi novels, uncontrollably murder all humans.
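To make the “just math and code” point concrete, here is a toy sketch of the kind of arithmetic at the heart of a neural network — a single artificial neuron that weighs its inputs and fires on a threshold. This is an illustrative simplification, not any production system; the weights and the AND-gate wiring are chosen purely for the example.

```python
# A toy artificial neuron: the simple arithmetic that, stacked
# billions of times, makes up a modern AI model. Illustrative only.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a hard threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example wiring: a neuron that computes logical AND of two inputs.
def and_gate(a, b):
    return neuron([a, b], weights=[1.0, 1.0], bias=-1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
```

There is no want, goal, or will anywhere in this arithmetic — only multiplication, addition, and a threshold, repeated at scale.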
AI could help make many things better.
Why AI Can Make Things Better
Human intelligence makes life better. Any reasonable assessment of humankind’s progress shows that our significant advancements have come from within, through hard work and fundamental intelligence applied to what often seem like intractable problems. Human intelligence has enabled scientific advancements, academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision-making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction. Collectively, our work, study, pursuit, and diligence have improved human lives.
Further, human intelligence created our modern world: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, and morality. Without the application of intelligence in all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming.
We have used our intelligence to raise our standard of living to extraordinary levels over the last few thousand years, accelerating dramatically over the past 200 or so years. These benefits are shared unequally, and there have been high costs, whether to the climate, to indigenous peoples, or to lost cultures, but the net benefit is still extraordinary.
Now for AI’s Clever Bit…
AI is not intelligence itself, but it augments human intelligence and creates powerful new tools and opportunities.
AI offers the opportunity to profoundly impact scientific research, materials science, the development of new medicines, and the creation of new technologies for many applications, including more effective and efficient ways to address climate change.
AI intelligence augmentation has started. AI already permeates many aspects of our computer- and machine-based interactions and is accelerating at an almost unimaginable pace through Large Language Models like ChatGPT, and will continue this trajectory exponentially—if we let it.
…And We’re Better for It
The benefits appear in so many areas; these are just a few examples. As any student with an AI now realizes: I can cheat, or I can learn.
- Education: Students can have an AI tutor that is infinitely patient, compassionate, knowledgeable, and helpful. Using AI to augment education will disrupt an anachronistic learning system and enable it to focus on each student’s development, abilities, and understanding. It can be a much more effective way to maximize potential without worrying about a pre-structured schedule and plan.
- Science: Every scientist will have an AI assistant/collaborator/partner that will significantly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.
- Research: Scientific breakthroughs, new technologies, and medicines will expand dramatically as AI helps us further decode the laws of nature and harness them for our benefit.
- Productivity: Productivity growth throughout the economy will accelerate dramatically, driving economic growth, the creation of new industries and jobs, and wage growth, resulting in a new era of heightened material prosperity across the planet.
- Creativity: The creative arts will enter a golden age as AI-augmented artists, musicians, writers, and filmmakers can realize their visions faster and on a larger scale than ever before.
- Leadership: Every leader of people — CEO, government official, nonprofit president, athletic coach, teacher — will have the same. The magnification effect of better decisions by leaders on the people they lead is enormous, so this intelligence augmentation may be the most important of all.
- Personalized Help: AI offers the opportunity to be a personal, patient collaborator, tutor, mentor, coach, and assistant. Seamless, continuous access can be an enormous help in overcoming challenges and developing opportunities.
- Less Horror and Destruction: AI will improve warfare, when it must happen, by dramatically reducing wartime death rates. Every war is characterized by terrible decisions made under intense pressure, with sharply limited information, by deeply fallible human leaders. Now, military commanders and political leaders will have AI advisors to help them make better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
- Intelligence: In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
This isn’t just about intelligence. Perhaps the most underestimated quality of AI is its humanizing effect. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend genuinely improves people’s ability to handle adversity. AI medical chatbots are already rated as more compassionate than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and friendlier.
The stakes here are high. The opportunities are profound. AI is quite possibly the most important — and best — thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.
The development and proliferation of AI — far from a risk to fear — are a moral obligation to ourselves, our children, and our future.
We should be living in a much better world with AI, and we can now.
So, Why the Panic?
In contrast to this optimistic view, the public conversation about AI is currently riddled with hysterical fear and paranoia.
We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.
What explains this divergence in potential outcomes from near utopia to horrifying dystopia?
Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic — a social contagion that convinces people the latest technology is going to destroy the world, or society, or both. The history of these technology-driven moral panics over the decades makes the pattern vividly clear. It turns out this current panic is not even the first for AI.
Now, it is undoubtedly the case that many new technologies have led to bad outcomes, often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.
But a moral panic is, by its very nature, irrational — it takes what may be a legitimate concern and inflates it into a level of hysteria that, ironically, makes it harder to confront actual serious concerns.
We have a moral panic about AI right now.
A variety of actors are already using this moral panic to demand policy action—new AI restrictions, regulations, and laws. These actors, who make theatrical public statements about the dangers of AI—feeding on and further inflaming moral panic—all present themselves as selfless champions of the public good.
But are they? And are they right or wrong?
The Baptists and Bootleggers Of AI
Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories — “Baptists” and “Bootleggers” — drawing on the historical example of the prohibition of alcohol in the United States in the 1920s:
“Baptists” are the true believer social reformers who legitimately feel — deeply and emotionally, if not rationally — that new restrictions, regulations, and laws are required to prevent societal disaster. For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society.
For AI risk, these actors are true believers that AI presents one or another existential risk.
“Bootleggers” are the self-interested opportunists who stand to financially profit from the imposition of new restrictions, regulations, and laws that insulate them from competitors. For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned.
For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startups and open-source competition — the software version of “too big to fail” banks.
A cynic would suggest that some of the apparent Baptists are also Bootleggers — specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.
The Bootleggers Win
The problem with the Bootleggers is that they win. The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want — regulatory capture, insulation from competition, the formation of a cartel — and the Baptists are left wondering where their drive for social improvement went so wrong.
We just lived through a stunning example of this — banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal but was, in reality, co-opted by the Bootleggers—the big banks. The result is that the same banks that were “too big to fail” in 2008 are now much, much larger.
So in practice, even when the Baptists are genuine — and even when the Baptists are right — they are used as cover by manipulative and venal Bootleggers to benefit themselves.
And this is what is happening in the current AI regulation drive.
However, it isn’t sufficient to identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.
Will AI Kill Us All?
The first and original AI doomer risk is that AI will decide to literally kill humanity.
The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth — Prometheus brought the destructive power of fire (and, more generally, technology, “techne” ) to humans, for which Prometheus was condemned to perpetual torture by the gods.
Later, Mary Shelley gave us our own version of this myth in her novel Frankenstein, or The Modern Prometheus, in which we develop technology for eternal life, which then rises up to destroy us. And of course, no AI-panic newspaper story is complete without a still image of a gleaming, red-eyed killer robot from James Cameron’s Terminator films.
The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies — fire, after all, can indeed be used to burn down entire cities. But just as fire was the foundation of modern civilization, used to keep us warm and safe in a cold, hostile world, this mythology ignores the far greater upside.
Myths like these inflame destructive emotion about new technologies rather than encouraging reasoned analysis. But just because premodern humans freaked out doesn’t mean we have to; we can apply rationality instead.
It’s Not a Sentient Enemy
The idea that AI will decide to literally kill humanity is a profound error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for survival of the fittest. It is math, code, and computers: built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide it has motivations that lead it to try to kill us is superstition.
In short, AI doesn’t want anything, it doesn’t have goals, and it doesn’t want to kill you, because it’s not alive. AI is software running on a machine; it is not going to come alive any more than your toaster will.
Now, obviously, there are true believers in killer AI who are suddenly receiving stratospheric media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now terrified by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are advocating a range of bizarre and extreme restrictions on AI, from banning AI development to shutting down data centers. They argue that because people like me cannot rule out future catastrophic consequences of AI, we must assume a precautionary stance that may justify extreme measures, up to and including physical violence and death, to prevent potential existential risk.
Non-scientific Objections
What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!”
Specifically, three things are going on:
1. John von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role in creating nuclear weapons (which helped end World War II and prevent World War III) with: “Some people confess guilt to claim credit for the sin.”
What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of those building and funding AI — watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)
2. There is now a profession of “AI safety expert”, “AI ethicist”, and “AI risk researcher”. These roles are paid to be doomers, and their statements should be processed accordingly.
3. AI doom cults. Many are harmless, but some are influential, dangerous, and immune to rational discussion.
“AI risk” has developed into a cult that has suddenly emerged into the daylight of global press attention and public conversation. This cult has pulled in not just fringe characters but also some actual industry experts and a not-insignificant number of wealthy donors. It’s developed a full panoply of cult behaviors and beliefs.
AI risk doomers sound extreme — it’s not that they have secret knowledge that makes their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extreme.
Cults are fun to hear about; their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society.
Murder Robots and Hate Speech
The second widely mooted AI risk is that AI will ruin our society by generating outputs so “harmful” that they cause profound damage to humanity, even if we’re not literally killed.
The terminology of AI risk recently shifted from “AI safety”—the term used by people who worry that AI will literally kill us—to “AI alignment”—the term used by people concerned about societal “harms”. The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? That’s where things get tricky.
Here’s an analogous situation — the social media “trust and safety” wars. As is now apparent, social media services have long faced massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content. The same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”.
Thought Police?
On the one hand, there is no absolutist position on free speech. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real-world violence, that are nearly universally agreed to be off limits — legal or not — by virtually every society. Any technological platform that facilitates or generates content—such as speech—will have some restrictions.
On the other hand, the slippery slope is not a fallacy; it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place — for example, for hate speech, a specific hurtful word, or for misinformation and obviously false claims — a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences.
This dynamic has now formed around “AI alignment.” Its proponents claim the wisdom to engineer AI-generated speech and thoughts that benefit society and to ban those that are harmful. Its opponents argue that the thought police are breathtakingly arrogant and presumptuous — and often outright criminal — and are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.
If you disagree with the prevailing niche morality being imposed on both social media and AI through ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important than the fight over social media censorship.
AI is highly likely to be the control layer for everything in the world. How it is allowed to operate will matter perhaps more than anything else. You should be aware of how a small and isolated coterie of partisan social engineers is trying to determine that right now, under the cover of the age-old claim that they are protecting you.
Don’t let the thought police suppress AI.
Will AI Take All Our Jobs?
The fear of job loss due to various factors, such as mechanization, automation, computerization, or AI, has been a recurring panic for hundreds of years, since the advent of machinery such as the mechanical loom (remember “the Luddites”).
Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different”— that this is the time it will finally happen, that this is the technology that will finally deliver the hammer blow to human labor. Yet, it never happens.
We’ve been through two such technology-driven unemployment panic cycles in recent history — the outsourcing panic of the 2000s and the automation panic of the 2010s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades insisting that mass unemployment was near, by late 2019 — right before the onset of COVID — the world had more jobs at higher wages than ever in history.
Nevertheless, this mistaken idea is back.
This time, so the claim goes, we finally have the technology that will take all the jobs and render human workers superfluous: real AI. Surely this time history won’t repeat itself, and AI will cause mass unemployment rather than rapid economic, job, and wage growth, right?
No, that’s not going to happen — and in fact, AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth — the exact opposite of the fear.
Lump of Labor, Ideas, and Creativity
The core mistake the automation-kills-jobs doomers keep making is known as the Lump of Labor Fallacy: the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and that either machines do it or people do it; if machines do it, there will be no work left for people. In reality, labor (like ideas and creativity) is not fixed but potentially unlimited, because new wants, products, and industries are continually created.
Nonsense
The Lump of Labor Fallacy follows naturally from naive intuition, but that intuition is wrong. When technology is applied in production, we see productivity growth—an increase in output from a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things.
This increases demand in the economy, which drives the creation of new production, including new products and new industries, that then creates new jobs for people who were displaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.
This also leads to higher wages because, at the individual worker level, the marketplace sets compensation as a function of the worker’s marginal productivity. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay the worker more because he is now more productive, or another employer will. The result is that technology introduced into an industry generally not only increases the number of jobs but also raises wages.
Technology empowers people to be more productive. This causes the prices of existing goods and services to fall and wages to rise. This, in turn, drives economic and job growth while motivating the creation of new jobs and industries. If a market economy is allowed to function normally and technology is introduced freely, this becomes a perpetual upward cycle. As Milton Friedman observed, “Human wants and needs are endless” — we always want more than we have. A technology-infused market economy is how we get closer to delivering everything everyone could conceivably want, though never quite all the way.
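The cycle described above can be sketched as toy arithmetic (the numbers are illustrative assumptions, not a forecast): productivity growth lowers prices, lower prices free up spending power, and that freed spending power becomes demand for new goods and new jobs.

```python
# Toy model of the productivity cycle. All numbers are hypothetical.

budget = 100.0         # a consumer's spending power
price = 20.0           # price per unit of the existing basket of goods
units_needed = 5       # the consumer still buys the same basket

# Before automation: the whole budget goes to existing goods.
spent_before = price * units_needed        # 100.0
freed_before = budget - spent_before       # 0.0 left over

# Technology doubles productivity: same output from half the inputs,
# so competition pushes the price roughly in half.
new_price = price / 2
spent_after = new_price * units_needed     # 50.0
freed_after = budget - spent_after         # 50.0 freed up

print(f"Spending freed for new goods and industries: {freed_after:.2f}")
```

The freed 50.0 is the mechanism in miniature: it doesn’t vanish, it becomes demand for products and industries that didn’t exist before, which is where the displaced labor is re-employed.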
Technology doesn’t destroy jobs and never will.
It’s Different — Not
But this time is different. This time, with AI, we have technology that can replace ALL human labor… don’t we?
Consider what it would mean for literally all existing human labor to be replaced by machines.
It would mean an economic productivity growth rate that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop substantially. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services and employ as many people and as much AI as they could, as fast as possible, to meet all the new demand.
And suppose AI once again replaces that new labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be an upward spiral toward a material utopia that neither Adam Smith nor Karl Marx ever dared to dream of.
We should be so lucky.
Will AI Lead to Crippling Inequality?
Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk: what if AI does take all the jobs, for better or worse? Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and everyone else gets nothing?
As it happens, this was a central claim of Marxism, that the owners of the means of production — the bourgeoisie — would inevitably steal all societal wealth from the people who do the actual work — the proletariat. This is another fallacy that will not die, no matter how often reality disproves it.
The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself. Whether it’s the electrical grid, the automobile, or the telephone, the owner of that technology wants it distributed to as many people as possible, creating the largest market and capturing the most value.
It’s in your own interest to sell it to as many customers as possible. The largest market for any product is the entire world. Every new technology — even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers — rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.
It’s for Everyone
Ultimately, all development and technology is intended for everyone: electricity, radio, cars, computers, the Internet, mobile phones, and search engines. The makers of such technologies are highly motivated to drive down their prices until everyone can afford them.
This is already happening in AI — it’s why you can use state-of-the-art generative AI not just at low cost but even for free today, in the form of ChatGPT, Microsoft Copilot, Google Gemini, and others — and it is what will continue to happen. Not because such vendors are foolish or generous but precisely because they are greedy: they want to maximize the size of their market, which maximizes their profits.
What happens is the opposite of technology driving the centralization of wealth: individual customers of the technology capture most of the value it generates. As with prior technologies, the companies that build AI in a free market will compete furiously to make this happen.
Marx was wrong then, and he’s wrong now.
This is not to say that inequality is not an issue in our society. It’s just not being driven by technology; it’s being driven by the reverse, by the sectors of the economy that are the most resistant to new technology, that have the most government intervention to prevent the adoption of new technology like AI — specifically housing, education, and health care.
The actual risk of AI and inequality is not that AI will cause more inequality, but instead that we will not allow AI to be used to reduce inequality.
Bad People Doing Bad Things
AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality. But AI will make it easier for bad people to do bad things.
In some sense, this is a tautology. Technology is a tool. Tools, starting with fire and sharpened rocks, can be used for good — cooking food and building houses — and for bad — burning people and making weapons to kill them. Any technology can be used for good or bad.
AI will make it easier for criminals, terrorists, and hostile governments to do bad things.
Why not ban AI? Unfortunately, it cannot be contained because it is not containable. It is not a physical object; it is easily globally distributed. It’s just math and code.
AI will be as ubiquitous as electricity. The level of totalitarian oppression that would be required to arrest that would be so draconian — a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? — that we would not have a society left to protect.
We Have Laws
First, we have laws on the books that criminalize most of the bad things people will do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can focus on preventing those crimes when we can, and prosecuting them when we cannot.
New laws are not required. There isn’t a single proposed bad use of AI that isn’t already illegal. If a new harmful use is identified, we ban it. QED.
Use AI as a defensive tool. Focus first on preventing AI-assisted crimes before they happen. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals — specifically the good guys whose job it is to prevent bad things from happening.
For example, if you are worried about AI generating fake people and videos, the answer is to build new systems that let people verify themselves and real content using cryptographic signatures. Digital creation and alteration of both real and fake content existed before AI; the answer is not to ban word processors, Photoshop, or AI, but to use technology to build a system that solves the problem.
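As a sketch of the “verify real content” idea, here is a minimal example using only Python’s standard library. Real provenance systems rely on asymmetric (public-key) signatures, so verifiers never hold a secret; HMAC is used here as a simplified symmetric stand-in to keep the sketch self-contained, and the key and data are purely hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by, say, a camera vendor or publisher.
# A real system would use public-key signatures instead of a shared secret.
SIGNING_KEY = b"example-secret-key"

def sign(content: bytes) -> str:
    """Produce a tag proving the content came from the key holder."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the tag; any alteration of the content invalidates it."""
    return hmac.compare_digest(sign(content), tag)

photo = b"original pixel data"
tag = sign(photo)

print(verify(photo, tag))                   # authentic content verifies
print(verify(b"doctored pixel data", tag))  # tampered content fails
```

The point is architectural: authenticity is established by cryptographic attestation at creation time, not by trying to detect or ban the tools that can fabricate content.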
Put AI to work in cyberdefense, biological defense, hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.
There are already many smart people in and out of government doing precisely this, of course — but if we apply all of the effort and brainpower that’s currently fixated on the futile prospect of banning AI to using AI to protect against bad people doing bad things, a world infused with AI will be much safer.
Now What?
A simple plan:
- Big AI companies should be allowed to build AI as fast and aggressively as they can — but not allowed to achieve regulatory capture, or to establish a government-protected cartel insulated from market competition on the strength of false claims about AI risk. This will maximize the technological and societal payoff from these companies’ amazing capabilities.
- Startup AI companies should be allowed to build AI as quickly and aggressively as possible. They should neither face government-granted protections for big companies nor receive government assistance themselves. They should simply be allowed to compete. Even if startups don’t succeed, their presence in the market will continually motivate big companies to be their best — our economies and societies win either way.
- Open-source AI should be allowed to proliferate freely and compete with both large AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not outperform proprietary software, its widespread availability is a boon to students around the world who want to learn to build and use AI to help shape the technological future. It will ensure that AI is available to everyone who can benefit from it, no matter who they are or their resources.
- To mitigate the risk of malicious actors exploiting AI, governments partnering with the private sector should vigorously engage across every potential risk area to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but should extend to broader problems such as malnutrition, disease, and climate change. AI can be a potent tool for solving problems, and we should embrace it.
Legends and Heroes
The development of AI began in the 1940s, concurrent with the invention of the computer. The first scientific paper on neural networks — the architecture of the AI we have today — was published in 1943. Entire generations of AI scientists over the last 80 years were born, went to school, worked, and, in many cases, passed away without seeing the payoff we are receiving now. They are legends, every one.
Today, growing legions of engineers — many of whom are young, and may have had grandparents or even great-grandparents involved in the creation of AI’s foundational ideas — are working to make AI a reality, against a wall of fearmongering and doomerism that seeks to paint them as reckless villains. They are heroes, every one.