Nicholas Mitsakos

When Theory Becomes Reality

Stunning capabilities are emerging from large language models like GPT-4 that, until very recently, were thought to be only theoretical. We assumed we could never have the data sets or the processing power to generate real and usable results. Well, all that has changed, rather suddenly.

But is it time for the torches and pitchforks? What are the serious risks that accompany this technology?

Artificial Intelligence as a Tool or a Master?

Machines are on track to be a lot smarter than just about anyone imagined, even leaders in artificial intelligence such as Geoffrey Hinton, an AI pioneer who recently left Google. He fears it will be hard to prevent bad actors from using powerful new artificial intelligence tools to do very bad things, and he feels something dangerous is being released into the wild.

But, while people have always done bad things with useful tools, is this a risk to humanity? AI systems are undoubtedly becoming more dangerous, but that same vast power also makes them incredibly useful tools that can serve society beyond anything previously imaginable.

It’s Hypothetical and Inevitable

The enormous threats and dangers outlined by people like Prof. Hinton may be more theoretical than realistic. Also theoretical is the idea that global cooperation could manage and regulate AI's applications. That is simply not going to happen.

The genie is out of the bottle. So now what? We have a powerful tool, and as I mentioned in a previous article, AI is the “sharpened rock” of the modern era: a tool enabling humanity to create extraordinary new applications, designs, products, and services. That is going to happen, and risks, falsehoods, misuse, and danger will come along with it. Neither the progress nor the peril can be prevented.

Learning and Rewiring

Machines learn by building a “neural network” of interconnections that can be revised, backpropagated, and reconnected. Combined with large data sets and advanced processing power, this yields an artificial “brain” that can be rewired in real time.

In other words, software that can learn. It is refined symbolic reasoning. It is not a biological brain and never will be. But large language models are proving to have extraordinary learning algorithms: they rewire and reconnect, then learn even more, refining their output, constantly improving, and delivering, at times, extraordinary results.
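For readers who want a concrete picture, here is a minimal sketch of that idea: a toy network whose connection weights are adjusted by backpropagation until its answers improve. The architecture, data set, and settings below are illustrative assumptions only; real large language models are transformer networks trained at vastly greater scale.

# Minimal illustration (Python): a tiny neural network "rewired" by backpropagation.
# Purely a sketch -- not how GPT-4 or any production model is actually built.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: the XOR problem, which a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights are the "interconnections": 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through the connections
    # (gradients of the squared error with respect to each weight).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # "Rewire" the connections: nudge every weight against its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]] once trained

The point of the sketch is the loop itself: present data, measure the error, and revise the connections, over and over. Scale that loop up by many orders of magnitude in data, parameters, and computing power, and you get the behavior described above.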

Now for the Tricky Part

Things can go very wrong. Large language models are a powerful tool, but as Ani DiFranco sang,

“Every tool is a weapon if you hold it right.”

While there is no question that machines will become smarter than humans, is this something to fear and try to prevent (if that is even possible now), or will it usher in a new renaissance for humanity? Machines may be smart, but does that mean they will dominate humanity?

We’ve Done This Before…

Humanity is comfortable with very smart tools that move society forward, and that progression will be accelerated by these new AI tools. But it won't be benign or risk-free. Excessive fear should not paralyze reasonable debate and policy, or block the advancements and progress, otherwise unattainable, that AI creates.

…And We Haven’t

Artificial intelligence, especially large language models such as ChatGPT, is hyped, overblown, and feared, and it is not well understood or managed.

The torches and pitchforks are out for what AI can do, and even more for what it could do. There is a different way to think about AI: as a way to create efficiency and value.

Too much is expected of, and feared from, current models. The reality is that too much is unknown: useful models still struggle to scale and generalize, and they remain relatively inefficient, demanding massive amounts of data and computing power (with an enormous carbon footprint). Yet the trajectory is clear: smart, useful machines driven by artificial intelligence software will bring humanity into a new era.

There will be good and bad, like every new era. Will it be the Middle Ages all over again, with the Plague before the Renaissance, or will it be more balanced and reasonable? Whenever humanity advances, there is good, bad, and much in between.

Let it happen. Put down the pitchforks.

 
