A version of this article appeared in Issue 19 of Word on Fire’s Evangelization & Culture journal on artificial intelligence, available here.
I have a confession to make: I’m not afraid of artificial intelligence. But maybe I should be.
With artificial intelligence (AI), humans have given machines the ability not only to do what they are programmed to do, but also to learn to act ever more effectively, acquiring new abilities and strategies that humans can neither predict nor control.
At first glance, it is something to fear. But as a man born in 1969, I’ve spent my entire life in existential fear of forces I can neither understand nor control, and it’s starting to get a little tiresome.
I myself have joined my fellow Americans in fear of world famine, fear of a new ice age, fear of Japanese economic domination, fear of Chinese economic domination, fear of economic collapse, and fear of terrorists. The world is discovering new things to be afraid of, but I see that there is nothing new under my sun.
I hear concerns about political extremism, but I remember watching Patty Hearst on the evening news. My kids have seen UFOs with their own eyes on YouTube, but I once saw UFOs with my own eyes on the daily news. Critics talk about the credible threat of artificial intelligence disasters, but I survived the credible threat of the Y2K disaster.
Some of these fears were well founded; some of them were entirely baseless; and in either case, our fears actually helped us put up safeguards and move forward. That’s how it’s always been with technology.
Every technology has its pros and cons, but the Church’s reaction has always been the same: embrace it. Thus, the first book off the printing press was Gutenberg’s Bible, the Catholic Vulgate. But then, after Martin Luther posted his Ninety-five Theses the old-fashioned way on the church door, the printing press made them the first message to “go viral.”
In 1931, Guglielmo Marconi put Pius XI on the radio, introducing the broadcast with what may be the Church’s technological mission statement: “With the help of God, who places so many mysterious forces of nature at man’s disposal, I have been able to prepare this instrument that will give the faithful throughout the world the joy of listening to the voice of the Holy Father.” The Church has continued to use the mysterious technological forces of the phonograph, cinema, television, CDs, and the Internet to offer the world this joy again and again in the century since.
I am a man of the Church, so I embrace new technologies and refuse to join the fearful in their bunkers. For me, this is a case of “Fool me once, shame on you. Fool me with every dystopian movie for decades, with every election since Reagan, with every recession since Carter, and with a lifetime of unspeakable terror of modernity, then, well, shame on me.”
The boy cried wolf my whole life and I got used to the sound of his voice.
But then I remember something deeply disturbing: what makes “The Boy Who Cried Wolf” such a compelling story is not that the boy was repeatedly wrong; it’s that the last time he cried wolf, he was RIGHT.
So, is he right this time?
The Gorilla Problem
What are we afraid of about AI? I think we fear what a chess piece would fear if it could.
In The Age of AI, Henry Kissinger and his co-authors describe how AlphaZero beat Stockfish at chess in 2017. Stockfish is an old-school computer chess opponent: programmers fed the best of human strategy into a machine that could recall the best moves of all time in an instant. AlphaZero, developed by Google’s DeepMind, was not given any information about human strategy. It was simply given the rules and objective of the game.
After just four hours of training by playing games against itself, AlphaZero beat Stockfish, winning 155 games and losing only 6, with the rest drawn. What was frightening was how it won. AlphaZero had sacrificed its most valuable pieces, including its queen, to attack its enemy with a cold efficiency greater than any human mind could ever have imagined.
“Chess has been shaken to its roots,” grandmaster Garry Kasparov said after the match. Kissinger and his team fear that “world security and order” will also be shaken to their roots. AI’s unique capabilities mean that “delegation of critical decisions to machines may become inevitable.” And if that happens, what precious knights and queens will AI sacrifice to achieve its goals?
Artificial intelligence entrepreneur Mustafa Suleyman, in his book The Coming Wave, fears that his own companies, DeepMind and Inflection AI, could unwittingly contribute to the rise of a new kind of superpower.
He imagines a future where “anyone with a college education in biology or an enthusiasm for self-directed online learning” could acquire a DNA synthesizer and “create new pathogens that are far more transmissible and deadly than anything found in nature.” Other malicious actors may go beyond “garage tinkerers” who use AI technologies as weapons in ways we literally cannot imagine.
A tsunami of AI applications, he says, will wipe out our conventional wisdom, as well as our safety and security. In fact, “garage tinkerers” and malicious actors may be better positioned to make AI breakthroughs than bureaucracies floundering in legal constraints and due diligence. Suleyman fears a colossal shift in power, a rapid “hyper-evolution” of AI capabilities, an endless acceleration of AI applications toward “ubiquity,” and ultimately asks, “Will humans be in the loop?”
“Throughout history, technology has been just a tool,” Suleyman said. “But what if the tool came to life?” We would then face the “gorilla problem”: in the same way that weaker humans put more powerful animals in zoos, AI “could mean that humanity will no longer be at the top of the food chain.”
Descent into Egypt
I asked Dr. Charles Sprouse of the School of Engineering at Benedictine College in Kansas, where I work, about our fears of AI. He gave me a remarkable list that proves that fear, like politics, is both global and local.
Sure, we fear guns, drones, and AI robots that hunt and kill with superhuman strength and prowess. But we also fear autonomous vehicles: what decisions will they make, and what malfunctions will change those decisions?
We also fear “fake news” on steroids, as clever programmers with dubious agendas mislead the masses with politically charged deep fakes. But we should also fear fake communications: Once I start using the capabilities of the metaverse to chat in virtual reality with my wife, how can I be sure that it is really my wife I am talking to?
We fear government surveillance by machines that can recognize our faces, bodies, and gaits, and monitor what we do in our gardens. But we should also fear corporate AIs that know what we like to eat and how much, where we hang out and how often, and what we think about when we’re online.
Many of us fear that technology will take our jobs: Writers, legal professionals, and teachers fear ChatGPT, but software designers, drug researchers, and lab technicians have equally powerful tools to fear.
All these fears seem (at first glance) very new, different from the old ones. But are they really?
In short, we fear the monster AI or the master AI – a Terminator that doesn’t and can’t care about what gets in its way, or a Matrix that enslaves us for its own ends. AI could take away our autonomy, our freedom, our chosen lifestyle, and our privacy – or it could wipe out civilization as we know it.
But is this really a new type of fear?
In fact, AI is more like a throwback to the slave masters of Egypt, back when “a new king arose over Egypt, who did not know Joseph. He said to his people, ‘Behold, the people of Israel are more numerous and stronger than we are. Come, let us deal wisely with them’” (Exodus 1:8-10). And while we fear robot drones, if you remember the Old Testament, entire tribes were wiped off the map with impunity at that time as well.
It would be the height of irony if all our ingenuity, divorced from God, had merely constructed a new and greater slave master: an artificial pharaoh enlisting us in a vast exercise of building pyramid monuments to Mammon, in a project that none of us can imagine because its scope is too great for any single human mind to assimilate.
But maybe that’s not the real fear after all.
The real monster is loneliness
I started by saying that I wasn’t afraid of artificial intelligence, and I’m not. At least, not in the way I described it. One thing I’ve learned in my lifetime of using new technologies is that we’re always afraid of the wrong thing.
Perhaps what we should really fear is what Sigmund Freud describes in Civilization and Its Discontents. He wrote:
“If there had been no railway to conquer distances, my child would never have left his native town, and I should not need the telephone to hear his voice; if ocean travel by boat had not been introduced, my friend would not have embarked on his sea voyage, and I should not need a cable to allay my anxiety about him.”
We feared the dire consequences of each of these technologies—everything except the far worse consequence that each of them brought us: loneliness.
And that’s what we should fear most about AI: a world in which we become even more separated from what makes us human: each other.
Image: Bua Noi