The rise of ChatGPT and similar artificial intelligence systems has led to a surge in AI fears. Over the past few months, corporate executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will cause a major catastrophe.
Concerns peaked in May 2023 when the Center for AI Safety, a nonprofit research and advocacy group, released a one-sentence statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by many key figures in the field, including leaders of OpenAI, Google, and Anthropic, as well as two of the so-called “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio.
One might wonder how such an existential horror could play out. One famous scenario is the “Paperclip Maximizer” thought experiment, proposed by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paperclips as possible might go to extraordinary lengths, such as destroying factories or causing car crashes, to find raw materials.
In a less resource-intensive variation, an AI tasked with securing a reservation at a popular restaurant might shut down cellular networks and traffic signals to prevent other diners from getting a table.
Whether it’s office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at achieving its goals but dangerous because it doesn’t always align with the moral values of its creators. And in its most extreme form, the argument morphs into palpable fear that AI will enslave or destroy humanity.
Actual damage
Over the past few years, my colleagues and I at the Center for Applied Ethics at the University of Massachusetts Boston have been studying how our interactions with AI affect how people understand themselves, and we believe these catastrophic fears are overblown and misdirected.
To be sure, convincing deepfake videos and audio generated by AI are frightening and could be exploited by bad actors. In fact, it’s already happened: Russian operatives may have tried to embarrass Kremlin critic Bill Browder by engaging him in a conversation with an avatar of former Ukrainian President Petro Poroshenko. Cybercriminals are using AI voice clones for a variety of crimes, from high-tech heists to common fraud.
AI decision-making systems that approve loans or make hiring recommendations carry the risk of algorithmic bias, as the training data and decision-making models used to run them reflect long-standing societal biases.
These are big problems and require attention from policymakers, but they have been around for a while and are by no means catastrophic.
Not in the same league
A statement from the Center for AI Safety cites AI as a major risk to civilization, alongside pandemics and nuclear weapons. This comparison is problematic: COVID-19 has killed nearly seven million people worldwide, sparked a massive and ongoing mental health crisis, and created economic challenges, including chronic supply chain shortages and skyrocketing inflation.
Nuclear weapons claimed perhaps 200,000 lives in Hiroshima and Nagasaki in 1945, killed many more from cancer in the years that followed, generated deep existential anxiety for decades during the Cold War, and brought the world to the brink of catastrophe during the Cuban Missile Crisis in 1962. Nuclear weapons also changed the calculations of national leaders about how to respond to international aggression, as is currently playing out with Russia’s invasion of Ukraine.
AI is nowhere near acquiring the ability to do this kind of damage. The paperclip scenario and others like it are the stuff of science fiction. Existing AI applications perform specific tasks rather than making broad judgment calls. The technology is a long way from being able to determine and then plan the goals and subordinate goals needed to shut down traffic to get you a seat at a restaurant, or to blow up an auto factory for the sake of paper clips.
Not only does the technology lack the complex capabilities to make the multi-layered decisions required for such a scenario, it also lacks the ability to autonomously access sufficient portions of critical infrastructure to cause such damage.
What it means to be human
Indeed, there are existential risks inherent in using AI, but those risks are existential in a philosophical, not an apocalyptic, sense. AI in its current form has the potential to change the way people see themselves. It has the potential to diminish the capabilities and experiences people consider essential to being human.
For example, humans are judgment-making creatures. At work and in their leisure time, people rationally weigh particulars and make decisions every day about who to hire, who to lend to, what to watch, and so on. But these decisions are increasingly being automated and outsourced to algorithms. This won’t be the end of the world, but people will slowly lose the capacity to make these judgments themselves. The fewer decisions people make, the less capable they become at making them.
Or consider the role of chance in people’s lives. Humans value serendipity — stumbling upon and being drawn to places, people, and activities by chance, and then looking back later, appreciating the role that chance played in these meaningful discoveries. But the role of algorithmic recommendation engines is to reduce such serendipity, replacing it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities: this technology is in the process of eliminating the role of writing assignments in higher education, leaving educators without a vital tool to teach students how to think critically.
Not dead, but declining
In short, AI won’t destroy the world. But its increasingly uncritical acceptance in a wide variety of narrow contexts means that some of our most important human skills will be slowly eroded. Algorithms are already undermining our judgment, our ability to enjoy serendipity, and our ability to hone our critical thinking.
Humanity would survive such losses, but our way of existence would be impoverished in the process. The enormous anxiety surrounding the AI apocalypse, the singularity, Skynet, or however you think of it, obscures these more subtle costs. Recall the famous closing lines of T.S. Eliot’s “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
This article is republished from The Conversation, a nonprofit, independent news organization that provides facts and trusted analysis to help people make sense of a complex world. Author: Nir Eisikovits, University of Massachusetts Boston.
The Center for Applied Ethics at the University of Massachusetts Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits is a data ethics advisor for Hour25AI, a startup dedicated to reducing digital distractions.