- A faulty CrowdStrike software update caused a global outage of Microsoft Windows systems and disrupted many industries.
- Experts have warned that the incident highlights vulnerabilities in technology systems and potential risks from AI.
- Experts say government regulation and sustained investment in security are key to mitigating those risks.
The massive IT outage that has rocked businesses around the world has highlighted how deeply intertwined society and the systems that underpin it are with Big Tech, and how a single failure can cause widespread disruption.
It also exposes vulnerabilities in these systems, raising questions such as: Do big tech companies deserve our trust to adequately secure a technology as powerful as AI?
The software issue, which emerged during an update pushed by cybersecurity firm CrowdStrike on Friday, knocked Microsoft Windows systems offline and disrupted airlines, banks, retailers, emergency services, and healthcare organizations around the world. CrowdStrike said a fix had been deployed, but many systems remained offline as of Friday, and companies were struggling to bring services back online, some of which required manual updates.
Gary Marcus, an AI researcher and founder of Geometric Intelligence, a machine-learning AI startup that was acquired by Uber in 2016, told Business Insider that the Microsoft-CrowdStrike outage should serve as a “wake-up call” for consumers, and that the impact of a similar issue with AI would be 10 times greater.
“If a single bug can take down airlines, banks, retailers, media, etc., how can we possibly think we’re ready for AGI?” Marcus wrote in a post on X.
AGI, or artificial general intelligence, refers to a version of AI that can match human capabilities such as reasoning and judgment; OpenAI co-founder John Schulman has previously predicted it could arrive within just a few years.
Marcus, a longtime critic of OpenAI, told BI that the current approach to deploying AI is problematic and that consumers are handing enormous power over to big tech companies and their AI systems.
Dan O’Dowd, founder of the safety advocacy group The Dawn Project, which has campaigned against Tesla’s self-driving system, told BI that the CrowdStrike-Microsoft outage is a reminder that critical infrastructure is neither secure nor trustworthy enough. He said that in the rush to get products to market, big tech companies evaluate systems by whether they “work pretty well most of the time.”
When it comes to AI, some of this is already apparent.
Over the past six months, companies across the board have released a plethora of AI products and services, some of which are beginning to change the way people work. Along the way, though, hallucination-prone AI models have churned out some high-profile errors, such as Google’s AI Overviews telling users to put glue on pizza and image generators producing inaccurate portrayals of historical figures.
Companies have also alternated between announcing flashy new products and then postponing or reversing them because they weren’t ready or problems emerged in the public release. OpenAI, Microsoft, Google and Adobe have all delayed or reversed AI product offerings this year as the AI race intensified.
While these mistakes and product delays may not seem like a big deal now, the potential risks could become far more serious as the technology advances.
A risk assessment report on AI commissioned by the US State Department and published earlier this year found a high risk that AI will be weaponized, whether through biological weapons, large-scale cyberattacks, disinformation campaigns, or autonomous robots. That could lead to “catastrophic risks,” including the extinction of the human race, the report said.
Javad Abed, an assistant professor of information systems at the Johns Hopkins Carey Business School, told Business Insider that incidents like the Microsoft-CrowdStrike outage keep occurring because companies still view cybersecurity as “a cost, not a necessary investment.” He said big tech companies should maintain alternative vendors and adopt a defense-in-depth strategy.
“Investing an additional million dollars in such a critical aspect of cybersecurity is far more prudent than facing millions of dollars in losses later,” Abed said, “with the attendant damage to reputation and customer trust.”
According to a 2023 survey by the Brookings Institution, a nonprofit public policy institute, public trust in major institutions has steadily declined over the past five years, and the drop has been particularly pronounced in the technology sector. Major technology companies such as Facebook, Amazon, and Google have seen the steepest declines, with trust ratings falling by an average of 13 to 18 percent, the survey found.
That trust will continue to be tested as both consumers and corporate employees affected by IT outages face the reality that a botched software update could bring things to a screeching halt.
Sanjay Patnaik, director of the Brookings Institution’s Center on Regulation and Markets, told BI that governments have failed to adequately regulate social media and AI, saying the technology could pose a national security threat if proper safeguards are not put in place.
Patnaik said big tech companies have had “free rein” and that “today, companies are starting to realize that.”
Marcus agreed that companies can’t build trustworthy infrastructure on their own, and said the outage is a reminder that “if we leave AI systems unregulated, they’ll either stumble or fail.”