With election season underway and artificial intelligence rapidly evolving, AI manipulation in political advertising is becoming a major concern for markets and the economy. A new report released Wednesday by Moody’s warns that generative AI and deepfakes are among the election integrity issues that could pose a risk to U.S. institutional credibility.
“In a likely close election, concerns are growing that AI deepfakes could be used to mislead voters, exacerbate divisions, and sow discord,” Moody’s vice president and analyst Gregory Sobel and senior vice president William Foster wrote. “If successful, disinformation agents could sway voters and influence the election outcome, ultimately influencing policy decisions and undermining the credibility of U.S. institutions.”
The government is stepping up its efforts against deepfakes. On May 22, Federal Communications Commission Chair Jessica Rosenworcel proposed new rules that would require disclosure of whether television, video and radio political ads use AI-generated content. The FCC has been concerned about the use of AI in advertising this election cycle, with Rosenworcel pointing to potential problems with deepfakes and other manipulated content.
Social media is not subject to FCC regulation, but the Federal Election Commission is also weighing broad AI disclosure rules that would apply across platforms. In a letter to Rosenworcel, the FEC urged the FCC to postpone any decision until after the election, arguing that because the changes would not be mandatory for all digital political ads, online ads lacking disclosures could mislead voters into assuming they contain no AI-generated content when in fact they might.
While the FCC proposal doesn’t fully cover social media, it paves the way for other agencies to regulate advertising in the digital world as the U.S. government builds a reputation as a strong regulator of AI content, and such rules could eventually extend to even more types of ads.
“This is a landmark decision that could change traditional media disclosures and advertising for political campaigns for years to come,” said Dan Ives, managing director and senior equity analyst at Wedbush Securities. “The concern is that you can’t put the genie back in the bottle, and there are a lot of unintended consequences to this decision.”
Some social media platforms have already voluntarily adopted some form of AI disclosure ahead of regulation. Meta, for example, requires AI disclosure for all ads and bans all new political ads for one week before the November election. Google requires disclosure for all political ads that contain altered content that “inaccurately portrays real people or events,” but does not require AI disclosure for all political ads.
Social media companies appear to be aggressively tackling this issue for good reason, as brands are concerned about contributing to the spread of misinformation at a critical time for the nation. Google and Facebook are expected to capture 47% of the $306.94 billion projected for U.S. digital ad spending in 2024. “This is the last thing big brands want as they focus on advertising amid a highly divisive election cycle and rampant AI-driven misinformation. It’s a very complicated time for online advertising,” Ives said.
Despite this self-policing, AI-manipulated content will still appear on platforms without labels, simply because of the sheer volume of content posted every day. Whether it’s AI-generated spam messages or floods of AI images, it is difficult to catch it all.
“A lack of industry standards and rapidly evolving technology make this effort difficult,” said Tony Adams, senior threat researcher at Secureworks’ Counter Threat Unit. “The good news is that these platforms are reporting success in policing the most harmful content on their sites through technical controls, ironically powered by AI.”
It’s easier than ever to create manipulated content: In May, Moody’s warned that deepfakes are “already being weaponized” by governments and nongovernmental actors as a means of propaganda, a spark for social unrest and, at worst, a tool of terrorism.
“Until recently, creating a convincing deepfake required advanced technical sophistication in specialized algorithms, computing resources, and time,” wrote Avi Srivastava, assistant vice president at Moody’s Ratings. “With the advent of easily accessible and affordable Gen AI tools, creating a sophisticated deepfake can be completed in minutes. This accessibility, combined with the limitations of existing social media safeguards against the spread of manipulated content, creates fertile ground for widespread deepfake abuse.”
This election cycle, deepfake audio was used in robocalls during the New Hampshire presidential primary.
Potential silver linings are the decentralized nature of the U.S. election system, existing cybersecurity policies, and general knowledge of looming cyber threats, which Moody’s said could provide some protection. State and local governments have enacted measures to further block deepfakes and unlabeled AI content, but some state legislatures have slowed the process due to free speech laws and concerns about thwarting technological advances.
As of February, 50 AI-related bills, including ones focused on deepfakes, were being introduced in state legislatures every week, according to Moody’s. Thirteen states have enacted laws targeting election interference and deepfakes, eight of them since January.
Moody’s noted that the United States is vulnerable to cyber risks, ranking 10th out of 192 countries on the United Nations’ e-Government Development Index.
According to Moody’s, the public’s perception that deepfakes have the power to influence political outcomes, even without specific examples, “is enough to undermine public confidence in the electoral process and the reliability of government institutions, which is a credit risk.” The more anxious the public is about distinguishing fact from fiction, the greater the risk that they will disengage and distrust their government. “Such trends are negative for trust, could lead to increased political and societal risks, and could undermine the effectiveness of government institutions,” Moody’s wrote.
“Law enforcement and the FCC’s response may deter other domestic actors from using AI to mislead voters,” Secureworks’ Adams said. “But foreign actors will undoubtedly continue to misuse generative AI tools and systems to interfere in American politics, as they have done for years. The message to voters is: stay calm, stay vigilant, and vote.”